Similarly to vision loss, hearing loss can range from a partial to a complete inability to detect some or all of the frequencies of sound that members of a species can typically hear. For humans, this range is approximately 20 Hz to 20 kHz at about 6.5 dB, although a 10 dB correction is often allowed for the elderly. Primary causes of hearing loss due to an impaired sensory system include long-term exposure to environmental noise, which can damage the mechanoreceptors responsible for receiving sound vibrations, and diseases such as HIV or meningitis, which damage the cochlea and auditory nerve, respectively.
Hearing loss may be gradual or sudden, and may range from very mild, causing minor difficulties with conversation, to complete deafness. The speed with which hearing loss occurs may give clues as to the cause. Sudden hearing loss may result from trauma or a problem with blood circulation, whereas a gradual onset is suggestive of other causes such as aging or a tumor. Associated neurological problems, such as tinnitus or vertigo, may indicate a problem with the nerves in the ear or brain. Hearing loss may be unilateral or bilateral. Unilateral hearing loss is most often associated with conductive causes, trauma, and acoustic neuromas. Pain in the ear is associated with ear infections, trauma, and obstruction in the canal.
Based on clinical testing of subjects with auditory neuropathy, the disruption in the stream of sound information has been localized to one or more of three probable sites: the inner hair cells of the cochlea, the synapse between the inner hair cells and the auditory nerve, or the ascending auditory nerve itself.
Auditory neuropathy (AN) is a variety of hearing loss in which the outer hair cells within the cochlea are present and functional, but sound information is not faithfully transmitted to the auditory nerve and brain. It is also known as auditory neuropathy/auditory dys-synchrony (AN/AD) or auditory neuropathy spectrum disorder (ANSD).
A neuropathy usually refers to a disease of the peripheral nerve or nerves, but the auditory nerve itself is not always affected in auditory neuropathy spectrum disorders.
Anosmia is the inability to perceive odor, or in other words a lack of functioning olfaction. Anosmia may be unilateral or bilateral.
A temporary loss of smell can be caused by a blocked nose or infection. In contrast, a permanent loss of smell may be caused by death of olfactory receptor neurons in the nose or by brain injury in which there is damage to the olfactory nerve or damage to brain areas that process smell. The lack of the sense of smell at birth, usually due to genetic factors, is referred to as congenital anosmia.
The diagnosis of anosmia, as well as the degree of impairment, can now be tested much more efficiently and effectively than before, thanks to commercially available "smell testing kits" and to screening tests that use materials most clinics already have on hand.
Many cases of congenital anosmia remain unreported and undiagnosed. Since the disorder is present from birth, the individual may have little or no understanding of the sense of smell and is therefore unaware of the deficit.
Cortical deafness is a rare form of sensorineural hearing loss caused by damage to the primary auditory cortex. It is an auditory disorder in which the patient is unable to hear sounds despite having no apparent damage to the anatomy of the ear (see auditory system), and it can be thought of as the combination of auditory verbal agnosia and auditory agnosia. Patients with cortical deafness cannot hear any sounds; that is, they are not aware of sounds, including non-speech sounds, voices, and speech. Although patients appear and feel completely deaf, they can still exhibit some reflex responses, such as turning their head towards a loud sound.
Cortical deafness is caused by bilateral cortical lesions in the primary auditory cortex, located in the temporal lobes of the brain. The ascending auditory pathways are damaged, causing a loss of perception of sound; inner ear function, however, remains intact. Cortical deafness is most often caused by stroke, but can also result from brain injury or birth defects. More specifically, a common cause is bilateral embolic stroke to the area of Heschl's gyri. Cortical deafness is extremely rare, with only twelve reported cases. Each case has a distinct context and different rates of recovery.
It is thought that cortical deafness could be a part of a spectrum of an overall cortical hearing disorder. In some cases, patients with cortical deafness have had recovery of some hearing function, resulting in partial auditory deficits such as auditory verbal agnosia. This syndrome might be difficult to distinguish from a bilateral temporal lesion such as described above.
Auditory fatigue is defined as a temporary loss of hearing after exposure to sound. This results in a temporary shift of the auditory threshold, known as a "temporary threshold shift" (TTS). The damage can become permanent (a permanent threshold shift, PTS) if sufficient recovery time is not allowed before continued sound exposure. When the hearing loss results from a traumatic occurrence, it may be classified as noise-induced hearing loss (NIHL).
There are two main types of auditory fatigue, short-term and long-term. These are distinguished from each other by the characteristics listed below; a simple classification sketch follows the two lists.
Short-term fatigue
- full recovery from TTS can be achieved in approximately two minutes
- the TTS is relatively independent of exposure duration
- TTS is maximal at the exposure frequency of the sound
Long-term fatigue
- recovery requires a minimum of several minutes but can take up to several days
- dependent on exposure duration and noise level
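A minimal sketch, assuming only the recovery-time distinction given in the two lists above; the function name, inputs, and the exact cutoff for "several days" are illustrative placeholders rather than clinical definitions.

```python
def classify_auditory_fatigue(recovery_minutes):
    """Classify a temporary threshold shift (TTS) by recovery time, following the
    short-term vs long-term distinction listed above (boundaries are approximate)."""
    if recovery_minutes <= 2:
        return "short-term fatigue (full recovery within about two minutes)"
    if recovery_minutes <= 60 * 24 * 2:  # up to roughly a couple of days (assumed upper bound)
        return "long-term fatigue (recovery from several minutes up to several days)"
    return "recovery unusually prolonged: possible permanent threshold shift (PTS)"

print(classify_auditory_fatigue(1.5))   # short-term fatigue
print(classify_auditory_fatigue(240))   # long-term fatigue
```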
Primary symptoms:
- sounds or speech becoming dull, muffled or attenuated
- need for increased volume on television, radio, music and other audio sources
- difficulty using the telephone
- loss of directionality of sound
- difficulty understanding speech, especially that of women and children
- difficulty in speech discrimination against background noise (cocktail party effect)
Secondary symptoms:
- hyperacusis, heightened sensitivity to certain volumes and frequencies of sound, resulting from "recruitment"
- tinnitus, ringing, buzzing, hissing or other sounds in the ear when no external sound is present
- vertigo and disequilibrium
Presbycusis usually occurs after age 50, but deterioration in hearing has been found to start very early, from about the age of 18 years. The ISO 7029 standard shows expected threshold changes due purely to age for carefully screened populations (i.e. excluding those with ear disease, noise exposure, etc.), based on a meta-analysis of published data. Age affects high frequencies more than low, and men more than women. One early consequence is that even young adults may lose the ability to hear very high frequency tones above 15 or 16 kHz. Despite this, age-related hearing loss may only become noticeable later in life. The effects of age can be exacerbated by exposure to environmental noise, whether at work or in leisure time (shooting, music, etc.). This is noise-induced hearing loss (NIHL) and is distinct from presbycusis. A second exacerbating factor is exposure to ototoxic drugs and chemicals.
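The ISO 7029 model describes the median age-related threshold shift as growing approximately with the square of the number of years beyond age 18, with separate coefficients for each frequency and sex. The sketch below illustrates that general shape only; the coefficients are invented placeholders, not values taken from the standard.

```python
def median_age_shift_db(age_years, coefficient_db_per_year2):
    """Illustrative ISO 7029-style model: the median threshold shift grows roughly
    quadratically beyond age 18. The coefficient is a placeholder, NOT a value
    from the standard."""
    years = max(0.0, age_years - 18.0)
    return coefficient_db_per_year2 * years ** 2

# Placeholder coefficients chosen only to show that high frequencies
# deteriorate faster with age than low frequencies.
for freq_hz, coeff in [(1000, 0.003), (4000, 0.010), (8000, 0.018)]:
    print(freq_hz, "Hz:", round(median_age_shift_db(70, coeff), 1), "dB shift at age 70")
```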
Over time, the detection of high-pitched sounds becomes more difficult, and speech perception is affected, particularly of sibilants and fricatives. Patients typically report a decreased ability to understand speech. Once the loss has progressed to the 2-4 kHz range, there is increased difficulty understanding consonants. Both ears tend to be affected. The impact of presbycusis on communication depends on both the severity of the condition and the communication partner.
Presbycusis (also spelled presbyacusis, from Greek "presbys" "old" + "akousis" "hearing"), or age-related hearing loss, is the cumulative effect of aging on hearing. It is a progressive and irreversible bilateral symmetrical age-related sensorineural hearing loss resulting from degeneration of the cochlea or associated structures of the inner ear or auditory nerves. The hearing loss is most marked at higher frequencies. Hearing loss that accumulates with age but is caused by factors other than normal aging (nosocusis and sociocusis) is not presbycusis, although differentiating the individual effects of distinct causes of hearing loss can be difficult.
The cause of presbycusis is a combination of genetics, cumulative environmental exposures and pathophysiological changes related to aging. At present there are no preventative measures known; treatment is by hearing aid or surgical implant.
Presbycusis is the most common cause of hearing loss, afflicting one out of three persons by age 65, and one out of two by age 75. Presbycusis is the second most common illness next to arthritis in aged people.
Many vertebrates such as fish, birds and amphibians do not suffer presbycusis in old age as they are able to regenerate their cochlear sensory cells, whereas mammals including humans have genetically lost this regenerative ability.
Spatial hearing loss refers to a form of deafness involving an inability to use spatial cues to determine where a sound originates. This in turn affects the ability to understand speech in the presence of background noise.
With continued exposure, a temporary threshold shift (TTS) can imperceptibly give way to a permanent threshold shift (PTS).
In addition to hearing loss, other external symptoms of an acoustic trauma can include:
- Tinnitus
- Some pain in the ear
- Hyperacusis
- Dizziness or vertigo, in the case of vestibular damage in the inner ear
The first symptom of noise-induced hearing loss is usually difficulty hearing a conversation against a noisy background. The effect of hearing loss on speech perception has two components. The first component is the loss of audibility, which is something like a decrease in overall volume; modern hearing aids compensate for this loss with amplification. The second component, difficulty in understanding speech, represents selective frequency loss, for which hearing aids and amplification do not help. This is known by different names such as "distortion," "clarity loss," and "signal-to-noise-ratio (SNR) loss." Consonants, because of their higher frequency, tend to be lost first. For example, the sounds "s" and "t" are commonly difficult to hear for people with hearing loss because they are among the highest-frequency sounds in the language. Hearing loss can affect either one or both ears. When one ear is affected, it causes problems with directional hearing, the ability to determine from which direction a sound came. Lacking this ability can cause confusion for individuals who have hearing loss in one ear.
Amblyaudia (amblyos-, blunt; audia-, hearing) is a term coined by Dr. Deborah Moncrieff of the University of Pittsburgh to characterize a specific pattern of performance on dichotic listening tests. Dichotic listening tests are widely used to assess binaural integration, a type of auditory processing skill. During the tests, individuals are asked to identify different words presented simultaneously to the two ears. Normal listeners can identify the words fairly well and show a small difference between the two ears, with one ear slightly dominant over the other. For the majority of listeners, this small difference is referred to as a "right-ear advantage" because their right ear performs slightly better than their left ear. Some normal individuals, however, produce a "left-ear advantage" during dichotic tests, and others perform at equal levels in the two ears. Amblyaudia is diagnosed when the scores from the two ears are significantly different, with the individual's dominant-ear score much higher than the non-dominant-ear score.
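A hedged sketch of how an interaural asymmetry might be computed from dichotic listening scores. The function names are illustrative, and the 20-point cutoff for a "significant difference" is a placeholder, since real norms are age- and test-specific.

```python
def ear_advantage(right_correct, left_correct, total_items):
    """Return percent-correct scores for each ear and their difference on a dichotic test."""
    right_pct = 100.0 * right_correct / total_items
    left_pct = 100.0 * left_correct / total_items
    return right_pct, left_pct, right_pct - left_pct

def flag_amblyaudia(right_pct, left_pct, cutoff_pct=20.0):
    """Flag the pattern described above: dominant-ear score much higher than non-dominant.
    The 20-point cutoff is an illustrative placeholder, not a published norm."""
    return abs(right_pct - left_pct) >= cutoff_pct

r, l, diff = ear_advantage(right_correct=46, left_correct=22, total_items=50)
print(diff, flag_amblyaudia(r, l))  # 48.0 True
```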
Researchers interested in understanding the neurophysiological underpinnings of amblyaudia consider it a brain-based hearing disorder that may be inherited or may result from auditory deprivation during critical periods of brain development. Individuals with amblyaudia have normal hearing sensitivity (in other words, they can hear soft sounds) but have difficulty hearing in noisy environments like restaurants or classrooms. Even in quiet environments, individuals with amblyaudia may fail to understand what they are hearing, especially if the information is new or complicated. Amblyaudia can be conceptualized as the auditory analog of the better-known central visual disorder amblyopia. The term "lazy ear" has been used to describe amblyaudia, although it is currently not known whether it stems from deficits in the auditory periphery (middle ear or cochlea), from other parts of the auditory system in the brain, or both. A characteristic of amblyaudia is suppression of activity in the non-dominant auditory pathway by activity in the dominant pathway, which may be genetically determined and which could also be exacerbated by conditions during early development.
Since cortical deafness and auditory agnosia have many similarities, diagnosing the disorder proves to be difficult. Bilateral lesions near the primary auditory cortex in the temporal lobe are important criteria. Cortical deafness requires demonstration that brainstem auditory responses are normal but cortical evoked potentials are impaired. Brainstem auditory evoked potentials (BAEP), also referred to as brainstem auditory evoked responses (BAER), show the neuronal activity in the auditory nerve, cochlear nucleus, superior olive, and inferior colliculus of the brainstem. They typically have a response latency of no more than six milliseconds with an amplitude of approximately one microvolt. The latency of the responses gives critical information: in cortical deafness, LLRs (long-latency responses) are completely abolished and MLRs (middle-latency responses) are either abolished or significantly impaired, whereas in auditory agnosia, LLRs and MLRs are preserved.
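The diagnostic pattern just described can be summarised as a small decision sketch. The labels mirror the text above; the function, its inputs, and the 'normal'/'impaired'/'abolished' values are illustrative and not an automated diagnostic tool.

```python
def interpret_evoked_potentials(baep, mlr, llr):
    """baep, mlr, llr each take one of: 'normal', 'impaired', 'abolished'.
    Mirrors the diagnostic pattern described above; purely illustrative."""
    if baep != "normal":
        return "abnormal brainstem responses: peripheral or brainstem involvement, not cortical deafness"
    if llr == "abolished" and mlr in ("abolished", "impaired"):
        return "pattern consistent with cortical deafness (normal BAEP, cortical potentials lost)"
    if llr == "normal" and mlr == "normal":
        return "pattern more consistent with auditory agnosia (cortical potentials preserved)"
    return "indeterminate pattern; further testing needed"

print(interpret_evoked_potentials("normal", "abolished", "abolished"))
```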
Another important aspect of cortical deafness that is often overlooked is that patients "feel" deaf: they are aware of their inability to hear environmental sounds, non-speech sounds, and speech. Patients with auditory agnosia, by contrast, can be unaware of their deficit and insist that they are not deaf. Verbal deafness and auditory agnosia are disorders of a selective, perceptive and associative nature, whereas cortical deafness results from the anatomic and functional disconnection of the auditory cortex from acoustic impulses.
Children with amblyaudia experience difficulties in speech perception (particularly in noisy environments), sound localization, and binaural unmasking (using interaural cues to hear better in noise), despite having normal hearing sensitivity as indexed by pure tone audiometry. These symptoms may lead to difficulty attending to auditory information, causing many to speculate that language acquisition and academic achievement may be deleteriously affected in children with amblyaudia. A significant deficit in the ability to use and comprehend expressive language may be seen in children who lacked auditory stimulation throughout the critical periods of auditory system development. A child with amblyaudia may have trouble with appropriate vocabulary comprehension and production and with the use of past, present and future tenses. Amblyaudia has been diagnosed in many children with reported difficulties understanding and learning from listening, and adjudicated adolescents are at significantly elevated risk for amblyaudia (Moncrieff et al., 2013, Seminars in Hearing).
Sensorineural hearing loss (SNHL) is a type of hearing loss, or deafness, in which the root cause lies in the inner ear or sensory organ (the cochlea and associated structures), the vestibulocochlear nerve (cranial nerve VIII), or the neural part of the auditory system. SNHL accounts for about 90% of reported hearing loss. It is generally permanent and can be mild, moderate, severe, profound, or total. Various other descriptors can be used, such as high frequency, low frequency, U-shaped, notched, peaked or flat, depending on the shape of the audiogram, the measure of hearing.
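Severity labels such as mild, moderate, severe and profound are conventionally assigned from the pure-tone average of thresholds at the speech frequencies. The sketch below is a minimal illustration using commonly cited dB HL ranges; the exact boundaries vary between classification schemes and should be read as assumptions.

```python
def grade_hearing_loss(thresholds_db_hl):
    """Grade severity from a pure-tone average (PTA) of thresholds, e.g. at 0.5, 1, 2 and 4 kHz.
    The dB ranges below are commonly cited but differ between schemes (illustrative only)."""
    pta = sum(thresholds_db_hl) / len(thresholds_db_hl)
    if pta <= 25:
        return pta, "within normal limits"
    if pta <= 40:
        return pta, "mild"
    if pta <= 60:
        return pta, "moderate"
    if pta <= 80:
        return pta, "severe"
    return pta, "profound"

print(grade_hearing_loss([45, 50, 60, 65]))  # (55.0, 'moderate')
```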
"Sensory" hearing loss often occurs as a consequence of damaged or deficient cochlear hair cells. Hair cells may be abnormal at birth, or damaged during the lifetime of an individual. There are both external causes of damage, including noise trauma, infection and ototoxic drugs, as well as intrinsic causes, including genetic mutations. A common cause or exacerbating factor in sensory hearing loss is prolonged exposure to environmental noise, for example, being in a loud workplace without wearing protection, or having headphones set to high volumes for a long period. Exposure to a very loud noise such as a bomb blast can cause noise-induced hearing loss.
"Neural", or 'retrocochlear', hearing loss occurs because of damage to the cochlear nerve (CVIII). This damage may affect the initiation of the nerve impulse in the cochlear nerve or the transmission of the nerve impulse along the nerve into the brainstem.
Most cases of SNHL present with a gradual deterioration of hearing thresholds occurring over years to decades. In some cases, the loss may eventually affect large portions of the frequency range. It may be accompanied by other symptoms such as ringing in the ears (tinnitus) and dizziness or lightheadedness (vertigo). SNHL may be genetically inherited or acquired as a result of external causes such as noise or disease. It may be congenital (present at birth) or develop later in life. The most common kind of sensorineural hearing loss is age-related (presbycusis), followed by noise-induced hearing loss (NIHL).
Frequent symptoms of SNHL are loss of acuity in distinguishing foreground voices against noisy backgrounds, difficulty understanding speech on the telephone, some kinds of sounds seeming excessively loud or shrill (recruitment), difficulty understanding some parts of speech (fricatives and sibilants), loss of directionality of sound (especially for high-frequency sounds), the perception that people mumble when speaking, and general difficulty understanding speech. Similar symptoms are also associated with other kinds of hearing loss; audiometry or other diagnostic tests are necessary to distinguish sensorineural hearing loss.
Identification of sensorineural hearing loss is usually made by performing pure tone audiometry (an audiogram) in which bone conduction thresholds are measured. Tympanometry and speech audiometry may be helpful. Testing is performed by an audiologist.
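As an illustration of why bone conduction thresholds matter here, the sketch below compares air- and bone-conduction thresholds at a single frequency: a sensorineural pattern shows elevated thresholds with little air-bone gap, whereas a conductive pattern shows normal bone conduction with a large gap. The 20 dB "normal" limit and 10 dB gap cutoff are common rules of thumb used as assumptions, not a clinical protocol.

```python
def classify_loss(air_db, bone_db, normal_limit_db=20, gap_limit_db=10):
    """Rough single-frequency classification from air- and bone-conduction thresholds (dB HL).
    The cutoffs are illustrative assumptions, not a diagnostic standard."""
    air_bone_gap = air_db - bone_db
    if air_db <= normal_limit_db:
        return "within normal limits"
    if air_bone_gap <= gap_limit_db:
        return "sensorineural pattern (elevated thresholds, no significant air-bone gap)"
    if bone_db <= normal_limit_db:
        return "conductive pattern (normal bone conduction, significant air-bone gap)"
    return "mixed pattern"

print(classify_loss(air_db=55, bone_db=50))  # sensorineural pattern
print(classify_loss(air_db=55, bone_db=10))  # conductive pattern
```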
There is no proven or recommended treatment or cure for SNHL; management of hearing loss is usually by hearing strategies and hearing aids. In cases of profound or total deafness, a cochlear implant is a specialised device that may restore a functional level of hearing. SNHL is at least partially preventable by avoiding environmental noise, ototoxic chemicals and drugs, and head trauma, and by treating or inoculating against certain triggering diseases and conditions such as meningitis.
Sudden sensorineural hearing loss (SSHL) is diagnosed via pure tone audiometry. If the test shows a loss of at least 30 dB in three adjacent frequencies, the hearing loss is diagnosed as SSHL. For example, a hearing loss of 30 dB would make conversational speech sound more like a whisper.
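A minimal sketch of this criterion, assuming the audiogram is supplied as threshold shifts (in dB) relative to baseline at a set of test frequencies; the frequency list, function name, and default parameters are illustrative, not part of any clinical standard.

```python
# Illustrative check of the criterion described above:
# a loss of at least 30 dB at three adjacent audiometric frequencies.
STANDARD_FREQUENCIES_HZ = [250, 500, 1000, 2000, 4000, 8000]  # assumed test frequencies

def meets_sshl_criterion(loss_db_by_freq, min_loss_db=30, run_length=3):
    """loss_db_by_freq maps frequency (Hz) -> hearing loss in dB relative to baseline."""
    losses = [loss_db_by_freq.get(f, 0) for f in STANDARD_FREQUENCIES_HZ]
    consecutive = 0
    for loss in losses:
        consecutive = consecutive + 1 if loss >= min_loss_db else 0
        if consecutive >= run_length:
            return True
    return False

# Example: drops of 35-40 dB at 1, 2 and 4 kHz meet the criterion.
print(meets_sshl_criterion({1000: 35, 2000: 40, 4000: 35}))  # True
```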
Auditory verbal agnosia can be referred to as a pure aphasia because it has a high degree of specificity. Despite an inability to comprehend speech, patients with auditory verbal agnosia typically retain the ability to hear and process non-speech auditory information, speak, read and write. This specificity suggests that there is a separation between speech perception, non-speech auditory processing, and central language processing. In support of this theory, there are cases in which speech and non-speech processing impairments have responded differentially to treatment. For example, some therapies have improved writing comprehension in patients over time, while speech remained critically impaired in those same patients.
The term "pure word deafness" is something of a misnomer. By definition, individuals with pure word deafness are not deaf – in the absence of other impairments, these individuals have normal hearing for all sounds, including speech. The term "deafness" originates from the fact that individuals with AVA are unable to "comprehend" speech that they hear. The term "pure word" refers to the fact that comprehension of verbal information is selectively impaired in AVA. For this reason, AVA is distinct from other auditory agnosias in which the recognition of nonspeech sounds is impaired. Classical (or pure) auditory agnosia is an inability to process environmental sounds. Interpretive or receptive agnosia (amusia) is an inability to understand music.
Patients with pure word deafness complain that speech sounds simply do not register, or that they tend not to come up. Other claims include speech sounding as if it were in a foreign language, the words having a tendency to run together, or the feeling that speech was simply not connected to the patient's voice.
Auditory agnosia is a form of agnosia that manifests itself primarily in the inability to recognize or differentiate between sounds. It is not a defect of the ear or "hearing", but a neurological inability of the brain to process sound meaning. It is a disruption of the "what" pathway in the brain. Persons with auditory agnosia can physically hear the sounds and describe them using unrelated terms, but are unable to recognize them. They might describe the sound of some environmental sounds, such as a motor starting, as resembling a lion roaring, but would not be able to associate the sound with "car" or "engine", nor would they say that it "was" a lion creating the noise. Auditory agnosia is caused by damage to the secondary and tertiary auditory cortex of the temporal lobe of the brain.
Auditory verbal agnosia (AVA), also known as pure word deafness, is the inability to comprehend speech. Individuals with this disorder lose the ability to understand language, repeat words, and write from dictation. Some patients with AVA describe hearing spoken language as meaningless noise, often as though the person speaking was doing so in a foreign language. However, spontaneous speaking, reading, and writing are preserved. The maintenance of the ability to process non-speech auditory information, including music, also remains relatively more intact than spoken language comprehension. Individuals who exhibit pure word deafness are also still able to recognize non-verbal sounds. The ability to interpret language via lip reading, hand gestures, and context clues is preserved as well. Sometimes, this agnosia is preceded by cortical deafness; however, this is not always the case. Researchers have documented that in most patients exhibiting auditory verbal agnosia, the discrimination of consonants is more difficult than that of vowels, but as with most neurological disorders, there is variation among patients.
Auditory verbal agnosia (AVA) is not the same as auditory agnosia; patients with (nonverbal) auditory agnosia have a relatively more intact speech comprehension system despite their impaired recognition of nonspeech sounds.
Due to variations in study designs, data on the course of tinnitus showed few consistent results. Generally the prevalence increased with age in adults, whereas the ratings of annoyance decreased with duration.
Tinnitus can be perceived in one or both ears or in the head. It is the description of a noise inside a person’s head in the absence of auditory stimulation. The noise can be described in many different ways.
It is usually described as a ringing noise but, in some patients, it takes the form of a high-pitched whining, electric buzzing, hissing, humming, tinging or whistling sound or as ticking, clicking, roaring, "crickets" or "tree frogs" or "locusts (cicadas)", tunes, songs, beeping, sizzling, sounds that slightly resemble human voices or even a pure steady tone like that heard during a hearing test. It has also been described as a "whooshing" sound because of acute muscle spasms, as of wind or waves. Tinnitus can be intermittent or continuous: in the latter case, it can be the cause of great distress. In some individuals, the intensity can be changed by shoulder, head, tongue, jaw or eye movements. Most people with tinnitus have some degree of hearing loss.
The sound perceived may range from a quiet background noise to one that can be heard even over loud external sounds. The specific type of tinnitus called pulsatile tinnitus is characterized by hearing the sounds of one's own pulse or muscle contractions, which is typically a result of sounds that have been created by the movement of muscles near to one's ear, or the sounds are related to blood flow of the neck or face.
Visual agnosia is a broad category that refers to a deficiency in the ability to recognize visual objects. Visual agnosia can be further subdivided into two different subtypes: apperceptive visual agnosia and associative visual agnosia.
Individuals with apperceptive visual agnosia display the ability to see contours and outlines when shown an object, but they experience difficulty if asked to categorize objects. Apperceptive visual agnosia is associated with damage to one hemisphere, specifically damage to the posterior sections of the right hemisphere.
In contrast, individuals with associative visual agnosia experience difficulty when asked to name objects. Associative agnosia is associated with damage to both the right and left hemispheres at the occipitotemporal border. A specific form of associative visual agnosia is known as prosopagnosia. Prosopagnosia is the inability to recognize faces. For example, these individuals have difficulty recognizing friends, family and coworkers. However, individuals with prosopagnosia can recognize all other types of visual stimuli.
Agnosia is the inability to process sensory information. Often there is a loss of the ability to recognize objects, persons, sounds, shapes, or smells even though the specific sense is not defective and there is no significant memory loss. It is usually associated with brain injury or neurological illness, particularly after damage to the occipitotemporal border, which is part of the ventral stream. Agnosia affects only a single modality, such as vision or hearing. More recently, a top-down disruption of the handling of perceptual information has been proposed as a cause.
There are three primary distinctions of auditory agnosia that fall into two categories.
Conduction aphasics will show relatively well-preserved auditory comprehension, which may even be completely functional. Spontaneous speech production will be fluent and generally grammatically and syntactically correct. Intonation and articulation will also be preserved. Speech will often contain paraphasic errors: phonemes and syllables will be dropped or transposed (e.g., "snowball" → "snowall", "television" → "vellitision", "ninety-five percent" → "ninety-twenty percent"). The hallmark deficit of this disorder, however, is in repetition: patients show a marked inability to repeat words or sentences when prompted by an examiner. After hearing a sentence, a person with conduction aphasia will be able to paraphrase it accurately but will not be able to repeat it, possibly because their "motor speech error processing is disrupted by inaccurate forward predictions, or because detected errors are not translated into corrective commands due to damage to the auditory-motor interface". When prompted to repeat words, patients will be unable to do so and will produce many paraphasic errors. For example, when prompted with "bagger", a patient may respond with "gabber". Oral reading can also be poor.
However, patients recognize their paraphasias and errors and will try to correct them, with multiple attempts often necessary for success. This recognition is due to preserved auditory error detection mechanisms. Error sequences frequently fit a pattern of incorrect approximations featuring known morphemes that (a) share one or more similarly located phonemes but (b) differ in at least one aspect that makes the substituted morpheme(s) semantically distinct. This repetitive effort to approximate the appropriate word or phrase is known as "conduite d'approche". For example, when prompted to repeat "Rosenkranz", a German-speaking patient may respond with, "rosenbrau... rosenbrauch... rosengrau... bro... grosenbrau... grossenlau, rosenkranz... kranz... rosenkranz".
Conduction aphasia is a relatively mild language impairment, and most patients return to day-to-day life. Symptoms of conduction aphasia, as with other aphasias, can be transient, lasting only several hours or a few days. As aphasias and other language disorders are frequently due to stroke, their symptoms can change and evolve over time, or simply disappear. This is due to healing in the brain after inflammation or hemorrhage, which leads to decreased local impairment. Furthermore, plastic changes in the brain may lead to the recruitment of new pathways to restore lost function. For example, the right hemisphere speech systems may learn to correct for left-hemisphere damage. However, chronic conduction aphasia is possible, without transformation to other aphasias. These patients show prolonged, profound deficits in repetition, frequent phonemic paraphasias, and "conduite d'approche" during spontaneous speech.