Auditory neuropathy (AN) is a form of hearing loss in which the outer hair cells within the cochlea are present and functional, but sound information is not faithfully transmitted to the auditory nerve and brain. It is also known as auditory neuropathy/auditory dys-synchrony (AN/AD) or auditory neuropathy spectrum disorder (ANSD).
A neuropathy usually refers to a disease of the peripheral nerve or nerves, but the auditory nerve itself is not always affected in auditory neuropathy spectrum disorders.
Based on clinical testing of subjects with auditory neuropathy, the disruption in the stream of sound information has been localized to one or more of three probable locations: the inner hair cells of the cochlea, the synapse between the inner hair cells and the auditory nerve, or a lesion of the ascending auditory nerve itself.
Amblyaudia (amblyos- blunt; audia- hearing) is a term coined by Dr. Deborah Moncrieff of the University of Pittsburgh to characterize a specific pattern of performance on dichotic listening tests. Dichotic listening tests are widely used to assess individuals for binaural integration, a type of auditory processing skill. During the tests, individuals are asked to identify different words presented simultaneously to the two ears. Normal listeners can identify the words fairly well and show a small difference between the two ears, with one ear slightly dominant over the other. For the majority of listeners, this small difference is referred to as a "right-ear advantage" because their right ear performs slightly better than their left ear. But some normal individuals produce a "left-ear advantage" during dichotic tests, and others perform at equal levels in the two ears. Amblyaudia is diagnosed when the scores from the two ears are significantly different, with the score in the individual's dominant ear much higher than the score in the non-dominant ear.
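As a rough illustration of how such a dichotic result might be summarized, the sketch below computes the interaural asymmetry from two ear scores. The function, the percentage score scale, and the 20-point cutoff are hypothetical placeholders, not clinical norms.

```python
# Illustrative sketch only: summarizes a dichotic listening result as an
# interaural asymmetry. Score scale and the 20-point cutoff are hypothetical
# placeholders, not clinical diagnostic criteria.

def ear_advantage(right_ear_pct: float, left_ear_pct: float) -> dict:
    """Summarize a dichotic listening result as a dominance pattern."""
    dominant = max(right_ear_pct, left_ear_pct)
    non_dominant = min(right_ear_pct, left_ear_pct)
    asymmetry = dominant - non_dominant  # difference between the two ears
    return {
        "dominant_ear": "right" if right_ear_pct >= left_ear_pct else "left",
        "asymmetry_pct": asymmetry,
        # A small asymmetry is typical (e.g. a right-ear advantage); a large
        # one, with the non-dominant ear far below the dominant ear, is the
        # pattern described for amblyaudia. The 20-point cutoff is illustrative.
        "amblyaudia_pattern": asymmetry > 20.0,
    }

print(ear_advantage(right_ear_pct=90.0, left_ear_pct=55.0))
```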
Researchers interested in understanding the neurophysiological underpinnings of amblyaudia consider it to be a brain-based hearing disorder that may be inherited or that may result from auditory deprivation during critical periods of brain development. Individuals with amblyaudia have normal hearing sensitivity (in other words, they can hear soft sounds) but have difficulty hearing in noisy environments like restaurants or classrooms. Even in quiet environments, individuals with amblyaudia may fail to understand what they are hearing, especially if the information is new or complicated. Amblyaudia can be conceptualized as the auditory analog of the better-known central visual disorder amblyopia. The term “lazy ear” has been used to describe amblyaudia, although it is currently not known whether it stems from deficits in the auditory periphery (middle ear or cochlea), from other parts of the auditory system in the brain, or both. A characteristic of amblyaudia is suppression of activity in the non-dominant auditory pathway by activity in the dominant pathway, which may be genetically determined and which could also be exacerbated by conditions during early development.
Spatial hearing loss refers to a form of deafness characterized by an inability to use spatial cues to determine where a sound originates in space. This in turn affects the ability to understand speech in the presence of background noise.
Children with amblyaudia experience difficulties in speech perception (particularly in noisy environments), sound localization, and binaural unmasking (using interaural cues to hear better in noise), despite having normal hearing sensitivity as indexed by pure tone audiometry. These symptoms may lead to difficulty attending to auditory information, causing many to speculate that language acquisition and academic achievement may be deleteriously affected in children with amblyaudia. A significant deficit in the ability to use and comprehend expressive language may be seen in children who lacked auditory stimulation throughout the critical periods of auditory system development. A child with amblyaudia may have trouble with vocabulary comprehension and production and with the use of past, present and future tenses. Amblyaudia has been diagnosed in many children with reported difficulties understanding and learning from listening, and adjudicated adolescents are at significantly higher risk for amblyaudia (Moncrieff et al., 2013, Seminars in Hearing).
Auditory processing disorder (APD), also known as central auditory processing disorder (CAPD), is an umbrella term for a variety of disorders that affect the way the brain processes auditory information. Individuals with APD usually have normal structure and function of the outer, middle and inner ear (peripheral hearing). However, they cannot process the information they hear in the same way as others do, which leads to difficulties in recognizing and interpreting sounds, especially the sounds composing speech. It is thought that these difficulties arise from dysfunction in the central nervous system.
The American Academy of Audiology notes that APD is diagnosed by difficulties in one or more auditory processes known to reflect the function of the central auditory nervous system.
APD can affect both children and adults, although the actual prevalence is currently unknown. It has been suggested that males are twice as likely to be affected by the disorder as females, but there are no good epidemiological studies.
Cortical deafness is a rare form of sensorineural hearing loss caused by damage to the primary auditory cortex. It is an auditory disorder in which the patient is unable to hear sounds despite having no apparent damage to the anatomy of the ear (see auditory system); it can be thought of as the combination of auditory verbal agnosia and auditory agnosia. Patients with cortical deafness cannot hear any sounds, that is, they are not aware of sounds, including non-speech sounds, voices, and speech sounds. Although patients appear and feel completely deaf, they can still exhibit some reflex responses, such as turning their head towards a loud sound.
Cortical deafness is caused by bilateral cortical lesions in the primary auditory cortex, located in the temporal lobes of the brain. The ascending auditory pathways are damaged, causing a loss of perception of sound. Inner ear function, however, remains intact. Cortical deafness is most often caused by stroke, but it can also result from brain injury or birth defects. More specifically, a common cause is bilateral embolic stroke to the area of Heschl's gyri. Cortical deafness is extremely rare, with only twelve reported cases. Each case has a distinct context and a different rate of recovery.
It is thought that cortical deafness could be a part of a spectrum of an overall cortical hearing disorder. In some cases, patients with cortical deafness have had recovery of some hearing function, resulting in partial auditory deficits such as auditory verbal agnosia. This syndrome might be difficult to distinguish from a bilateral temporal lesion such as described above.
Beat deafness is a form of congenital amusia characterized by a person's inability to distinguish musical rhythm or move in time to it.
King–Kopetzky syndrome is an auditory disability characterised by difficulty in hearing speech in the presence of background noise in conjunction with the finding of normal hearing test results.
It is an example of auditory processing disorder (APD) or "auditory disability with normal hearing (ADN)".
King–Kopetzky syndrome patients have a worse Social Hearing Handicap index (SHHI) than others, indicating they suffer a significant degree of speech-hearing disability.
The condition is named after Samuel J. Kopetzky, who first described the condition in 1948, and P. F. King, who first discussed the aetiological factors behind it in 1954.
It seems that somatic anxiety and situations of stress may be determinants of speech-hearing disability.
Some studies have indicated an increased prevalence of a family history of hearing impairment in these patients. This pattern of results suggests that King–Kopetzky syndrome may, in some patients, be associated with autosomal dominant inheritance.
Auditory verbal agnosia (AVA), also known as pure word deafness, is the inability to comprehend speech. Individuals with this disorder lose the ability to understand language, repeat words, and write from dictation. Some patients with AVA describe hearing spoken language as meaningless noise, often as though the person speaking was doing so in a foreign language. However, spontaneous speaking, reading, and writing are preserved. The maintenance of the ability to process non-speech auditory information, including music, also remains relatively more intact than spoken language comprehension. Individuals who exhibit pure word deafness are also still able to recognize non-verbal sounds. The ability to interpret language via lip reading, hand gestures, and context clues is preserved as well. Sometimes, this agnosia is preceded by cortical deafness; however, this is not always the case. Researchers have documented that in most patients exhibiting auditory verbal agnosia, the discrimination of consonants is more difficult than that of vowels, but as with most neurological disorders, there is variation among patients.
Auditory verbal agnosia (AVA) is not the same as auditory agnosia; patients with (nonverbal) auditory agnosia have a relatively more intact speech comprehension system despite their impaired recognition of nonspeech sounds.
Symptoms may vary according to the type and subtype of the disorder present. Sensory processing disorder (SPD) can affect one sense or multiple senses. While many people can present one or two symptoms, a diagnosis of sensory processing disorder requires a clear functional impact on the person's life.
Primary symptoms:
- sounds or speech becoming dull, muffled or attenuated
- need for increased volume on television, radio, music and other audio sources
- difficulty using the telephone
- loss of directionality of sound
- difficulty understanding speech, especially that of women and children
- difficulty in speech discrimination against background noise (cocktail party effect)
Secondary symptoms:
- hyperacusis, heightened sensitivity to certain volumes and frequencies of sound, resulting from "recruitment"
- tinnitus, ringing, buzzing, hissing or other sounds in the ear when no external sound is present
- vertigo and disequilibrium
Presbycusis usually occurs after age 50, but deterioration in hearing has been found to start much earlier, from about the age of 18 years. The ISO 7029 standard shows expected threshold changes due purely to age for carefully screened populations (i.e. excluding those with ear disease, noise exposure, etc.), based on a meta-analysis of published data. Age affects high frequencies more than low frequencies, and men more than women. One early consequence is that even young adults may lose the ability to hear very high frequency tones above 15 or 16 kHz. Despite this, age-related hearing loss may become noticeable only later in life. The effects of age can be exacerbated by exposure to environmental noise, whether at work or in leisure time (shooting, music, etc.). This is noise-induced hearing loss (NIHL) and is distinct from presbycusis. A second exacerbating factor is exposure to ototoxic drugs and chemicals.
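To illustrate the shape of this age dependence, the sketch below assumes a quadratic growth of the median threshold shift above age 18, with larger coefficients at high frequencies and for men, mirroring the kind of model used in ISO 7029. All coefficient values here are made-up placeholders, not values from the standard.

```python
# Rough illustration of the trends described above: shifts grow with age
# above 18, are larger at high frequencies, and larger for men than for
# women. The quadratic form is an assumption for illustration; the alpha
# values are made-up placeholders, NOT coefficients from ISO 7029.

ILLUSTRATIVE_ALPHA = {  # dB per (year above 18)^2, keyed by (sex, frequency in Hz)
    ("male", 1000): 0.005,
    ("male", 4000): 0.016,
    ("male", 8000): 0.022,
    ("female", 1000): 0.005,
    ("female", 4000): 0.009,
    ("female", 8000): 0.015,
}

def expected_threshold_shift(age_years: int, sex: str, freq_hz: int) -> float:
    """Illustrative median threshold shift in dB relative to an 18-year-old."""
    alpha = ILLUSTRATIVE_ALPHA[(sex, freq_hz)]
    years_past_18 = max(0, age_years - 18)
    return alpha * years_past_18 ** 2

for freq in (1000, 4000, 8000):
    print(freq, "Hz:", round(expected_threshold_shift(60, "male", freq), 1), "dB shift")
```

The point of the sketch is only that the modeled loss accelerates with age and is frequency- and sex-dependent, consistent with the trends stated in the text.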
Over time, the detection of high-pitched sounds becomes more difficult, and speech perception is affected, particularly of sibilants and fricatives. Patients typically report a decreased ability to understand speech. Once the loss has progressed to the 2–4 kHz range, there is increased difficulty understanding consonants. Both ears tend to be affected. The impact of presbycusis on communication depends on both the severity of the condition and the communication partner.
Auditory verbal agnosia can be referred to as a pure aphasia because it has a high degree of specificity. Despite an inability to comprehend speech, patients with auditory verbal agnosia typically retain the ability to hear and process non-speech auditory information, speak, read and write. This specificity suggests that there is a separation between speech perception, non-speech auditory processing, and central language processing. In support of this theory, there are cases in which speech and non-speech processing impairments have responded differentially to treatment. For example, some therapies have improved writing comprehension in patients over time, while speech remained critically impaired in those same patients.
The term "pure word deafness" is something of a misnomer. By definition, individuals with pure word deafness are not deaf – in the absence of other impairments, these individuals have normal hearing for all sounds, including speech. The term "deafness" originates from the fact that individuals with AVA are unable to "comprehend" speech that they hear. The term "pure word" refers to the fact that comprehension of verbal information is selectively impaired in AVA. For this reason, AVA is distinct from other auditory agnosias in which the recognition of nonspeech sounds is impaired. Classical (or pure) auditory agnosia is an inability to process environmental sounds. Interpretive or receptive agnosia (amusia) is an inability to understand music.
Patients with pure word deafness complain that speech sounds simply do not register, or that they tend not to come up. Other claims include speech sounding as if it were in a foreign language, the words having a tendency to run together, or the feeling that speech was simply not connected to the patient's voice.
Auditory agnosia is a form of agnosia that manifests itself primarily in the inability to recognize or differentiate between sounds. It is not a defect of the ear or "hearing", but a neurological inability of the brain to process sound meaning. It is a disruption of the "what" pathway in the brain. Persons with auditory agnosia can physically hear the sounds and describe them using unrelated terms, but are unable to recognize them. They might describe some environmental sounds, such as a motor starting, as resembling a lion roaring, but would not be able to associate the sound with "car" or "engine", nor would they say that it "was" a lion creating the noise. Auditory agnosia is caused by damage to the secondary and tertiary auditory cortex of the temporal lobe of the brain.
Presbycusis (also spelled presbyacusis, from Greek "presbys" “old” + "akousis" “hearing”), or age-related hearing loss, is the cumulative effect of aging on hearing. It is a progressive and irreversible bilateral symmetrical age-related sensorineural hearing loss resulting from degeneration of the cochlea or associated structures of the inner ear or auditory nerves. The hearing loss is most marked at higher frequencies. Hearing loss that accumulates with age but is caused by factors other than normal aging (nosocusis and sociocusis) is not presbycusis, although differentiating the individual effects of distinct causes of hearing loss can be difficult.
The cause of presbycusis is a combination of genetics, cumulative environmental exposures and pathophysiological changes related to aging. At present there are no preventative measures known; treatment is by hearing aid or surgical implant.
Presbycusis is the most common cause of hearing loss, afflicting one out of three persons by age 65 and one out of two by age 75. After arthritis, it is the second most common condition affecting older people.
Many vertebrates such as fish, birds and amphibians do not suffer presbycusis in old age as they are able to regenerate their cochlear sensory cells, whereas mammals including humans have genetically lost this regenerative ability.
Sensory-based motor disorder (SBMD) involves motor output that is disorganized as a result of incorrect processing of sensory information, resulting in postural control challenges (postural disorder) or in developmental coordination disorder.
The SBMD subtypes are:
1. Dyspraxia
2. Postural disorder
Acquired APD can be caused by any damage to or dysfunction of the central auditory nervous system and can cause auditory processing problems. For an overview of neurological aspects of APD, see Griffiths.
Neuroscientists have learned a lot about the role of the brain in numerous cognitive mechanisms by understanding corresponding disorders. Similarly, neuroscientists have come to learn a lot about music cognition by studying music-specific disorders. Even though music is most often viewed from a "historical perspective rather than a biological one", music has significantly gained the attention of neuroscientists all around the world. For many centuries music has been strongly associated with art and culture. The reason for this increased interest in music is that it "provides a tool to study numerous aspects of neuroscience, from motor skill learning to emotion".
Since cortical deafness and auditory agnosia have many similarities, diagnosing the disorder proves to be difficult. Bilateral lesions near the primary auditory cortex in the temporal lobe are important criteria. Cortical deafness requires demonstration that brainstem auditory responses are normal but cortical evoked potentials are impaired. Brainstem auditory evoked potentials (BAEP), also referred to as brainstem auditory evoked responses (BAER), show the neuronal activity in the auditory nerve, cochlear nucleus, superior olive, and inferior colliculus of the brainstem. They typically have a response latency of no more than six milliseconds with an amplitude of approximately one microvolt. The latency of the responses gives critical information: if cortical deafness is present, LLR (long-latency responses) are completely abolished and MLR (middle-latency responses) are either abolished or significantly impaired; in auditory agnosia, LLRs and MLRs are preserved.
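The contrast just described can be read as a coarse decision rule. The sketch below is only an illustration of that textual rule (the function name and its string labels are hypothetical), not a clinical interpretation procedure.

```python
# Minimal sketch of the differential pattern described above. Real
# interpretation of evoked-potential data is far more involved; this only
# encodes the coarse rule stated in the text.

def interpret_evoked_potentials(baep_normal: bool,
                                mlr_present: bool,
                                llr_present: bool) -> str:
    """Map coarse evoked-potential findings onto the pattern described in the text."""
    if not baep_normal:
        return "abnormal brainstem responses: not the purely cortical picture described"
    if not llr_present and not mlr_present:
        return "consistent with cortical deafness (LLR abolished, MLR abolished or impaired)"
    if llr_present and mlr_present:
        return "consistent with auditory agnosia (LLR and MLR preserved)"
    return "mixed or indeterminate pattern"

print(interpret_evoked_potentials(baep_normal=True, mlr_present=False, llr_present=False))
```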
Another important aspect of cortical deafness that is often overlooked is that patients "feel" deaf. They are aware of their inability to hear environmental sounds, non-speech sounds, and speech sounds. Patients with auditory agnosia, by contrast, can be unaware of their deficit and insist that they are not deaf. Verbal deafness and auditory agnosia are disorders of a selective, perceptive and associative nature, whereas cortical deafness results from the anatomic and functional disconnection of the auditory cortex from acoustic input.
Conduction aphasics will show relatively well-preserved auditory comprehension, which may even be completely functional. Spontaneous speech production will be fluent and generally grammatically and syntactically correct. Intonation and articulation will also be preserved. Speech will often contain paraphasic errors: phonemes and syllables will be dropped or transposed (e.g., "snowball" → "snowall", "television" → "vellitision", "ninety-five percent" → "ninety-twenty percent"). The hallmark deficit of this disorder, however, is in repetition. Patients will show a marked inability to repeat words or sentences when prompted by an examiner. After hearing a sentence, a person with conduction aphasia will be able to paraphrase it accurately but will not be able to repeat it, possibly because their "motor speech error processing is disrupted by inaccurate forward predictions, or because detected errors are not translated into corrective commands due to damage to the auditory-motor interface". When prompted to repeat words, patients will be unable to do so and will produce many paraphasic errors. For example, when prompted with "bagger", a patient may respond with "gabber". Oral reading can also be poor.
However, patients recognize their paraphasias and errors and will try to correct them, with multiple attempts often necessary for success. This recognition is due to preserved auditory error detection mechanisms. Error sequences frequently fit a pattern of incorrect approximations featuring known morphemes that (a) share one or more similarly located phonemes but (b) differ in at least one aspect that makes the substituted morpheme(s) semantically distinct. This repetitive effort to approximate the appropriate word or phrase is known as "conduite d'approche". For example, when prompted to repeat "Rosenkranz", a German-speaking patient may respond with "rosenbrau... rosenbrauch... rosengrau... bro... grosenbrau... grossenlau, rosenkranz... kranz... rosenkranz".
Conduction aphasia is a relatively mild language impairment, and most patients return to day-to-day life. Symptoms of conduction aphasia, as with other aphasias, can be transient, lasting only several hours or a few days. As aphasias and other language disorders are frequently due to stroke, their symptoms can change and evolve over time, or simply disappear. This is due to healing in the brain after inflammation or hemorrhage, which leads to decreased local impairment. Furthermore, plastic changes in the brain may lead to the recruitment of new pathways to restore lost function. For example, the right hemisphere speech systems may learn to correct for left-hemisphere damage. However, chronic conduction aphasia is possible, without transformation to other aphasias. These patients show prolonged, profound deficits in repetition, frequent phonemic paraphasias, and "conduite d'approche" during spontaneous speech.
SSHL is diagnosed via pure tone audiometry. If the test shows a loss of at least 30 dB in three adjacent frequencies, the hearing loss is diagnosed as SSHL. For example, a hearing loss of 30 dB would make conversational speech sound more like a whisper.
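As an illustration of that criterion, the sketch below checks whether any three adjacent audiometric test frequencies each show a loss of at least 30 dB. The helper function and the example values are hypothetical, not clinical software.

```python
# Illustrative check of the diagnostic rule stated above: a loss of at least
# 30 dB at three adjacent audiometric frequencies. Example values and the
# helper itself are hypothetical.

def meets_sshl_criterion(threshold_shift_db: list) -> bool:
    """True if any three adjacent frequencies each show a shift of >= 30 dB."""
    return any(
        all(shift >= 30 for shift in threshold_shift_db[i:i + 3])
        for i in range(len(threshold_shift_db) - 2)
    )

# Shifts (in dB) relative to the patient's baseline or unaffected ear, at
# successive audiometric test frequencies (e.g. 250 Hz up to 8 kHz).
example_shifts = [10, 15, 35, 40, 45, 20, 10]
print(meets_sshl_criterion(example_shifts))  # True: 35, 40, 45 dB at adjacent frequencies
```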
Sensory dysfunction disorder is a reported neurological disorder of information processing, characterized by difficulty in understanding and responding appropriately to sensory inputs. Sensory dysfunction disorder is not recognized by the American Medical Association. "Sensory processing (SP) difficulties have been reported in as many as 95% of children with autism, however, empirical research examining the existence of specific patterns of SP difficulties within this population is scarce."
The brain receives messages from the body's sensory systems, which informs the brain of what is going on around and to a person's body. If one or more of these systems become overstimulated, it may result in what is known as Sensory Dysfunction Disorder. An example of a response to overstimulation is expressed by A. Jean Ayres, in "Sensory Integration and the Child: Understanding Hidden Sensory Challenges". She writes, "When the flow of sensations is disorganized, life can be like a rush-hour traffic jam” (p. 289). The following sensory systems are broken down into individual categories to better understand the impact a sensitivity can have on an individual.
There are three primary distinctions of auditory agnosia that fall into two categories.
A communication disorder is any disorder that affects an individual's ability to comprehend, detect, or apply language and speech to engage in discourse effectively with others. The delays and disorders can range from simple sound substitution to the inability to understand or use one's native language.