Speech and Language: A Cortical and Subcortical System
What the silence of the isolated right hemisphere has dramatized is that speech is not solely a cortical function. Subcortical fiber tracts, as well as gray matter areas deep within the brain, particularly the thalamus, also participate in speech and language.
The thalamus can be conceived of as a great relay station, receiving nerve fiber projections from the cortex and lower nervous system structures and radiating fibers to all parts of the cortex.
Emerging as especially important to speech and language function is the left thalamus. Damage to portions of this structure produces involuntary repetition of words and disturbs the patient's ability to name objects. The thalamus is thought to be involved in the focusing of attention, temporarily heightening the receptivity of certain cortical sensory areas. Ojemann and Ward (1971) observed that patients more accurately recalled information presented during and after stimulation than information that had been presented prior to stimulation.
They speculated that the thalamus may provide an interaction between language and memory mechanisms. Neurolinguists are far from certain which neuroanatomical structures are essential to the encoding and decoding of linguistic stimuli, but they agree that speech results from an integrated cortical and subcortical system. An awareness that neural sensory, motor, and associative mechanisms are interconnected is basic to understanding how the brain functions to encode and decode language.
A simple model can represent our knowledge of the transmission of signals to the language mechanism. In figure 12.5, the dark band between the semicircles (which represent coronal sections of the cerebral hemispheres) represents the hemispheric connection. Notice that impulses coming from the right side of the body have direct access to the dominant speech center, whereas those from the left must first reach the right hemisphere before passing over the corpus callosum for processing. The left hemisphere is not dominant, however, for the processing of all auditory signals. Nonspeech environmental sounds do not have to be passed on to the left hemisphere but are processed primarily in the right hemisphere. How do we know this?
Evidence from Dichotic Listening Research
By means of a research technique called dichotic listening, we can analyze the characteristics of incoming stimuli processed by the individual hemispheres. During a dichotic listening task, two different stimuli are presented simultaneously, through earphones, to the left and right ears. For example, the right ear may be given the word base and the left ear ball. The listeners are instructed to say what they heard. Interestingly, certain types of stimuli delivered to a particular ear will be more accurately reported by the listener. This is because the nervous system is capable of scanning incoming stimuli and routing them to that area of the brain specialized for their interpretation.

Kimura (1961) was the first to observe that when two digits were presented simultaneously, one to each ear, the listener more accurately identified those presented to the right ear. However, when the listener was known to have the less common right hemisphere dominance for speech, Kimura observed a left ear advantage. In other words, the ear having more direct access to the language center had an advantage. Although there is some auditory input to each cortex from the ear on the same side of the body, these uncrossed, or ipsilateral, inputs are thought to be suppressed. The right ear advantage (REA) was originally thought to exist only for linguistically meaningful stimuli, but the same advantage has been found for nonsense syllables, speech played backward, consonant-vowel syllables, and even small units of speech such as fricatives. Intrigued by these findings, investigators have sought to discover those...