TRENDS in Cognitive Sciences
Vol.9 No.12 December 2005
Towards a neural basis of music perception
Stefan Koelsch1 and Walter A. Siebel2
1 Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
2 Conflict Research Center, Wiesbaden, Germany
Music perception involves complex brain functions underlying acoustic analysis, auditory memory, auditory scene analysis, and processing of musical syntax and semantics. Moreover, music perception potentially affects emotion, influences the autonomic nervous system, the hormonal and immune systems, and activates (pre)motor representations. During the past few years, research activities on different aspects of music processing and their neural correlates have rapidly progressed. This article provides an overview of recent developments and a framework for the perceptual side of music processing. This framework lays out a model of the cognitive modules involved in music perception, and incorporates information about the time course of activity of some of these modules, as well as research findings about where in the brain these modules might be located.
Introduction
During the past few years, music has increasingly been used as a tool for the investigation of human cognition and its underlying brain mechanisms. Music is one of the oldest and most basic socio-cognitive domains of the human species. It is assumed by some that human musical abilities played a key phylogenetic role in the evolution of language, and that music-making behaviour ... music an ideal tool to investigate the workings of the human brain. When we listen to music, the auditory information passes through different processing stages until bodily reactions are possibly elicited, and until a musical percept becomes conscious. This article presents a model in which the different stages of music perception are assigned to different modules (see Figure 1; for investigations related to music production, see, e.g. [4,5]). The current model is based on previous modular approaches to music perception [6,7], but extends them by: (i) relating operations of different modules to ERP components (thus providing information about the time course of their activity); (ii) adding modules that have become important in the literature on music perception in the past five years or so; and (iii) integrating recent research about where in the brain some of these modules might be located.

Early processing stages
Acoustic information is translated into neural activity in the cochlea, and progressively transformed in the auditory brainstem, as indicated by different neural response properties for pitch, timbre, roughness, intensity and interaural disparities in the superior olivary complex and the inferior colliculus [8,9]. This pre-processing enables the registration of auditory signals of danger as early as the level of the superior colliculus and the thalamus. From the thalamus, information is projected mainly into the (primary) auditory cortex. The thalamus is also directly connected with the amygdala and the medial orbitofrontal cortex, structures implicated in emotion and the control of emotional behaviour. In the auditory cortex (most probably in primary and adjacent secondary auditory fields), more specific information about acoustic features, such as pitch height, pitch chroma, timbre, intensity and roughness, is extracted [10,13–17].

These operations appear to be reflected in electrophysiological recordings as ERP components with latencies of up to 100 ms (e.g. the P1 and N1; for effects of musical training on feature extraction see, e.g. ). The details of how and where in the auditory cortex these features are extracted are still not well understood. With respect to the meaning of sounds, it is interesting that even a short single tone can sound, for example, 'bright', 'rough' or 'dull'; that is, single tones are already capable of conveying meaningful information.