- For many years the only evidence we had pertaining to Neanderthal language came from our reconstructions of their speech production apparatuses. As we read last week, much of the evidence from this side of the debate is highly contested. Fortunately, since communication requires a link between speakers and listeners, oral communication is only one part of the story. It turns out that the strongest case for Neanderthal language may come from their hearing. But since our ears don’t fossilize, how can we know?
- Introduction and Terms Glossary
- Part 1: Language Genes
- Part 2: The Anatomy of Speech
- Part 4: Speculations and Moving Forward
This is the third part of my series on Neanderthal language. If you have not read the introduction to the series and the two other sections, you can follow the links above. The introduction also includes a terms glossary.
As the science of hominin hearing is relatively new and constantly being updated with new finds, this post will be shorter than the previous two. I had considered including a short section on Neanderthal neuroanatomy, but I will leave that for a future discussion.
What We Hear and How We Hear It
The study of hearing, also known as audition, is a complicated field relying on multiple levels of expertise ranging from biology to perception to physics. Although it is often difficult to study some aspects of human hearing, many biologists, and now paleoanthropologists, have undertaken the task of studying hearing in non-humans.
The anatomy of the ear can be broken down into three parts: the outer, middle, and inner ear. The outer ear is the part we see plus the ear canal within; this is generally as far as you can go with a Q-tip without breaking your eardrum. The middle ear consists of the eardrum and the three smallest bones in the human body: the malleus, incus, and stapes. These are known as the ossicles, and their Latin names mean hammer, anvil, and stirrup, respectively. The inner ear, which is embedded in the bony matrix of the skull, consists of the cochlea attached to three semicircular canals.
Sound is first gathered in the outer ear; its shape is critical for boosting sound waves as they reach the skull. As sound travels through the ear canal, it hits the eardrum, which vibrates against the malleus. The vibrations pass from the malleus through the incus to the stapes, which finally transmits them to the cochlea. This spiral-shaped bone is filled with fluid and lined with tiny hair cells that vibrate, sending a signal to your auditory nerve. Everything after that is up to the brain. The semicircular canals are unrelated to hearing but are quite ancient; they coordinate movement and balance.
In terms of assessing what human hearing is, audiologists have several methods for constructing a map of hearing ranges (known as an audiogram). The first and most straightforward is a response test: pure tones varying in power level (decibels) and frequency (Hz) are played back, and people respond by raising a hand or pressing a button when they hear one. The second, which I use in my dissertation research on primates, involves measuring the electrical activity automatically sent to the brainstem upon hearing pure-tone sounds. A third involves measuring acoustic feedback, known as otoacoustic emissions, generated by the cochlea in response to sound.
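The pure-tone procedure above can be sketched in code. This is a minimal illustration of how such a test stimulus might be generated, not audiometric practice: the sample rate, duration, and the 0 dB reference amplitude here are arbitrary choices (real audiometers calibrate levels against a standard reference pressure).

```python
import numpy as np

def pure_tone(freq_hz, level_db, duration_s=1.0, sample_rate=44100):
    """Generate a pure-tone stimulus at a given frequency and level.

    level_db is relative to an arbitrary 0 dB reference amplitude of 1.0;
    a real audiometer would calibrate against a standard reference pressure.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    amplitude = 10 ** (level_db / 20)  # dB -> linear amplitude
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# A one-second 2 kHz tone, attenuated 30 dB below the reference
tone = pure_tone(2000, -30)
```

In an actual threshold test, tones like this would be stepped down in level until the listener stops responding; the quietest detectable level at each frequency becomes one point on the audiogram.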
Compared to other large-bodied primates, human hearing is relatively derived. Based on what we know from audiograms, humans without hearing loss generally hear in a range from 20Hz to 20kHz. Each species’ audiogram also has a region of best sensitivity. Humans show heightened sensitivity to sounds in the relatively low-frequency range of 1-4kHz, with best sensitivity between 2 and 4kHz. Chimpanzees, on the other hand, exhibit an area of best sensitivity around 8kHz and hear as high as 27kHz. (Although Tecumseh Fitch disputes these figures on the grounds that the chimpanzees used in audiology studies may have had hearing loss, their heightened sensitivity at 8kHz is worth noting: human and chimp audiograms are not the same at the higher frequencies.)1,2
These thresholds have important implications for language. As discussed in the previous post on speech, the first two formant frequencies of vowels generally fall below 2.5kHz. What we did not talk about in that discussion were consonants. Consonants, which are produced by impeding airflow through the vocal tract with the lips or tongue, often carry their acoustic energy at higher frequencies than vowels. In fact, the region between 3 and 5kHz is particularly important for the consonants commonly shared by languages around the world.3
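To make the overlap concrete, here is a toy sketch comparing the frequency bands just mentioned against each species’ region of best sensitivity. The band edges are rounded from the figures in the text, and the chimpanzee “region” centered near 8kHz is an illustrative assumption for this sketch, not a published audiogram parameter.

```python
# Speech-relevant bands from the text (Hz)
SPEECH_BANDS = {
    "vowel formants F1-F2": (200, 2500),   # first two formants sit below ~2.5 kHz
    "key consonant band":   (3000, 5000),  # many shared consonants cluster at 3-5 kHz
}

# Rough regions of best sensitivity (Hz); illustrative values only
BEST_SENSITIVITY = {
    "human": (2000, 4000),   # best sensitivity in the ~2-4 kHz range
    "chimp": (7000, 9000),   # heightened sensitivity around ~8 kHz
}

def overlaps(band, sensitivity):
    """True if a speech band intersects a region of best sensitivity."""
    lo1, hi1 = band
    lo2, hi2 = sensitivity
    return lo1 < hi2 and lo2 < hi1

for species, sens in BEST_SENSITIVITY.items():
    for name, band in SPEECH_BANDS.items():
        print(f"{species}: {name} falls in best-sensitivity region? "
              f"{overlaps(band, sens)}")
```

Run with these toy numbers, only the human region of best sensitivity intersects the 3-5kHz consonant band, which is the intuition behind the arguments that follow.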
Evolution of Hearing Apparatuses
Unfortunately for paleoanthropologists working on hominin hearing, we cannot examine fossil hearing systems through the typical performance tasks we use to measure hearing in infants and non-human animals. Fortunately, because the ossicles are involved in hearing, we can make some predictions based on the parts of the middle and inner ear that do occasionally fossilize.
Although the ossicles of fossil remains have been studied in paleoanthropology for some time, the first study I am aware of that attempted to draw functional inferences from them was by Jacobo Moggi-Cecchi and Mark Collard in 2002, who examined a stapes from a late Australopithecus africanus found in South Africa, known as STS 151. Noting that the footplate of the stapes, which directly transmits energy to the cochlea, was smaller than those of humans and Neanderthals and more similar to those of the other great apes, the authors concluded that the hearing of australopithecines and early Homo must have been more like the high-frequency-attuned hearing of great apes and other primates.4
As this study’s comparative approach was relatively limited, the authors were unable to determine precise values for the hearing range. Since then, more nuanced methods, based on models developed from the known audiograms of other mammals, have been applied to the auditory capabilities of hominins.5
The models themselves are quite complex and difficult to explain to a layperson, but they are based on a number of anatomical measurements interpreted as the amount of acoustic power the middle-ear ossicles ultimately deliver to the cochlea. Most of this work is ongoing in a research program led by Rolf Quam and his students at Binghamton University, but several conclusions have been reached thus far regarding hominins ranging from Australopithecus afarensis to Paranthropus robustus and, finally, to our hominin in question, the Neanderthals.6
Based on the evidence thus far, it appears that the hearing of Neanderthals and humans was nearly identical, and that both are derived from the hearing of earlier hominins. The southern African hominins Australopithecus africanus and Paranthropus robustus both exhibited heightened sensitivity between 1.5 and 3.5kHz. Compared to chimpanzees, their hearing was better at these lower frequencies, but compared to humans they lacked sensitivity at the higher frequencies. Quam and his team have interpreted this to mean that these hominins lacked the ability to perceive many of the consonants critical to human language today. This might sound like an incomplete story, since the early hominin ability to hear at lower frequencies than chimpanzees remains unexplained, but Quam and his colleagues presented this sensitivity as being unrelated to language.7
One of the issues in communication is what happens to a signal while it travels from sender to receiver; this is where the phrase “lost in transmission” comes from. For this reason, many bioacoustic analyses try to account for features of the environment that affect the transmission of a signal. The team’s explanation was that the southern African environment of the time was similar to today’s, characterized by an abundance of wide open habitats containing resources we now know both species regularly exploited. In open habitats, the biggest factor limiting the transmission of a signal is wind. Counterintuitively, compared to closed habitats such as forests, which contain objects that block wind and off of which sounds bounce, open habitats are notorious for obscuring signals.8 In these settings, both low-frequency signals and close-range communication are optimal for good transmission. As such, open-habitat animals such as elephants have adapted to utilize infrasound (sounds at frequencies so low that humans cannot perceive them), while savannah monkeys have restricted much of their communication to short-range, low-frequency calls.9,10
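As a rough illustration of transmission loss, geometric spreading alone costs a point source about 6 dB per doubling of distance; wind and turbulence in open habitats add “excess attenuation” on top of that. The sketch below models the habitat effect with a simple linear per-distance term whose value is purely illustrative, not a measured property of any environment.

```python
import math

def spreading_loss_db(distance_m, reference_m=1.0):
    """Geometric spreading loss for a point source in a free field:
    20*log10(d/d_ref), i.e. roughly 6 dB per doubling of distance."""
    return 20 * math.log10(distance_m / reference_m)

def received_level(source_db, distance_m, excess_db_per_100m=0.0):
    """Level at the receiver after spreading loss plus a crude linear
    excess-attenuation term standing in for wind/habitat effects."""
    excess = excess_db_per_100m * distance_m / 100
    return source_db - spreading_loss_db(distance_m) - excess

# A 100 dB call heard at 50 m, with and without an assumed
# open-habitat excess attenuation of 10 dB per 100 m
print(received_level(100, 50))                         # spreading only, ~66 dB
print(received_level(100, 50, excess_db_per_100m=10))  # windy open habitat
```

The point of the toy model is simply that any extra attenuation eats further into an already-reduced signal, which is why open-habitat communicators favor low frequencies (which suffer less excess attenuation) or short ranges.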
In respect to the hearing apparatus itself, the Neanderthal middle ear differs in several ways from ours. Although Quam’s team at one point examined Neanderthal ossicles and concluded there was overlap with humans, another study concluded that this was more than likely a result of similarities in body size, as the overall shapes of the bones were different.11 Yet the authors of this second study were surprised to find that despite the non-overlap in shape, the differences were largely non-functional, and the predicted hearing ranges of Neanderthals and humans were essentially the same.12
How would this be possible? Is this pattern in Neanderthals simply a coincidence due to similar environments, or convergent evolution? Were the results wrong? Well, we also know from Quam’s studies that the hominins from Sima de los Huesos in Spain, dating to about 350,000 years ago, had ossicles that predict a hearing range very similar to that of humans. Crucially, these hominins were not on the lineage leading to humans but were instead part of a distinct Neanderthal clade. This means that prior to 350,000 years ago, the ancestor of both Neanderthals and modern humans more than likely had a similar hearing apparatus. Additionally, as we learned in our last discussion, these hominins possessed a hyoid bone nearly identical to those found in humans.13
If we are to believe these hearing reconstructions based on fossil evidence, then despite adapting to separate environments with different acoustic properties and possessing very different body plans, the descendants of the last common ancestor of Neanderthals and humans retained the same middle-ear plan. Additionally, although some folks have argued that the cochlea is unreliable for assessing hearing ranges in fossil hominins, it is worth noting that the cochleae of Neanderthals and of many human populations are also similar.14
The Importance of Consonants
The most important thing gathered from these hearing studies is the Neanderthals’ hypothetical ability to discriminate consonants. As Tecumseh Fitch argued in last week’s post, quantal theory implies that pretty much any primate can produce vowels that it could also perceive. Consonants are a different matter: their structure is constrained by the lips, tongue, and other factors that reconstructions of the speech apparatus cannot account for. Even more importantly, just because a primate can produce consonants does not mean it can perceive them.
One of the notable things about language acquisition in humans is that vowels are learned first and consonants come later. In the early stages of life, calling a child by his or her name but swapping a consonant will usually elicit fewer corrections than swapping a vowel. Much of this may have to do with the developing brain’s ability to mimic consonants during infant babbling: vowels are easy to produce, while consonants are genuinely difficult. As we get older, the consonants eventually become more important.15,16
Our consonants also have important implications for speech as a whole and for its evolution. About 90% of the world’s languages contain consonants in the high range of 3-5kHz.3 Although this remains untested, humans more likely than not discriminate these sounds better than other animals do. One theory for the evolution of language, the frame/content theory, depends on the relationship of consonants to vowels for creating syllables and the content of speech.17 If you recall last week’s discussion, the dispersion-focalization theory predicts that most phonemes will be produced in an attempt both to maximize acoustic distance from the other components of speech and to remain relatively easy to articulate.18 Many of the consonants in this high range, such as the voiceless fricatives, match these predictions.
Given the evidence discussed thus far, it seems there is a growing case for the presence of Neanderthal speech in some shape or form in both the anatomical and genetic record. What I have presented so far in the series is just one side of the story, and many scientists, including some of those I have cited, might disagree with what I have said. In our final entry I will address the primary critiques of Neanderthal language and speculate on what Neanderthal vocalizations may have sounded like given what we know right now.
You can follow my colleague Alex Velez, who reconstructs hearing like we read about today in Rolf Quam’s lab, on Twitter @ADVel6. You can also follow Dr. Mark Collard, who initially studied the stapes of STS 151, @profmarkcollard.
1Quam, R.M., Coleman, M.N. and Martínez, I., 2014. Evolution of the auditory ossicles in extant hominids: metric variation in African apes and humans. Journal of Anatomy, 225(2), pp.167-196.
2Fitch, W.T., 2010. The Evolution of Language. Cambridge University Press.
3Maddieson, I. and Disner, S.F., 1984. Patterns of Sounds. Cambridge University Press.
4Moggi-Cecchi, J. and Collard, M., 2002. A fossil stapes from Sterkfontein, South Africa, and the hearing capabilities of early hominids. Journal of Human Evolution, 42(3), pp.259-265.
5Rosowski, J.J., 1991. The effects of external- and middle-ear filtering on auditory threshold and noise-induced hearing loss. The Journal of the Acoustical Society of America, 90(1), pp.124-135.
6Martínez, I., Quam, R.M. and Rosa, M., 2008, May. Auditory capacities of human fossils: a new approach to the origin of speech. In Proceedings of the 2nd ASA-EAA Joint Conference Acoustics (pp. 4177-4182).
7Quam, R., Martínez, I., Rosa, M., Bonmatí, A., Lorenzo, C., de Ruiter, D.J., Moggi-Cecchi, J., Valverde, M.C., Jarabo, P., Menter, C.G. and Thackeray, J.F., 2015. Early hominin auditory capacities. Science Advances, 1(8), p.e1500355.
8Waser, P.M. and Brown, C.H., 1986. Habitat acoustics and primate communication. American Journal of Primatology, 10(2), pp.135-154.
9Herbst, C.T., Stoeger, A.S., Frey, R., Lohscheller, J., Titze, I.R., Gumpenberger, M. and Fitch, W.T., 2012. How low can you go? Physical production mechanism of elephant infrasonic vocalizations. Science, 337(6094), pp.595-599.
10Owren, M.J. and Bernacki, R.H., 1988. The acoustic features of vervet monkey alarm calls. The Journal of the Acoustical Society of America, 83(5), pp.1927-1935.
11Quam, R., Martínez, I. and Arsuaga, J.L., 2013. Reassessment of the La Ferrassie 3 Neandertal ossicular chain. Journal of Human Evolution, 64(4), pp.250-262.
12Stoessel, A., David, R., Gunz, P., Schmidt, T., Spoor, F. and Hublin, J.J., 2016. Morphology and function of Neandertal and modern human ear ossicles. Proceedings of the National Academy of Sciences, 113(41), pp.11489-11494.
13Martínez, I., Rosa, M., Arsuaga, J.L., Jarabo, P., Quam, R., Lorenzo, C., Gracia, A., Carretero, J.M., de Castro, J.M.B. and Carbonell, E., 2004. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain. Proceedings of the National Academy of Sciences, 101(27), pp.9976-9981.
14Spoor, F., Hublin, J.J., Braun, M. and Zonneveld, F., 2003. The bony labyrinth of Neanderthals. Journal of Human Evolution, 44(2), pp.141-165.
15Floccia, C., Nazzi, T., Delle Luche, C., Poltrock, S. and Goslin, J., 2014. English-learning one-to two-year-olds do not show a consonant bias in word learning. Journal of Child Language, 41(5), pp.1085-1114.
16Beckman, M.E. and Edwards, J., 2000. The ontogeny of phonological categories and the primacy of lexical learning in linguistic development. Child Development, 71(1), pp.240-249.
17MacNeilage, P.F., 1998. The frame/content theory of evolution of speech production. Behavioral and Brain Sciences, 21(4), pp.499-511.
18Schwartz, J.L., Boë, L.J., Vallée, N. and Abry, C., 1997. The dispersion-focalization theory of vowel systems. Journal of Phonetics, 25(3), pp.255-286.