A widely reported recent study on the origins of human language, carried out by a researcher in New Zealand, has kept me Googling between more important commitments for several weeks. Quentin D. Atkinson, a biologist at the University of Auckland, claims to have detected a “signal” indicating that all the world’s languages originated in Africa. Atkinson applied statistical methods developed to create genetic trees from DNA sequences to study the frequency of phonemes (the consonants, vowels and tones that compose the basic building blocks of language) in 504 world languages, in support of his theory that the transmission and evolution of language have occurred in much the same way as human genetic transmission and development.
Confirmation of his theory would establish a “DNA” of human language. According to Atkinson, several African languages that incorporate “click” sounds have 100 or more phonemes, compared, for example, to English, which has about 45, and Spanish, which has 24. In essence, Atkinson’s hypothesis holds that the farther a language travelled from its African origins, the more its phoneme count would decay.
His research was inspired by another recent study whose findings indicated that the number of phonemes in a given language increases with a rise in the number of people who speak it. Based on these findings, he conjectured that phoneme diversity would not only increase as a population grew, but would also decline whenever a small group split from the main population and migrated to another geographic area. The process of population division and migration of splinter groups across the planet produces what biologists refer to as the “serial founder” effect. Dr. Atkinson’s theory holds that the greater the geographic distance between a migrant group and the population that spawned it, the greater the reduction in the phonemic diversity of its language, following the pattern of reduction in human genetic diversity in migrant groups.
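For readers who find the serial founder effect easier to grasp as a mechanism than as a definition, here is a toy sketch of the idea, not Atkinson’s actual statistical method: each splinter group carries away only a subset of its parent population’s phoneme inventory, so diversity shrinks with each successive split. The 100-phoneme starting inventory and the 20% loss per split are illustrative assumptions, not figures from the study.

```python
import random

def founder_series(parent_inventory, n_splits, keep_fraction=0.8, seed=42):
    """Simulate repeated founder events: each splinter group carries away
    only a random subset of the previous group's phoneme inventory, so the
    inventory size declines monotonically with the number of splits."""
    rng = random.Random(seed)
    inventory = list(parent_inventory)
    sizes = [len(inventory)]
    for _ in range(n_splits):
        keep = max(1, int(len(inventory) * keep_fraction))
        inventory = rng.sample(inventory, keep)  # subset survives the migration
        sizes.append(len(inventory))
    return sizes

# A hypothetical 100-phoneme ancestral inventory, on the order of the click
# languages Atkinson cites; five successive splits, each losing ~20%.
print(founder_series(range(100), n_splits=5))  # → [100, 80, 64, 51, 40, 32]
```

The point of the sketch is only that phoneme loss compounds: a language five founder events away from the source retains far fewer phonemes than one a single event away, which is the geographic gradient Atkinson claims to detect.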
Somewhat serendipitously, an ancient myth of the Wa-Sania, an East-African Bantu group, claims that humans originally shared a common language, but that a severe famine scattered them in search of food. According to this story, it was the trauma of famine and dispersion that caused them to speak in other tongues.
I’m not a biologist and can’t judge whether Dr. Atkinson’s theory is sound or not, but the news of his work compelled me to search the Web for references on linguistic phonemes, a subject which does, to a degree, fall within my professional bailiwick.
A Wikipedia site devoted to the evolution of language divides the development of language into four stages:
- Stage I: Phoneme = sentence (pictographic language);
- Stage II: Phoneme = word or phrase (ideographic language);
- Stage III: Phoneme = syllable (syllabic language);
- Stage IV: Phoneme = sound (phonetic language).
As language evolves, it becomes more complex; one might say it starts with a grunt of fear or pleasure and slowly works its way along until it develops such fine distinctions as the technical difference between a rising and falling diphthong and the aesthetic difference between describing something as being rosy and peachy.
It seems that we all go through our own individual linguistic evolutions. While searching for insights into Dr. Atkinson’s work on the Internet, I came across a document posted by a University of Louisiana professor that stated: “At six months of age, babies can respond to every phoneme uttered in languages as diverse as Hindi and Nthlakampx, a Native American language with certain consonant combinations that are impossible to distinguish for nonnative speakers,” noting, however, that at the same age they begin “to show a preference for words that have a prosodic organization typical of words in their native language.” By the time these same children have reached their ninth month, they already demonstrate a preference for listening to their own “native” language.
From this point on, children’s loss of interest in the greater part of the vast treasure trove of human phonemes rapidly accelerates. “By the time children are ten months old, they are adept at making discriminations between phonemes in their native language and begin to lose their sensitivity to the shades of difference between the phonemes of foreign languages. In fact, at ten months of age, they have lost nearly two-thirds of the capacity they possessed at six months.”
The article goes on to explain that once children’s “auditory maps” are formed, which usually occurs at the end of their first year of life, they are no longer able to pick up phonemes they are unaccustomed to hearing, as none of their brains’ neuron clusters have been assigned the task of responding to those particular sounds. In other words, they become functionally deaf to sounds not present in their native language. “As we grow older, our brains have fewer and fewer uncommitted neurons with which to respond to new phonemes. This explains why with each passing year, learning a new language becomes more difficult.” Difficult, but not impossible; I was in my mid-forties when I began to study Spanish.
So why are we born with a full set of language tools only to lose most of them in infancy? “The pattern of neurological development that produces our language abilities strongly indicates that it is a system designed (evolved) to cope with continuously changing linguistic environments.” Reading this, I already feel a little better, perhaps to the point of believing that I’ve been designed by nature to be a small-scale linguistic success.
I discovered that tracing the stages we all go through in learning our first language is a good mental exercise for a translator. In fact, a review of these stages launched me into a profound reflection on what my brain does all day while I take a text apart in one language and put it back together again in another. Anyone who has studied Spanish seriously will have tackled the challenge of understanding phonology. I cringed the first time I faced the task, but the work of wrapping my mind around nasal, fricative, and lateral phonemes (not to mention the trill and the tap) has served me well in my work as a translator. According to a UNESCO report on literacy, “Phonological awareness refers to the ability to attend to the sounds of language as distinct from its meaning. Studies of both alphabetic and non-alphabetic languages show that phonological awareness is highly correlated with reading ability. For alphabetic languages, phonemic awareness is especially important because the letters of the alphabet map onto individual sound units (phonemes).” If you want to know how phonetically aware you are in English, check out this well-organized list on Robert Kurtz’s marvelous website speech-language-development.com:
- Pre-phonemic discriminatory listening skills: the ability to distinguish among non-speech environmental sounds (e.g., a beanbag falling on a wooden floor versus a plastic ball falling on a wooden floor), and to identify objects by the sound they make (e.g., a horn, a bell, a helicopter, etc.).
- Alliteration and rhyme: the ability to identify and produce words that rhyme or that begin with the same phoneme.
- Phoneme segmentation: the ability to analyze the syllables and individual phonemes of a word, phrase, or sentence.
- Phoneme isolation: the ability to identify the first, middle, or last phonemes in a monosyllabic word.
- Phoneme deletion: the ability to identify how a word would sound if a part of it were omitted.
- Phoneme substitution: the ability to replace a phoneme in a word with another phoneme to form a new word.
- Phoneme blending: the ability to identify a word when hearing parts of the word presented in isolation.
- Letter-sound correspondence: the ability to identify the phonemes represented by individual letters and combinations of letters.
- Phonetic reading: the ability to “sound out” and pronounce unfamiliar or nonsense words based on spelling.
- Phonetic spelling: the ability to use prior knowledge of spelling rules to write familiar words the student has not learned to spell.
Phonology is a translator’s solfège. Just as a singer needs to develop the flexibility to work with both the fixed do solfège employed by speakers of Romance and Slavic languages and the moveable do solfège employed in Anglo-Saxon and Asian countries, a translator needs to learn how to “sound out” words and texts correctly in all the languages he or she works with.
Translators who have not developed cross-linguistic phonetic skills are apt to find themselves struggling with a continual cognitive dissonance that leads them to miss the subtleties of the source text and churn out unnatural and stilted translations. Due to the speed and volume of modern communication, most of any translator’s work is soon forgotten. That does not mean, however, that these documents shouldn’t “sing” to the reader. Take the case of John Mason Neale, a nineteenth-century English clergyman who dedicated much of his time to writing on subjects such as history and theology. Neale also produced travel books, poems, hymns, and books for children, but one of his specialties was translating and freely adapting the texts of ancient and medieval hymns from Latin and Greek to English. Although he was reputedly tone-deaf, he had “an exquisite ear for melody in words.” Among the texts he contributed to English liturgical music is the still popular “Good Christian Men, Rejoice.” In this enduring and much-loved song text, Neale showed himself to be a master of the techniques outlined above in Robert Kurtz’s list, even when working with “dead” languages.
For those interested in reading more about bilingual competence, and the importance of phonemes in language comprehension and construction, one of the most complete resources available on cross-linguistic skill transfer is the Metalinguistic Transfer in Spanish/English Biliteracy webpage created by Jill Kerper Mora of San Diego State University.