How do infants learn their native language so rapidly and effortlessly? This question has puzzled researchers in early language acquisition for several decades. Research has now revealed that infants acquire sophisticated knowledge about their language (or languages) even before their first birthday, and that these early language abilities become the building blocks for more advanced skills such as vocabulary and literacy. However, some infants’ early experiences with language differ because of individual differences in language exposure, as is the case for infants with hearing loss. It is therefore essential for researchers to understand how these different experiences affect early language abilities, so that these infants can be exposed to the most suitable language environment and develop their language skills to their full potential. This is the goal of the Seeds of Language Development research project.
Hearing is part of the human multi-sensory perception system, and it has important implications for speech and language acquisition. Infants’ ability to hear the speech sounds of their language even before they are born plays a fundamental role in the acquisition of speech and language abilities later in life. Because of this, hearing loss, even at a moderate level (>40 dB), can affect the development of these early language abilities, and of later language skills and academic achievement. This highlights the importance of early detection of hearing loss and early intervention: infants whose hearing loss is detected before the age of 6 months develop better receptive and expressive vocabulary skills at 25–36 months of age [1, 2]. This is the case for infants in Australia, where the Universal Newborn Hearing Screening programme allows very early detection of hearing loss, and infants receive hearing devices such as hearing aids or cochlear implants within 2 months of diagnosis.
One of the most important milestones in language acquisition takes place during a child’s first months of life. Infants come into the world as language-universal listeners, but they become language-specific listeners by the time of their first birthday. That is, infants are initially sensitive to contrasts between speech sounds whether or not those sounds are part of their native language, but this ability then becomes fine-tuned to the native language. By around 12 months of age, infants are less able to detect differences between speech sounds that do not exist in their native language, while they continue to improve at detecting the differences within their own language [3, 4]. This ability is essential for the child to identify which sound changes do and do not signal contrasts between words in their language.
Children learn this information about the sounds of their language from the speech that they hear in their surroundings, primarily their parents’ infant-directed speech (IDS). IDS, also known as baby talk or parentese, is the special type of speech that parents use when interacting with their young babies. IDS has a number of characteristics that differentiate it from speech used with adults, such as higher pitch, more positive affect, exaggerated speech sounds, and slower tempo (longer durations) [5, 6]. These characteristics engage and maintain infants’ attention and facilitate social interactions. Importantly, IDS has also been proposed to facilitate the task of language acquisition, i.e., it helps infants learn to talk and to understand the language better and faster [5, 7, 8, 9].
Importantly, parents appear to unconsciously adapt the characteristics of their IDS to produce the type of speech that best suits their baby’s language needs. This appears to be the case for parents who speak to babies with hearing loss [10, 11]. For example, one laboratory study looked at the effects of reduced hearing on parents’ speech. Mothers’ speech was recorded during a play session with their baby: the mother sat in one room and her baby in another, and they could see and hear each other over a computer monitor (similar to video chatting on Skype). Importantly, for half of the session, the researchers turned down the volume of the mother’s voice heard by the infant. They found that even though the mothers were not aware of this manipulation, they modified the way in which they produced speech sounds in their IDS when their infant could not hear them. This finding shows that parents are sensitive to feedback from their infant: infants elicit characteristics in their parents’ speech that are shaped to meet their particular language and communication needs.
The next step in this research is to understand the connection between the characteristics of parental speech to babies with hearing loss and the development of these babies’ early language skills. This is the purpose of our new research project, Seeds of Language Development, conducted in collaboration between the MARCS BabyLab and Hear and Say, Brisbane Centre. It includes infants and children with hearing loss from 7 months to 3 years of age. All the tasks in this project take the form of play sessions and games, so they are fun, enjoyable and safe for the babies, and they are approved for use in conjunction with hearing devices (either hearing aids or cochlear implants).
We are currently inviting families to take part in this project. All infants and children who participate will receive a BabyLab Scientist degree and a small gift at the end of each session. All sessions will be conducted at the Hear and Say, Brisbane Centre. With your help, we can develop a better understanding of the relationship between hearing loss, parental speech and language input, and children’s language outcomes. If you are interested in taking part or would like to find out more about our project, please call us on 0490 874 632 or email email@example.com.
[1] Yoshinaga-Itano, C., & Apuzzo, M. L. (1998). Identification of hearing loss after age 18 months is not early enough. American Annals of the Deaf, 143, 380–387.
[2] Yoshinaga-Itano, C., Sedey, A. L., Coulter, D. K., & Mehl, A. L. (1998). Language of early- and later-identified children with hearing loss. Pediatrics, 102, 1161–1171.
[3] Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7(1), 49–63.
[4] Polka, L., & Werker, J. F. (1994). Developmental changes in perception of nonnative vowel contrasts. Journal of Experimental Psychology: Human Perception and Performance, 20(2), 421–435.
[5] Kitamura, C., & Burnham, D. (1998). The infant’s response to maternal vocal affect. In C. Rovee-Collier (Ed.), Advances in infancy research (Vol. 12, pp. 221–236). Stamford, CT: Ablex.
[6] Burnham, D., Kitamura, C., & Vollmer-Conna, U. (2002). What’s new pussycat: On talking to animals and babies. Science, 296, 1435.
[7] Kitamura, C., & Burnham, D. (2003). Pitch and communicative intent in mother’s speech: Adjustments for age and sex in the first year. Infancy, 4(1), 85–110.
[8] Kitamura, C., & Lam, C. (2009). Age-specific preferences for infant-directed affective intent. Infancy, 14(1), 77–100.
[9] Fernald, A., & Mazzie, C. (1991). Prosody and focus in speech to infants and adults. Developmental Psychology, 27(2), 209–221.
[10] Lam, C., & Kitamura, C. (2010). Maternal interactions with a hearing and hearing-impaired twin: Similarities in pitch exaggeration but differences in vowel hyperarticulation. Journal of Speech, Language, and Hearing Research, 53, 543–555.
[11] Lam, C., & Kitamura, C. (2012). Mommy, speak clearly: Induced hearing loss shapes vowel hyperarticulation. Developmental Science, 15(2), 212–221. doi:10.1111/j.1467-7687.2011.01118.x