Breaking into Language: The Beginnings of Infant Language Learning

Pediatric Pathways

Author

Jenny Saffran, PhD

 


After reading this article and answering the review questions, the reader will be able to:

  1. Explain how language comprehension precedes language production
  2. Describe one key aspect of early language learning: word segmentation
  3. Define statistical learning
  4. Recognize potential challenges in early language development

 

Imagine that you are faced with the following challenge: you must discover the internal structure of a system that contains tens of thousands of units, all generated from a small set of materials. These units, in turn, can be assembled into an infinite number of combinations. While only a subset of those combinations is correct, the subset itself is, for all practical purposes, infinite. Somehow you must converge on the structure of this system in order to use it to communicate. And you are a very young child.

This system is human language. The units are words, the materials are the small set of sounds from which they are constructed, and the combinations are the sentences into which they can be assembled. Given the complexity of this system, it seems improbable that mere children could discover its underlying structure and use it to communicate. Yet most do so with eagerness and ease, all within the first few years of life. How does this process unfold?

In the Infant Learning Lab at the Waisman Center, University of Wisconsin–Madison, we endeavor to understand how infants discover the structure of their native language (or languages) during the first two years of life. Infants don’t produce their first word until around 12 months of age, earlier for some and later for others. However, the process of learning a native language begins as early as researchers have been able to measure it.

Rather remarkably, language learning begins in the womb. In a classic study, DeCasper and Spence (1986) asked pregnant women to read aloud from The Cat in the Hat during the last six weeks of their pregnancy. Thus, the women’s fetuses were repeatedly exposed to the same highly rhythmical pattern of speech sounds. The question was whether they would recognize the familiar story after birth.

To find out, the researchers tested the babies as newborns. The infants were fitted with miniature headphones and given a special pacifier to suck on. When the infants sucked in one particular pattern, they heard the familiar story through the headphones, but when they sucked in a different pattern, they heard an unfamiliar story. The babies quickly increased their sucking in the pattern that enabled them to hear the familiar story. Thus, these newborns apparently recognized and preferred the rhythmic patterns from the story they had heard in the womb.

Over the course of the first few postnatal months, infants make similarly impressive strides in figuring out how their native language works. By four months of age, they recognize the sounds of their own names and prefer listening to them over unfamiliar names. Infants also become attuned to the individual speech sounds characteristic of their native language.

While newborns and young infants are able to perceive the sounds of languages from around the world, they rapidly home in on the sounds of their native language. By six months of age, infants learning English perceive vowels differently than infants learning Swedish or Japanese. These data show a remarkable attunement to the native language, unfolding many months before infants produce these distinctions in their own speech (for an overview, see Kuhl, 2004).

In the Infant Learning Lab, we have been particularly interested in how infants solve a different language-learning problem: finding the boundaries between words. Unlike written language, where the boundaries between words are demarcated by spaces, spoken language does not contain reliable pauses as cues to word boundaries. This is easy to observe when listening to a foreign language; speech appears to fly by very quickly, with no breaks other than breaths. Even when speaking to infants, however, we do not place consistent pauses between words. This raises a fascinating problem in child development: how do infants figure out where words begin and end, given that they are not marked by clear acoustic boundaries? If learners can’t figure out where words begin and end, it will be very difficult to learn much else about the language (e.g., what words mean, or how words combine into sentences).

We have focused on a specific hypothesis about how infants might discover boundaries between words: statistical learning. The idea is that infants can use the statistical properties of speech to segment it into words. Consider a sequence like “pretty baby.” As native speakers, we know there is a boundary between “ty” and “ba,” but to a novice learner, those four syllables might each be a word. Or perhaps the four syllables are a single four-syllable word. How do learners figure out that the correct boundary lies between “ty” and “ba”?

Intuitively, syllables that are part of the same word follow one another more frequently than syllables that span a word boundary. Returning to the “pretty baby” example, syllables from the same word, like pre-tty, are more likely to co-occur than syllables spanning a word boundary, like ty-ba.
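
To make this intuition concrete, the following is a minimal sketch (written in Python purely for illustration; the toy syllable stream and the transitional_probability helper are invented here and are not materials or analysis code from the studies described in this article). It counts adjacent syllable pairs in a short stretch of hypothetical child-directed speech and estimates how predictably one syllable follows another:

    # Toy illustration only: estimate how predictably one syllable follows another.
    from collections import Counter

    # Hypothetical stream: "pretty baby pretty doggy nice baby pretty flower big doggy"
    syllables = ["pre", "tty", "ba", "by",
                 "pre", "tty", "do", "ggy",
                 "nice", "ba", "by",
                 "pre", "tty", "flo", "wer",
                 "big", "do", "ggy"]

    pair_counts = Counter(zip(syllables, syllables[1:]))   # counts of adjacent syllable pairs
    first_counts = Counter(syllables[:-1])                 # how often each syllable starts a pair

    def transitional_probability(a, b):
        """Estimated probability that syllable b immediately follows syllable a."""
        return pair_counts[(a, b)] / first_counts[a]

    print(transitional_probability("pre", "tty"))  # 1.0   -- within-word pair
    print(transitional_probability("ba", "by"))    # 1.0   -- within-word pair
    print(transitional_probability("tty", "ba"))   # ~0.33 -- pair spanning a word boundary

Even in this tiny made-up sample, within-word transitions are perfectly predictable while the transition across the word boundary is not, and that asymmetry is exactly what a statistical learner could exploit to posit a boundary between “ty” and “ba.”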

In a 1996 paper published in Science, we tested the hypothesis that infants are sensitive to syllable co-occurrence statistics (Saffran, Aslin, & Newport, 1996). To do so, we exposed 8-month-old infants to a novel language. This “artificial language” was created specifically for the experiment, ensuring that there were no cues to word boundaries other than the statistical properties of the speech: syllables that followed one another reliably were part of the same word, while syllables that did not predict one another spanned word boundaries. After two minutes of exposure to the artificial language, we tested infants’ listening preferences for words from the language versus sequences that spanned word boundaries. The results revealed a reliable pattern of preferences, indicating that the infants had picked up the statistical properties of the speech we had played for them.
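
As a rough sketch of the logic behind that design (the four nonsense words below are invented for this illustration and are not the actual stimuli from the 1996 study), one can concatenate made-up three-syllable words into a continuous stream with no pauses and confirm that only the transition statistics distinguish words from sequences that span word boundaries:

    # Illustrative simulation only; not the stimuli or analysis from Saffran et al. (1996).
    import random
    from collections import Counter

    random.seed(0)
    words = ["bupada", "dutibo", "pigola", "tukibe"]  # four invented trisyllabic "words"

    # Build a continuous syllable stream by stringing together randomly chosen words,
    # with no pauses or other acoustic cues marking where one word ends and the next begins.
    stream = []
    for _ in range(300):
        w = random.choice(words)
        stream.extend([w[0:2], w[2:4], w[4:6]])

    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])

    def tp(a, b):
        """Transitional probability that syllable b follows syllable a in the stream."""
        return pairs[(a, b)] / firsts[a]

    print(tp("bu", "pa"), tp("pa", "da"))  # within-word transitions: 1.0 and 1.0
    print(tp("da", "du"), tp("da", "pi"))  # across word boundaries: roughly 0.25 each

In a stream like this, a sequence such as bu-pa-da hangs together statistically, whereas a boundary-spanning sequence such as da-du-ti does not, which is the contrast the infants’ listening preferences revealed.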

Of course, statistics aren’t the only possible cue to word boundaries. Infants are also attuned to other properties of speech, including rhythm and pauses, which might provide information about word boundaries. In one follow-up study, we exposed infants to a new artificial language that had the pitch patterns characteristic of infant-directed speech (which tends to be far more sing-song than speech to adults, and which even newborns prefer over adult-directed speech). We found that infants were better at tracking the statistical properties of speech when it was presented with infant-directed pitch contours (Thiessen, Hill, & Saffran, 2005).

In more recent studies, we have found that infants similarly track the statistics of speech when presented with real languages with which they are unfamiliar, such as Italian (Pelucchi, Hay, & Saffran, 2009). Indeed, we have now run well over 100 experiments in the Infant Learning Lab, involving 1,000-2,000 Dane County infants each year. These studies have revealed remarkable language learning abilities in the lab.

The statistics that infants track facilitate numerous aspects of language learning, from discovering the sounds of the native language to mapping sounds to meanings to learning basic aspects of grammar. Moreover, these learning abilities are not limited to language. Infants show similar statistical learning abilities across multiple domains, from music to visual processing to understanding the actions of others. It thus appears that the ability to track statistical properties in the environment is a very basic aspect of cognitive functioning, one that supports many different aspects of child development.

Because our lab is housed at the Waisman Center, a research and clinical center focused on human development and developmental disabilities, we have had the opportunity to move beyond studies of typically developing infants and begin to examine the implications of our work for children facing a wide range of developmental challenges. The methods that we use in our research do not require any form of verbal response (given that we primarily study infants), and the motor responses we measure are very simple (head turns or eye movements). These methods are thus ideal for investigating processes of language development in infants and toddlers with linguistic, cognitive, and/or motor challenges.

Over the past five years, we have been involved in studies with numerous populations of young children, including late-talking toddlers, deaf toddlers with cochlear implants, toddlers with Down Syndrome, toddlers and children with Autism Spectrum Disorders, children with Fragile X Syndrome, and children with Specific Language Impairment. It is our hope that by learning more about typically developing language learners and applying our results to these populations of children, we will increase our understanding of their linguistic challenges and of potential intervention strategies.

To learn more about the Infant Learning Lab, including recent publications and opportunities to participate in ongoing research studies, please visit http://www.waisman.wisc.edu/infantlearning.

References

  1. DeCasper, A. J., & Spence, M. J. (1986). Prenatal maternal speech influences newborns’ perception of speech sounds. Infant Behavior and Development, 9, 133–150.
  2. Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831–843.
  3. Pelucchi, B., Hay, J. F., & Saffran, J. R. (2009). Statistical learning in a natural language by 8-month-old infants. Child Development, 80, 674–685.
  4. Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926–1928.
  5. Thiessen, E. D., Hill, E., & Saffran, J. R. (2005). Infant-directed speech facilitates word segmentation. Infancy, 7, 53–71.