A computer program that learns to decode the sounds of different
languages the way a baby does is shedding new light on how people learn to talk, researchers said on Tuesday.
They said the finding casts doubt on theories that babies are born
knowing all the possible sounds in all of the world's languages.
"The debate in language acquisition is around the question of how
much specific information about language is hard-wired into the brain
of the infant and how much of the knowledge that infants acquire
about language is something that can be explained by relatively
general purpose learning systems," said James McClelland, a
psychology professor at Stanford University in Palo Alto, California.
McClelland said his computer program supports the theory that babies
systematically sort through sounds until they understand the
structure of a language.
"The problem the child confronts is how many categories are there and
how should I think about it. We're trying to propose a method that
solves that problem," said McClelland, whose work appears in the
Proceedings of the National Academy of Sciences.
Expanding on some existing ideas, he and a team of international
researchers developed a computer model that resembles the brain
processes a baby uses when learning about speech.
He and his colleagues tested the model by exposing it to "training
sessions" built from lab recordings of mothers speaking to their
babies in English and in Japanese.
They found that the computer was able to learn basic vowel sounds
right along with the babies.
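The article does not describe the model's internals, but the kind of category learning it attributes to infants can be illustrated with a toy sketch: an online clustering procedure that, given sound tokens, gradually discovers how many distinct vowel categories the input contains. Everything below is an assumption for illustration only, not the authors' actual model: vowels are represented as two formant frequencies (F1, F2), the data are synthetic, and the learner is a simple online k-means rather than the mixture-estimation approach the researchers may have used.

```python
import random

def online_kmeans(points, k, lr=0.1, epochs=20, seed=0):
    """Toy online category learner (NOT the PNAS model).

    Each incoming sound token pulls its nearest prototype a little
    toward itself, so the prototypes drift to the centers of the
    vowel categories present in the input.
    """
    rng = random.Random(seed)
    protos = [list(p) for p in rng.sample(points, k)]
    for _ in range(epochs):
        for x, y in points:
            # Find the prototype nearest to this token.
            j = min(range(k),
                    key=lambda i: (protos[i][0] - x) ** 2 + (protos[i][1] - y) ** 2)
            # Nudge that prototype toward the token.
            protos[j][0] += lr * (x - protos[j][0])
            protos[j][1] += lr * (y - protos[j][1])
    return protos

# Synthetic "vowel tokens": two clusters in (F1, F2) formant space, in Hz.
# Rough /i/-like and /a/-like centers; values are illustrative only.
rng = random.Random(1)
tokens = ([(300 + rng.gauss(0, 20), 2300 + rng.gauss(0, 50)) for _ in range(100)]
          + [(700 + rng.gauss(0, 20), 1200 + rng.gauss(0, 50)) for _ in range(100)])
rng.shuffle(tokens)

protos = online_kmeans(tokens, k=2)
```

After training, the two prototypes settle near the centers of the two vowel clouds, mimicking how repeated exposure alone, with no built-in inventory of sounds, can carve continuous speech input into discrete categories.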