The First Word: The Search for the Origins of Language


It would not be possible for Kirby, or anyone for that matter, to sit down and calculate the ways in which thousands of generations of different individuals may have interacted, and this is what makes digital modeling such a powerful tool. It offers a strong contrast to the armchair models that linguists have used for many years. For example, mainstream linguistics saw language as taking place between an idealized speaker and an idealized hearer. These two were representatives of a population of individuals who spoke pretty much the same language and were basically identical to one another. But this model blurs the distinction between the population and its constituent individuals. Digital modeling allows researchers to account for individuals within language communities. Modeling, then, can consist of at least two tiers of interactions—between individual agents within a population and between populations of these agents.
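The two tiers can be made concrete with a few lines of code. The toy simulation below is an illustration of the general idea, not any published model; the variant names, population sizes, and migration rate are all invented. Agents within each population copy one another's word variant, while occasional migration couples the two populations.

```python
import random

random.seed(1)

# Toy two-tier model: agents align within a population, and
# populations occasionally exchange members (migration).

def make_population(size, variants):
    return [random.choice(variants) for _ in range(size)]

def within_population_step(pop):
    # A random hearer adopts a random speaker's variant.
    speaker, hearer = random.sample(range(len(pop)), 2)
    pop[hearer] = pop[speaker]

def between_population_step(pop_a, pop_b):
    # One agent migrates each way, carrying its variant along.
    i, j = random.randrange(len(pop_a)), random.randrange(len(pop_b))
    pop_a[i], pop_b[j] = pop_b[j], pop_a[i]

variants = ["kiki", "bouba"]
pop_a = make_population(20, variants)
pop_b = make_population(20, variants)

for step in range(2000):
    within_population_step(pop_a)
    within_population_step(pop_b)
    if step % 50 == 0:
        between_population_step(pop_a, pop_b)

# Neutral drift tends to push each population toward one variant;
# migration couples the two populations' trajectories.
print(pop_a.count("kiki"), pop_b.count("kiki"))
```

Even this minimal setup exhibits dynamics at both tiers at once: individual imitation events inside each population, and contact between populations.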

“If you look at the lifetimes of individuals, you see massive changes in there, from nothing to a full language user,” explained Kirby. “It’s a hugely complex process that leads from one state to another.[7] Then, on top of that, language changes in a community. So the new thing that’s emerging is this desire to link individuals with populations in the model directly, by saying, ‘Let’s put together lots of agents that are seriously individual, and see what happens when there is a population of these.’”

Because Kirby is working on a vast biological timescale, his models usually involve very simple, idealized aspects of language, like the ordering of words. “They almost seem trivial,” he said. Eventually, the models will become much more complex, and ideally the particular models that show how language might have evolved from its earliest beginning will mesh with models that show how languages have changed in more recent times—as, for example, how Latin changed into Italian, French, and other Romance languages.

Traditionally linguists have carved up the long history of language into language evolution and more recent language change. Language evolution examined how the human species developed the ability to speak with human language. Language change and growth studies focused on how that first language, once acquired, became thousands of different languages over tens of thousands of years. More and more computer modelers have come to believe that the process is more seamless than that, and language change is to some degree the same as language evolution. The obvious model here is biological life—in the same way that species, once formed, can keep on speciating, the process by which sound and meaning ratchet themselves up into language in the first place leads inevitably to the process by which that language becomes a multitude of languages.

“I would say,” Kirby explained, “that the same process or parts of the same process have to be going on. What’s tricky about modeling it is the timescales. They are so hugely different. To model biological evolution in a computer you obviously need thousands and thousands of generations, and currently the problem is getting a computer that has the resolution to look at very fine facts about language evolution or language change.”

In attempting to incorporate linguistic change in both individuals and populations, Kirby and other modelers like him are actually trying to tease out three different timescales and three different evolutionary processes that contribute to language evolution: two types of linguistic evolution—in the individual and in the population—and biological evolution, tracking how one species becomes another. “That’s what is unique about language,” said Kirby. “That is what makes it really special in the natural world and probably one of the most complex systems we know of—it’s dynamic and adaptive at all three different timescales, the biological, the cultural, and the individual. They are all operating together, and that’s where language comes from—out of that interaction.”

 

 

 

Kirby and a number of other researchers find one metaphor especially useful for thinking about language: imagine that it is a virus, a nonconscious life-form that evolves independently of the animals infected by it. Just as a standard virus adapts to survival in its physical environment, the language virus adapts to survival in its environment—a complicated landscape that includes the semi-linguistic mind of the infant, the individual mind of the speaking adult, and the collective mind of communicating humans.

According to Terrence Deacon, language and its human host are parasitic upon each other. “Modern humans need the language parasite in order to flourish and reproduce just as much as it needs humans to reproduce.”[8] It’s an analogy that goes straight to the heart of how much language means to us as a species. If some global disaster killed all humans, there would be no language left. If language suddenly became inaccessible to us, perhaps we would all die, too.

The most exciting implication of the language-as-virus metaphor is the finding that some features of language have less to do with the need of individuals to communicate clearly with one another than with the need of the language virus to ensure its own survival. That is, in the same way that the traits of a particular animal reflect its evolutionary adjustments to survival in a particular environment, so, too, do the features of language structure reflect its struggle to survive in its environment—the human mind. Reproduction is still the driving force of the evolutionary process, but it’s not our reproduction: it’s the reproduction of language itself.

If language is a virus and its properties are shaped by its drive to survive, then the traditional linguistic goal of reducing all language to a set of rules or parameters is misguided. As Deacon explained, “Languages are more like living organisms than mathematical proofs, so we should study them [in] the way we study organism structure, not as a set of rules.”[9] By this light the quirky grammars of the world’s languages make about as much sense as a pelican does, and English syntax is as elegant as, say, a panda. You can view any animal purely as a formal system, and you can describe it to a great extent using mathematics, but ultimately living organisms cannot be distilled into rule sets, though each is beautiful, elegant, and perfect in its own way.

If you accept the language-as-virus metaphor, you can’t backward-engineer a language-specific mental device simply by looking at the language we have now. If language structure is the result of cultural evolution and accretion, then it’s a historical process as well as a mental one. Accordingly, one of Kirby’s models showed that a language that has the basic property of compositionality—that is, the meaning of an utterance results from the meaning of its parts and the way they are structured—is going to be more successful at surviving than one that doesn’t.[10] Languages that don’t develop compositionality are not robust, and they soon die.

“In the model where we don’t allow the agents to see all of the language,” said Kirby, “structure evolves. The explanation for this is that a structured language can be learned even if you don’t see all of it, because you can generalize pieces of it. Whereas an unstructured language, well, you can imagine a big dictionary where every single thing you might ever want to say is listed with a different word. To learn that language, you’d have to see every single word and learn it. But a language that puts words together and allows you to combine them in different ways can be learned from a much smaller set of examples.”
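Kirby's point about generalization can be made concrete with a toy calculation. In this sketch (the objects, actions, and sample size are invented for illustration, and it is far simpler than his actual model), a learner of a holistic language can express only the utterances it has memorized, while a learner of a compositional language recombines the parts it has seen:

```python
import itertools
import random

random.seed(0)

# Toy meaning space: (object, action) pairs, 25 meanings in all.
objects = ["dog", "cat", "bird", "fish", "ape"]
actions = ["runs", "eats", "sleeps", "sings", "hides"]
meanings = list(itertools.product(objects, actions))

def holistic_coverage(sample):
    # Holistic language: every meaning has its own unrelated word,
    # so the learner can express only what it has actually seen.
    return len(set(sample)) / len(meanings)

def compositional_coverage(sample):
    # Compositional language: utterances decompose into an object part
    # and an action part, so seen parts recombine freely.
    seen_objects = {o for o, _ in sample}
    seen_actions = {a for _, a in sample}
    covered = sum(1 for o, a in meanings
                  if o in seen_objects and a in seen_actions)
    return covered / len(meanings)

sample = random.sample(meanings, 10)   # the learner sees 10 of 25 utterances
print(holistic_coverage(sample))       # exactly 10/25 = 0.4
print(compositional_coverage(sample))  # at least 0.4, usually far higher
```

The gap between the two coverage figures is exactly Kirby's "big dictionary" problem: without structure, every expressible meaning must be witnessed individually.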

As with biological evolution, the road to survival is not straightforward. “What happens,” explained Kirby, “if you’re forced to learn from a small set of examples is that initially you do very badly, but the language itself adapts in such a way that it is more easily learned by you. We see it happening before our eyes in the simulations. The languages change, and eventually, somewhere along the line, a little pattern will emerge, and that will be learned much more easily than all the other ones. So over time you get this adaptation to the learner by the language. It makes total sense psychologically—the language can’t survive if it’s not learned.”
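The "language adapts to the learner" dynamic can be sketched as a minimal iterated-learning loop, inspired by, but much simpler than, Kirby's simulations; the token alphabet, bottleneck size, and generation count here are arbitrary. Each generation learns from a limited sample of the previous generation's utterances and generalizes from parts where it can, so reusable parts gradually crowd out one-off words:

```python
import random

random.seed(42)

# Toy iterated learning: utterances are (object-word, action-word) pairs,
# and the "bottleneck" is that each learner sees only part of the language.

objects = list(range(5))
actions = list(range(5))
meanings = [(o, a) for o in objects for a in actions]

def random_token():
    return "".join(random.choice("ptkaiu") for _ in range(3))

# Generation 0: a holistic language, every meaning a fresh token pair.
language = {m: (random_token(), random_token()) for m in meanings}

def learn(teacher_language, bottleneck=12):
    observed = dict(random.sample(sorted(teacher_language.items()),
                                  bottleneck))
    obj_words = {o: w for (o, a), (w, _) in observed.items()}
    act_words = {a: w for (o, a), (_, w) in observed.items()}
    learner = {}
    for o, a in meanings:
        if (o, a) in observed:
            learner[(o, a)] = observed[(o, a)]
        else:
            # Generalize from seen parts if possible, otherwise innovate.
            learner[(o, a)] = (obj_words.get(o, random_token()),
                               act_words.get(a, random_token()))
    return learner

def distinct_object_words(lang):
    return len({w for (w, _) in lang.values()})

for generation in range(30):
    language = learn(language)

# Repeated transmission through the bottleneck favors reuse: far fewer
# distinct object-words survive than the 25 we started with.
print(distinct_object_words(language))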

In 1990 Steven Pinker proposed that our language ability derives from the fact that it is used for communication. Does the virus metaphor completely contradict this approach to language evolution? It doesn’t have to. Pinker argued that the appearance of design was evidence of the hand of evolution. This remains relevant for accounts that focus on the survival needs of language. The strong design constraints shown by language in Kirby’s model still result from evolution—but the object undergoing that particular evolution is language, not us.[11]

 

 

 

Kirby, Deacon, and the computational modeler Morten Christiansen, a professor at Cornell University in New York State, are especially interested in why language is learned so readily by children. Their approach flips the old notion of the poverty of the stimulus on its head: if language is driven to survive, and the language learners of the world are children, language must be adapted to the quirks and traits of the child’s mind. As Deacon puts it, language is designed to be “particularly infective for the child brain.”

So if language in its very structure has all or most of the clues that children require to learn it, then the need for some kind of language organ starts to look dubious. In its strongest version, this approach means there is no support for the argument that grammar is so complicated that children simply can’t learn it without a grammar-specific device.

It makes more sense to talk about language learning than about language acquisition, argue Kirby and Christiansen.[12] Their point is simply this: Children do, of course, readily learn language. Instead of beginning with the assumption that this is an impossible task that requires extra explanation, they simply begin by asking: How do they do it?

There is inevitably a human predisposition to language learning. “It’s absolutely true that there is an innate component to the process of language learning,” said Kirby. “It would be ludicrous to say otherwise. At the most basic level, not every species can speak the languages that we speak, so there must be something there. But in a more subtle sense, we know that we must have some biases. We can’t learn everything. There is no such thing as a general-purpose learner, a learner that can be exposed to any task and learn it. So yes, there’s linguistic innateness.”

The question remains: How much of this bias to learning language is actually language-specific? Said Kirby, “If you added up all of the influence of our learning bias, and all the things that give rise to our learning bias, then the number of things that aren’t specific to language but still affect the way we learn language vastly outweigh any language specifics within there.” It’s more accurate, explain Kirby and Christiansen, to talk of universal bias than of universal grammar.[13]

 

 

 

Another researcher takes up, almost literally, where Jean-Jacques Rousseau left off. Luc Steels heads the Sony Computer Science Laboratories in Paris, which is only a few blocks from the Panthéon, where Rousseau is buried. More than two hundred years after Rousseau wrote about the origin of language, Steels is spearheading a research program that may help us get closer to the answer. He asks: “What are the mental mechanisms and patterns of interaction that you need to get a communication system off the ground?”

Steels’s way of imagining the first language users is considerably more practical than his intellectual forebear’s. He manages a group of graduate and postdoctoral students, and together they are building creatures—not unlike the inhabitants of Rousseau’s primeval forest, the Adam and Eve of language.

In the beginning, Steels’s robots had only a single eye and a brain, and their primordial jungle was limited to some basic shapes and colors. Their eyes were black cameras sitting on top of large tripods. Their brains were computers, and their world was a small whiteboard, at which they stared.

Steels made his creatures look at shapes and think about what they saw, and then he encouraged them to talk to one another about it. He is trying to build a linguistic system from the bottom up, as it happened once before, sometime in the last six million years.

Embodiment is crucial. Steels is not modeling language, or a person, or a brain, or a world. His goal is to ground his experiments in hardware that is able to perceive the real physical world. If you go to the lab, you can watch Steels set up his robots and provoke a ricochet of signals between the bodies and the things they perceive; soon a cascade of meaning develops, and a linguistic system emerges before your eyes. Creating a linguistic animal means that, in this context, communication is not a separate, self-contained program, but instead is profoundly shaped by the development of the creature and its world. “These agents are as real as you can get,” said Steels. “They are artificial in the sense that they are built by us, but they operate for real in the real world, just like artificial light gives real light with which you can read a book in the dark.”

Steels’s fundamental motivation is to explore the design of an emerging communication system. “The approach I take,” he explained, “is a bit like the Wright brothers, who were trying to understand how flight was possible by building physical aircraft and experimenting with it. They did not try to model birds, nor did they run computer simulations (which would have been difficult at the time…). Once you have a theory of aerodynamics, you can take a fresh look at birds and better understand why the wings have a certain shape or why a particular size of bird has the wing span it does.” With such insight into the emergence of mental mechanisms underlying a communication system, a dialogue with researchers such as anthropologists, archaeologists, neurobiologists, and historical linguists may contribute ideas to the puzzle of human language evolution.

In most of Steels’s “talking heads” experiments, the robots’ brains consisted of memory and the ability to produce wordlike sounds. The robots’ main way of sensing the world was through vision. Their eyes were directed at simple scenes and objects—a plastic horse, a wooden mannequin—and each robotic individual was forced to find a way to recognize color, segment images, and identify these specific objects. In simpler versions of the experiment, the world at which the robots gazed was a whiteboard on which a variety of colored geometric shapes were fastened. The basic idea is that there is a cycle of back-and-forth between perception of the world and production of language, as the robots adapt and respond to a changing environment in the same way that humans have to.
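The core of such an experiment can be sketched in software as a "naming game," with disembodied agents standing in for Steels's robots; the object names, population size, and round count below are invented for illustration. A speaker names an object, inventing a word if it has none, and the hearer either recognizes the word (in which case both prune their competing synonyms) or adopts it:

```python
import random

random.seed(7)

# Toy naming game in the spirit of Steels's talking-heads experiments.

OBJECTS = ["horse", "mannequin", "red-triangle"]

class Agent:
    def __init__(self):
        # Each agent keeps a set of candidate words per object.
        self.lexicon = {obj: set() for obj in OBJECTS}

    def name(self, obj):
        if not self.lexicon[obj]:
            # Invent a new word when the object has no name yet.
            self.lexicon[obj].add(
                "".join(random.choice("bdgaou") for _ in range(4)))
        return random.choice(sorted(self.lexicon[obj]))

def play_round(agents):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)          # joint attention on one object
    word = speaker.name(obj)
    if word in hearer.lexicon[obj]:
        # Success: both agents discard competing synonyms.
        speaker.lexicon[obj] = {word}
        hearer.lexicon[obj] = {word}
    else:
        hearer.lexicon[obj].add(word)     # failure: hearer adopts the word

agents = [Agent() for _ in range(10)]
for _ in range(3000):
    play_round(agents)

# With enough rounds, such populations tend to converge on a single
# shared word per object, with no central authority coordinating them.
shared = all(len({tuple(sorted(a.lexicon[obj])) for a in agents}) == 1
             for obj in OBJECTS)
print(shared)
```

What the embodied versions add, and what this sketch leaves out, is the hard perceptual work: real robots must first segment the scene and agree on what they are looking at before any word can take hold.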
