Authors: Randolph M. Nesse
Physicians first recognized the dangers of high levels of bilirubin in those babies whose blood cells have an Rh antigen that is attacked by their mother’s antibodies. The rapid breakdown of blood cells and resulting high bilirubin levels sometimes caused permanent brain damage. Today this can usually be prevented by administering substances that prevent the mother from developing Rh antibodies or by giving the baby an exchange transfusion at birth. But many babies who do not have Rh antigens also have visible jaundice at birth. To prevent any possibility of brain damage, such babies are often treated with exposure to bright light, which changes the bilirubin in the skin to a form that can be excreted in the urine, thus hastening the disappearance of jaundice.
So far it looks as if the high bilirubin levels at birth are simply a glitch in the mechanism, one we can fortunately circumvent by routine medical treatment. John Brett at the University of California at San Francisco and Susan Niermeyer at the Children’s Hospital in Denver have taken a more careful evolutionary look at this situation. They note that the first breakdown product of hemoglobin is biliverdin, a water-soluble chemical that is excreted directly in birds, amphibians, and reptiles. In mammals, however, biliverdin is converted to bilirubin, which is then transported throughout the body bound to the blood protein albumin. Furthermore, bilirubin levels at birth are under partial genetic control and therefore could be lowered by natural selection if that were beneficial. This led Brett and Niermeyer to suspect that high bilirubin levels at birth might be adaptive. As they put it, “Given that all babies will be jaundiced well above the adult level within the first postnatal week and over half will be visibly jaundiced, it seems difficult to imagine that something is wrong with all of these infants.” Further investigation revealed that bilirubin is an effective scavenger of the free radicals that damage tissues by oxidation. At birth, when the baby must suddenly start breathing, the arterial oxygen concentration becomes three times as great, with concomitant increases in damage from free radicals. Adult levels of defenses against free radicals are implemented only gradually during the first weeks of life, as the bilirubin levels decrease. If Brett and Niermeyer are correct, we need to rethink our treatment of jaundice of the newborn, perhaps saving millions of dollars in unnecessary treatment each year.
The risks of light treatment have been inadequately investigated, but we know that color vision impairments can result from continuous bright light in the first few days after birth. We want to make it clear that the adaptive interpretation of Brett and Niermeyer has not been widely accepted and strongly caution parents against refusing to let their babies have light treatment if their doctors deem it necessary. It would be worthwhile, however, for parents to ask questions and to get second opinions, and for scientists to initiate studies to provide the decisive answers.
The baby is home now, and the wonderful joy is punctuated, regularly, day and night, by hours of wails that cannot be ignored. It is easy enough to understand how crying benefits the baby. If it is hungry, thirsty, hot, cold, frightened, or in pain, the baby cries and a parent comes to meet its needs. A baby unable to cry might be seriously neglected. How does the baby’s cry affect parents? It gets on their nerves, to put it mildly. Parents do whatever needs to be done to stop the crying, at any time of day or night. Genes that make the cry aversive to parents are selected for because those same genes are in the child, who benefits from the parent’s discomfort and resulting aid. The parent suffers, but its genes in the baby benefit—a fine example of the actions of kin selection.
If the baby cries for a good reason, all to the good. But is all crying a call for necessary help? Often it is impossible to find any cause at all, and yet nothing seems to stop the baby’s crying. This is the most common reason new mothers consult their pediatricians, who usually call the problem “colic” despite little evidence that gastrointestinal difficulties are responsible. Ronald Barr, a pediatrician at McGill University, has made an intensive study of infant crying. He finds that babies with supposed colic do not cry more often or at special times, just longer each time. This has led him to suggest that such crying is normal, although perhaps prolonged by modern practices such as long intervals between feedings. !Kung women in Africa carry their infants constantly and feed them whenever they cry, at least once and often three or four times per hour, for two minutes at each feeding. By contrast, American mothers feed their two-month-old infants approximately seven times a day, with an average of three hours between feedings. In an experimental study, Barr asked a group of mothers to carry their babies at least three hours per day. Mothers in that group reported that their babies cried only half as long as those whose mothers did not receive the special instructions.
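The gap between the two feeding schedules can be made concrete with a little arithmetic. The numbers below come from the figures quoted above; taking 3.5 feedings per hour as a midpoint for "three or four times per hour" is my own simplification, not a number from the text:

```python
# Rough arithmetic on the feeding schedules described above.
# ASSUMPTION: "three or four times per hour" is taken as a midpoint of 3.5.
kung_feeds_per_hour = 3.5
kung_interval_min = 60 / kung_feeds_per_hour        # minutes between !Kung feedings
american_interval_min = 3 * 60                      # "three hours between feedings"

print(round(kung_interval_min))                     # roughly 17 minutes apart
print(american_interval_min / kung_interval_min)    # American interval roughly 10x longer
```

On these assumptions, a !Kung infant waits about a quarter of an hour between feedings, while an American infant waits roughly ten times as long, which is the kind of novel-environment difference Barr's argument turns on.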
Barr suggests that frequent crying increases fitness by promoting bonding with the mother and by encouraging frequent feeding, which maintains lactation and prevents any competing pregnancy. This last argument again illustrates the conflict of interests between the parent and the offspring. The frequency of babies “spitting up” may be another instance in which the baby manipulates the mother, in this case to make more milk than is in her interests. Or “spitting up” may be explained as a result of unnaturally infrequent but larger feedings. An examination of the phenomenon in hunter-gatherer societies could provide an answer, but it is not the kind of thing that anthropologists routinely report.
Many a parent’s greatest fear is of going to wake the baby and finding it dead in the crib. Sudden infant death syndrome (SIDS) kills more babies than any other cause of death except accidents—1.5 per 1000 babies, or more than 5000 per year in the United States alone. The cause, however, remains unknown. James McKenna, an anthropologist from Pomona College, has investigated SIDS from an evolutionary and cross-cultural perspective and found that crib deaths are many times more frequent in modern societies than in tribal cultures. The SIDS rate is especially high, as much as ten times higher, in those cultures in which babies sleep apart from their parents instead of in the same bed. In a series of experiments that simultaneously measured the movements and brain waves of sleeping mothers and their babies, he found substantial relationships between the sleep cycles of mothers and babies who sleep together. He suggests that this coordination leads to intermittent arousals that sustain SIDS-vulnerable babies through periods when their breathing might otherwise cease. The more fundamental problem, cessation of breathing, may be related to the extreme immaturity of the human infant’s nervous system, the price of avoiding the danger of the birth of babies with too large a skull to fit through the pelvis. None of this is to say that SIDS is in any way normal, only that the tendencies that make some infants vulnerable to it may have been far less dangerous in a natural environment, where mothers usually sleep with their newborns.
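The two SIDS figures quoted above are consistent with each other, as a line of arithmetic shows; the annual birth count below is an assumed round figure, since the text gives the rate but not the number of births:

```python
# Sanity check of the SIDS figures quoted above.
# ASSUMPTION: about 4 million live births per year in the United States
# (a round figure; the text does not state the birth count).
sids_rate = 1.5 / 1000           # deaths per live birth, as quoted
annual_births = 4_000_000        # assumed
expected_deaths = sids_rate * annual_births
print(round(expected_deaths))    # about 6000, consistent with "more than 5000 per year"
```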
Eventually the mother begins to discourage the baby from nursing. In industrial societies, this usually occurs sometime in the first year, while in hunter-gatherer cultures nursing lasts an average of three to four years. The interval between births is critical to maximizing reproduction. If it is too short, the first infant may still need so much milk and effort that the next infant will not survive. If the mother waits too long, she is wasting her reproductive potential. As you might expect from our discussions of parent-offspring conflict, this is yet another instance in which the interests of the mother and the infant diverge. There will come a time, usually when an infant is two to four years old, when it is in the mother’s genetic interests to conceive again but in the baby’s interests to keep nursing and prevent her from having another baby. This is the weaning conflict, discussed by biologist Robert Trivers in his classic paper that first outlined the divergent interests of parents and their offspring. He noted that weaning conflicts have a natural end point. Eventually, the baby can do well enough with solid foods and less aid from the mother that it too will benefit more from having a baby brother or sister (who shares half its genes) than from continuing to monopolize its mother.
During the period of weaning conflict, how can the infant manipulate its mother to continue nursing? Here again Trivers had a brilliant insight. The infant, unable to force the mother to keep nursing, can only use deception, and the best deception is to convince the mother that it is in her best interest to let nursing continue. How can the baby accomplish such deception? Simply by acting younger and more helpless than it really is. Psychologists have long recognized this pattern and named it regression, but we believe Trivers has offered the first evolutionary explanation, with implications that are just beginning to be explored.
Parent-offspring conflicts don’t end with weaning; they just change their form. For a long period in childhood, conflicts are relatively routine and mild, but come adolescence, all hell breaks loose. Teenagers may want to do everything their own way and insist that no help of any sort is needed. Then, at the least difficulty, they are back into the regression act, apparently helpless and needy and asking for more than the parents want to give. This isn’t so surprising, really. It is just the last major episode of parent-offspring conflict in the long drama of development. In a few years the adolescent really will be independent and beginning to look longingly at a potential partner with whom to raise a family and start a new episode in that ongoing drama of adaptively modulated conflict and cooperation called sexual reproduction.
I sometimes hold it half a sin
To put in words the grief I feel:
For words, like Nature, half reveal
And half conceal the Soul within.
But, for the unquiet heart and brain,
A use in measured language lies;
The sad mechanic exercise,
Like dull narcotics, numbing pain.
—Alfred, Lord Tennyson, In Memoriam, canto V
A young woman recently came to the Anxiety Disorders Clinic at the University of Michigan, complaining of attacks of overwhelming fear that had come out of the blue several times each week for the past ten months. During these attacks, she experienced a sudden onset of rapid pounding heartbeats, shortness of breath, a feeling that she might faint, trembling, and an overwhelming sense of doom, as if she were about to die. A few years ago, such people usually insisted that they had heart disease, but this person, like so many now, had read about her symptoms and knew that they were typical of panic disorder. In the course of the evaluation it came out that she had experienced her first panic attacks at about the same time as she had begun an extramarital affair. When the doctor asked if there might be a connection, she said, “I don’t see what that has to do with it. Everything I read says that panic disorder is a disease caused by genes and abnormal brain chemicals. I just want the medicine that will normalize my brain chemicals and stop these panic attacks, that’s all.”
How times change! Twenty years ago, people who insisted that their anxiety was “physical” were often told that they were denying the truth in order to avoid painful unconscious memories. Now many psychiatrists would readily agree that depression or anxiety can be a symptom of a biological disease caused by brain abnormalities that need drug treatment. Some people, like the woman described above, so embrace this view that they are offended if the psychiatrist insists on attending to their emotional life. The opening lines of an influential review article summarize these changes:
The field of psychiatry has undergone a profound transformation in recent years. The focus of research has shifted from the mind to the brain … at the same time the profession has shifted from a model of psychiatric disorders based on maladaptive psychological processes to one based on medical diseases.
Strong forces have pushed the field of psychiatry to adopt this “medical model” for psychiatric disorders. The change began in the 1950s and 1960s with discoveries of effective drug treatments for depression, anxiety, and the symptoms of schizophrenia. These discoveries spurred the government and pharmaceutical companies to fund research on the genetic and physiological correlates of psychiatric disorders. In order to define these disorders so research findings from different studies could be compared, a new approach to psychiatric diagnosis was created, one that emphasizes sharp boundaries around clusters of current symptoms instead of continuous gradations of emotions caused by psychological factors, past events, and life situations. Academic psychiatrists focus increasingly on the neurophysiological causes of mental disorders. Their views are transmitted to residents in training programs and to practitioners via postgraduate medical seminars. Finally, with the rise of insurance funding for medical care during recent decades and the possibility of federal funding for universal medical coverage in the United States, organizations of psychiatrists have become insistent that the disorders they treat are medical diseases like all others and therefore deserve equal insurance coverage.
Are panic disorder, depression, and schizophrenia medical diseases just like pneumonia, leukemia, and congestive heart failure? In our opinion, mental disorders are indeed medical disorders, but not because they are all distinct diseases that have identifiable physical causes or because they are necessarily best treated with drugs. Instead, mental disorders can be recognized as medical disorders when they are viewed in an evolutionary framework. As is the case for the rest of medicine, many psychiatric symptoms turn out not to be diseases themselves but defenses akin to fever and cough. Furthermore, many of the genes that predispose to mental disorders are likely to have fitness benefits, many of the environmental factors that cause mental disorders are likely to be novel aspects of modern life, and many of the more unfortunate aspects of human psychology are not flaws but design compromises.