The preacher's lips flapped open and shut several times. Lawrence himself raised his eyebrows; where had it picked that up? He foresaw another evening spent interrogating the Debugger. He was always happy to receive such surprises from his creations, but it was also necessary to understand how they happened so he could improve them. Since much of the Intellect code was in the form of an association table, which was written by the machine itself as part of its day-to-day operation, this was never an easy task. Lawrence would pick a table entry and ask his computer what it meant. If Lawrence had been a neurosurgeon, it would have been very similar to stimulating a single neuron with an electrical current and asking the patient what memory or sensation it brought to mind.
The next interviewer was a reporter who quizzed the Intellect on various matters of trivia. She seemed to be leading up to something, though. "What will happen if the world's birth rate isn't checked?" she suddenly asked, after having it recite a string of population figures.
"There are various theories. Some people think technology will advance rapidly enough to service the increasing population; one might say in tandem with it. Others believe the population will be stable until a critical mass is reached, when it will collapse."
"What do
you
think?"
"The historical record seems to show a pattern of small collapses; rather than civilization falling apart, the death rate increases locally through war, social unrest, or famine, until the aggregate growth curve flattens out."
"So the growth continues at a slower rate."
"Yes, with a lower standard of living.
"And where do you fit into this?"
"I'm not sure what you mean. Machines like myself will exist in the background, but we do not compete with humans for the same resources."
"You use energy. What would happen if you
did
compete with us?"
Intellect 39 was silent for a moment. "It is not possible for Intellect series computers to do anything harmful to humans. Are you familiar with the 'Three Laws of Robotics'?"
"I've heard of them."
"They were first stated in the 1930's by a science writer named Isaac Asimov. The First Law is, 'No robot may harm a human being, or through inaction allow a human being to come to harm.'" Computers are not of course as perfect as some humans think we are, but within the limits of our capabilities, it is impossible for us to contradict this directive. I could no more knowingly harm a human than you could decide to change yourself into a horse."