
Technology thinkers, including George Dyson and Kevin Kelly, have proposed that information is a life-form. The computer code that carries information replicates itself and grows according to biological rules. But intelligence, well, that’s something else. It’s a feature of complex organisms, and it doesn’t come about by accident.

At his home in California, I had asked Eliezer Yudkowsky if intelligence could emerge from the exponentially growing hardware of the Internet, from its five trillion megabytes of data, its more than seven billion connected computers and smartphones, and its seventy-five million servers. Yudkowsky had grimaced, as if his brain cells had been flooded by an acid bath of dumb.

“Flatly, no,” he said. “It took billions of years for evolution to cough up intelligence. Intelligence is not emergent in the complexity of life. It doesn’t happen automatically. There is optimization pressure with natural selection.”

In other words, intelligence doesn’t arise from complexity alone. And the Internet lacks the kinds of environmental pressures that in nature favored some mutations over others.

“I have a saying that there’s probably less interesting complexity in the entire Milky Way outside Earth than there is in one earthly butterfly because the butterfly has been produced by processes that retain their successes and build on them,” Yudkowsky said.

I agree that intelligence wouldn’t spontaneously blossom from the Internet. But I think agent-based financial modeling could soon change everything about the Net itself.

Once upon a time, when Wall Street analysts wanted to predict how the market would behave, they turned to a series of rules prescribed by macroeconomics. These rules consider factors like interest rates, employment data, and “housing starts,” or the number of new houses being built. Increasingly, however, Wall Street has turned to agent-based financial modeling. This new science can computationally simulate the entire stock market, and even the economy, to improve forecasting.

To model the market, researchers make computer models of the entities buying and selling stocks—individuals, firms, banks, hedge funds, and so on. Each of these thousands of “agents” has different goals and decision rules, or strategies, for buying and selling. They in turn are influenced by ever-changing market data. The agents, powered by artificial neural networks and other AI techniques, are “trained” on real-world information. Acting in unison, and updated with live data, the agents create a fluid portrait of the living market.

Then, analysts test scenarios for trading individual securities. And through evolutionary programming techniques, the market model can “step forward” a day or a week, giving analysts a good idea of what the market will look like in the future, and what investment opportunities might appear. This “bottom-up” approach to creating financial models embodies the idea that simple behavioral rules of individual agents generate complex overall behavior. Generally speaking, what’s true of Wall Street is also true of beehives and ant colonies.
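
To make the bottom-up idea concrete, here is a minimal sketch of an agent-based market in Python. Every name, rule, and number in it is invented for illustration (real quant models train their agents on live data and far richer strategies), but the mechanism is the same: simple individual rules, stepped forward in time, produce complex aggregate behavior.

```python
import random

class Agent:
    """One market participant with a simple decision rule, a toy
    stand-in for the trained, neural-net-driven agents described above."""
    def __init__(self, style):
        self.style = style  # "momentum" chases trends; "contrarian" fades them

    def order(self, prices):
        if len(prices) < 2:
            return random.choice([-1, 1])   # no history yet: trade at random
        trend = prices[-1] - prices[-2]
        if self.style == "momentum":
            return 1 if trend > 0 else -1   # buy rises, sell dips
        return -1 if trend > 0 else 1       # contrarian does the opposite

def step_market(agents, prices, impact=0.05):
    """Advance the model one day: net order flow nudges the price."""
    net_flow = sum(agent.order(prices) for agent in agents)
    prices.append(prices[-1] * (1 + impact * net_flow / len(agents)))

agents = [Agent(random.choice(["momentum", "contrarian"])) for _ in range(1000)]
prices = [100.0]
for _ in range(250):            # "step forward" one trading year
    step_market(agents, prices)
print(f"simulated closing price: {prices[-1]:.2f}")
```

Run it a few times: depending on the random mix of agents, the toy market trends steadily or saws back and forth, aggregate behavior that no individual rule spells out.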

What begins to take shape in the supercomputers of the financial capitals of the world are virtual worlds steeped in real-world detail, and populated by increasingly intelligent “agents.” Richer, more nuanced forecasting equals bigger profits. So powerful economic incentives fuel the drive for increasing the models’ precision at every level.

If it’s useful to create computational agents that exercise complex stock-buying strategies, wouldn’t it be *more* useful to create computational models with the full range of human motivations and abilities? Why not create AGIs, or human-level intelligent agents? Well, that’s what Wall Street is doing, but by another name—agent-based financial models.

That financial markets will give rise to AGI is the position of Dr. Alexander D. Wissner-Gross. Wissner-Gross has the kind of résumé that makes other inventors, scholars, and polymaths linger near open elevator shafts. He’s authored thirteen publications, holds sixteen patents, and graduated from MIT with a triple major in physics, electrical science and engineering, and mathematics, finishing first in his class at MIT’s School of Engineering. He’s got a Ph.D. in physics from Harvard and won a big prize for his thesis. He has founded and sold companies, and according to his résumé, won “107 major distinctions,” which are probably not of the “employee of the week” variety. He’s now a Harvard research fellow, trying to commercialize his ideas about computational finance.

And he thinks that while brilliant theorists around the world are racing to create AGI, it might appear fully formed in the financial markets, as an unintended consequence of creating computational models of large numbers of humans. Who’d create it? “Quants,” Wall Street’s name for computer finance geeks.

“It’s certainly possible that a real living AGI could emerge on the financial markets,” Wissner-Gross told me. “Not the result of a single algorithm from a single quant, but an aggregate of all the algorithms from lots of hedge funds. AGI may not need a coherent theory. It could be an aggregate phenomenon. If you follow the money, finance has a decent shot at being the primordial ooze out of which AGI emerges.”

To buy this scenario you have to believe that there’s a lot of money fueling the creation of better and better financial modeling. And in fact there is—anecdotally, more money than anyone else is spending on machine intelligence, perhaps even more than DARPA, IBM, and Google can throw at AGI. That translates into more and better supercomputers and smarter quants. Wissner-Gross said quants use the same tools as AI researchers—neural nets, genetic algorithms, automatic reading, hidden Markov models, you name it. Every new AI tool gets tested in the crucible of finance.

“Whenever a new AI technique is developed,” Wissner-Gross told me, “the first question out of anyone’s mouth is ‘Can you use it to trade stocks?’”

Now imagine you’re a high-powered quant with a war chest big enough to hire more quants and buy more hardware. The hedge fund you work for is running a great model of the Street, populated with thousands of diverse economic agents. Its algorithms interact with those of other hedge funds—they’re so tightly coupled that they rise and fall together, seeming to act in concert. According to Wissner-Gross, market observers have suggested that some seem to be *signaling* each other across Wall Street with millisecond trades that occur at a pace no human can track (these are HFTs, or high-frequency trades, discussed in chapter 6).

Wouldn’t the next logical step be to make your hedge fund reflective? That is, perhaps your algorithm shouldn’t automatically trigger sell orders based on another fund’s massive sell-off (which is what happened in the Flash Crash of May 2010). Instead it would perceive the sell-off and see how it was affecting other funds, and the market as a whole, before making its move. It might make a different, better move. Or maybe it could go one better, and simultaneously run a very large number of hypothetical markets, and be prepared to execute one of many strategies in response to the right conditions.
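
A toy version of that reflective loop might look like this in Python; the drift figures, the volatility, and the three-action menu are all invented for illustration, and a real fund’s scenario engine would be vastly richer:

```python
import random

def rollout(price, drift, steps=20):
    """One hypothetical market: drift plus noise, twenty ticks forward.
    All numbers are invented for illustration."""
    for _ in range(steps):
        price *= 1 + random.gauss(drift, 0.01)
    return price

def reflective_decision(price, n_worlds=1000):
    """Instead of reflexively joining a sell-off, run many hypothetical
    markets (is the crash persistent, or a transient panic?) and pick
    the move with the best average profit across them."""
    scenarios = [-0.005, 0.002]                # persistent slide vs. rebound
    futures = [rollout(price, random.choice(scenarios))
               for _ in range(n_worlds)]
    avg_future = sum(futures) / n_worlds
    expected_pnl = {
        "sell": price - avg_future,            # sell now, buy back later
        "hold": 0.0,
        "buy":  avg_future - price,            # buy the dip, sell later
    }
    return max(expected_pnl, key=expected_pnl.get)

print(reflective_decision(100.0))
```

The point is the structure: simulate many futures per candidate action, then act on the comparison rather than on reflex.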

In other words, there are huge financial incentives for your algorithm to be self-aware—to know exactly what it is and model the world around it. That is a *lot* like AGI. This is without a doubt the way the market is headed, but is anyone cutting to the chase and building AGI?

Wissner-Gross didn’t know. And he might not tell you if he did. “There are strong mercantile impulses to keep secret any seriously profitable advances,” he said.

Of course. And he’s not just talking about competition among hedge funds, but a kind of natural selection among algorithms. Winners thrive and pass on their code. Losers die. The market’s evolutionary pressure would speed the development of intelligence, but not without a human quant’s guiding hand. Yet.
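
A cartoon of that selection pressure fits in a few lines of Python. Here the “algorithms” are just momentum weights, the “market” is a random walk with mild real momentum in it, and fitness is simulated profit; all of it is invented for illustration:

```python
import random

def fitness(strategy, n_days=250):
    """Toy profit and loss: the strategy is a single momentum weight,
    scored on a price series that has mild real momentum in it."""
    pnl, trend = 0.0, 0.0
    for _ in range(n_days):
        move = random.gauss(0.3 * trend, 1.0)   # today echoes yesterday a bit
        pnl += strategy * trend * move          # bet in proportion to the trend
        trend = move
    return pnl

population = [random.uniform(-1, 1) for _ in range(100)]
for generation in range(50):
    ranked = sorted(population, key=fitness, reverse=True)
    winners = ranked[: len(ranked) // 2]              # winners thrive...
    offspring = [w + random.gauss(0, 0.05) for w in winners]
    population = winners + offspring                  # ...losers die
print(f"surviving momentum weight: {sum(population) / len(population):+.2f}")
```

Generation after generation, the profitable weights survive and mutate; the unprofitable ones vanish, exactly the dynamic Wissner-Gross describes, minus the human hand choosing what counts as fitness.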

And an intelligence explosion would be opaque in the computational finance universe, for at least four reasons. First, like many cognitive architectures, it would probably use neural networks, genetic programming, and other “black box” AI techniques. Second, the high-bandwidth, millisecond-fast transmissions take place faster than humans can react—look at what happened during the Flash Crash. Third, the system is incredibly complex—there’s no one quant, or even a group of quants (a quantum? a gaggle? what’s the quants’ collective noun?), who can explain the algorithm ecosystem of Wall Street and how the algorithms interact.

Finally, if a formidable intelligence emerged from computational finance, it would almost certainly be kept secret so long as it was making money for its creators. That’s four levels of opacity.

To sum up, AGI could arise from Wall Street. The most successful algorithms are kept secret by the quants who lovingly code them, or the companies that own them. An intelligence explosion would be invisible to most if not all humans, and probably unstoppable anyway.

The similarities between computational finance and AGI research don’t end there. Wissner-Gross has another astonishing proposal. He claims that the first strategies to control AGI might arise from measures now proposed to control high-frequency trading. Some of them sound promising.

*Market circuit breakers* would cut off hedge fund AIs from the outside world in case of emergency. They’d detect cascading algorithm interactions, like those behind the 2010 Flash Crash, and unplug the machines.
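
The logic is simple enough to sketch in a few lines of Python; the window and the seven percent threshold here are invented for illustration (real exchange breakers use tiered percentage declines from a reference price):

```python
def breaker_tripped(prices, window=60, max_drop=0.07):
    """Minimal circuit breaker: halt all algorithmic trading if the
    index falls more than max_drop within the last `window` ticks."""
    if len(prices) < 2:
        return False
    recent = prices[-window:]
    drawdown = (max(recent) - prices[-1]) / max(recent)
    return drawdown >= max_drop        # True means: cut the machines off

# A Flash Crash-style plunge, nine percent in minutes, trips the breaker:
calm = [100.0 - 0.01 * t for t in range(60)]
crash = calm + [99.0, 96.0, 91.0]
print(breaker_tripped(calm), breaker_tripped(crash))   # False True
```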

*The Large Trader Rule* requires detailed registration of AIs, along with full human organization charts. If this sounds like a prelude to large government intervention, it is. Why not? Wall Street has proven again and again that as a culture it cannot behave responsibly without strenuous regulation. Is that also true of AGI developers? Without a doubt. There’s no moral merit badge required for studying AGI.

*Pre-trade testing of algorithms* could simulate algorithms’ behavior in a virtual environment before they were let loose on the market. *AI Source Code audits* and *Centralized AI Activity Recording* aim to anticipate errors, and to facilitate after-action analysis following an accident like the 2010 Flash Crash.
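
Here is a bare-bones sketch of pre-trade testing combined with centralized activity recording; the random-walk market, the loss limit, and the log format are all invented for illustration:

```python
import logging
import random

logging.basicConfig(filename="ai_activity.log", level=logging.INFO)

class VirtualMarket:
    """Random-walk stand-in for a full simulated trading environment."""
    def __init__(self, price=100.0):
        self.price = price
    def next_price(self):
        self.price *= 1 + random.gauss(0, 0.001)
        return self.price

def pretrade_test(decide, n_ticks=10_000, loss_limit=-50.0):
    """Certify a candidate algorithm in simulation before it touches
    real money, logging every order for after-action audits. `decide`
    maps a price to an order: -1 sell, 0 hold, +1 buy."""
    market = VirtualMarket()
    position, pnl, last = 0, 0.0, market.next_price()
    for tick in range(n_ticks):
        price = market.next_price()
        pnl += position * (price - last)        # mark to market
        order = decide(price)
        position += order
        logging.info("tick=%d price=%.4f order=%+d pnl=%.2f",
                     tick, price, order, pnl)
        last = price
        if pnl < loss_limit:
            return False    # fails certification: unsafe in simulation
    return True

# e.g., a naive dip-buyer must survive the virtual market first:
print(pretrade_test(lambda p: 1 if p < 100 else -1))
```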

But look back at the four levels of opacity mentioned earlier, and see if these defenses, even if they were fully implemented, sound anything like foolproof to you.

*   *   *

As we’ve seen, Vinge took the baton from I. J. Good and gave the intelligence explosion important new attributes. He considered alternate routes to achieving it besides the neural nets Good anticipated, and pointed out the possibility, even probability, of human annihilation. Most important, perhaps, Vinge gave it a name—a singularity.

Naming things, as Vinge, author of the seminal science-fiction novella *True Names*, well knows, is a powerful act. Names stick on the lips, lodge in the brain, and hitchhike across generations. In the book of Genesis, theologians propose, naming everything on Earth was important because a rational creature was about to share the stage God made, and would make use of names thereafter. Lexical growth is an important part of childhood development—without language the brain doesn’t develop normally. It seems unlikely that AGI will be possible without language, without nouns, without names.

Vinge named the singularity to designate a scary place for humans to be, an unsafe proposition. His definition of the singularity is metaphorical—the event horizon of a black hole, inside which gravitational forces are so strong that not even light can escape. We cannot know its essence, and it was named that way on purpose.

Then suddenly, all that changed.

To the idea of a singularity as espoused by Vinge, Ray Kurzweil added a dramatic catalyst that shifts this whole conversation into hyperdrive, and brings into sharper focus the catastrophic danger ahead: the exponential growth of computer power and speed. It’s because of this growth that you should cast a jaundiced eye on anyone who claims human-level machine intelligence won’t be achieved for a century or more, if at all.

Per dollar spent, computers have increased in power by a billion times in the last thirty years. In about twenty years a thousand dollars will buy a computer a million times more powerful than one today, and in twenty-five years a *billion* times more powerful than one today. By about 2020 computers will be able to model the human brain, and by 2029 researchers will be able to run a brain simulation that’s every bit as intellectually and emotionally nuanced as a human mind. By 2045, human and machine intelligence will have increased a *billionfold*, and will develop technologies to defeat our human frailties, such as fatigue, illness, and death. In the event we survive it, the twenty-first century won’t see a century’s worth of technological progress, but 20,000 years’ worth.

This juggernaut of projections and analysis is Kurzweil’s, and it’s the key to understanding the third and reigning definition of the singularity—his. It’s at the heart of Kurzweil’s Law of Accelerating Returns, a theory about technological progress that Kurzweil didn’t invent, but pointed out, in much the same way Good anticipated the intelligence explosion and Vinge warned of a coming singularity. What the Law of Accelerating Returns means is that the projections and advances we’re discussing in this book are hurtling toward us like a freight train that doubles its speed every mile, then doubles again. It’s very hard to perceive how quickly it will get here, but suffice it to say that if that train were traveling twenty miles an hour by the end of the first mile, just fifteen miles later it’ll be traveling more than 650,000 miles an hour. And it’s important to note that Kurzweil’s projections aren’t just about advances in technology hardware, such as what’s inside a new iPhone, but advances in the technology arts, like developing a unified theory of artificial intelligence.
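
The train arithmetic is easy to check: fifteen doublings from twenty miles an hour.

```python
# The freight-train analogy, worked out: the speed doubles with each
# mile traveled after the first.
speed_mph = 20
for mile in range(15):      # fifteen miles past the first
    speed_mph *= 2
print(speed_mph)            # 655360 -- more than 650,000 miles an hour
```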

But here’s where Kurzweil and I differ. Instead of leading to a kind of paradise, as Kurzweil’s aggregate projections assert, I believe the Law of Accelerating Returns describes the shortest possible distance between our lives as they are and the end of the human era.

 

Chapter Nine

The Law of Accelerating Returns
