The Next Decade

Author: George Friedman


Compounding the economic effects of a graying population will be an increasing life expectancy coupled with an attendant increase in the incidence of degenerative diseases. As more people live longer, Alzheimer’s disease, Parkinson’s disease, debilitating heart disease, cancer, and diabetes will become an overwhelming burden on the economy as more and more people require care, including care that involves highly sophisticated technology.

Fortunately, the one area of research that is amply funded is medical research. Political coalitions make federal funding sufficiently robust to move from basic research to technological application by the pharmaceutical and biotech industries. Still, the possibility of imbalance remains. The mapping of the genome has not provided rapid cures for degenerative diseases, nor has anything else, so over the next ten years the focus will be on palliative measures.

Providing such care could entail labor costs that will have a substantial drag on the economy. One alternative is robotics, but the development of effective robotics depends on scientific breakthroughs in two key areas that have not evolved in a long time: microprocessors and batteries. Robots that can provide basic care for the elderly will require tremendous amounts of computing power as well as enhanced mobility, yet the silicon chip is reaching the limits of miniaturization. Meanwhile, the basic programs needed to guide a robot, process its sensory inputs, and assign tasks can’t be supported on current computer platforms. There are a number of potential solutions, from biological materials to quantum computing, but work in these areas has not moved much beyond basic research.

Two other converging technological strands will get bogged down in the next decade. The first is the revolution in communications that began in the nineteenth century. This revolution derived from a deepening understanding of the electromagnetic spectrum, a scientific development driven in part by the rise of global empires and markets. The telegraph provided near-instantaneous communication across great distances, so long as the necessary infrastructure—telegraph lines—was in place. Analog voice communication in the form of the telephone followed, after which infrastructure-free communication developed in the form of wireless radio. This innovation subsequently divided into voice (radio) and video (television), which had a profound effect on the way the world worked. These media created new political and economic relations, allowing both two-way communications and centralized broadcast communications, a "one to many" medium that implicitly carried great power for whoever controlled the system. But the hegemony of centralized, one-to-many broadcasting has come to an end, overtaken by the expanded possibilities of the digital age. The coming decade marks the end of a sixty-year period of growth and innovation in even this most advanced and disruptive digital technology.

The digital age began with a revolution in data processing required by the tremendous challenges of personnel management during World War II. Data on individual soldiers was entered as nonelectronic binary code on computer punch cards for sorting and identification. After the war, the Defense Department pressed the transformation of this primitive form of computing into electronic systems, creating a demand for massive mainframes built around vacuum tubes. These mainframes entered the civilian market largely through the IBM sales force, serving businesses in everything from billing to payrolls.

After development of the transistor and the silicon-based chip, which allowed for a reduction in the size and cost of computers, innovation moved to the West Coast and focused on the personal computer. Whereas mainframes were concerned primarily with the manipulation and analysis of data, the personal computer was primarily used to create electronic analogs of things that already existed—typewriters, spreadsheets, games, and so on. This in turn evolved into handheld computing devices and computer chips embedded in a range of appliances.

In the 1990s, the two technological tributaries, communications and data, merged into a single stream, with information in electronic binary form that could be transmitted by way of existing telephone circuits. The Internet, which the Defense Department had developed to transmit data between mainframe computers, quickly adapted to the personal computer and the transmission of data over telephone lines using modems. The next innovation was fiber optics for transmitting large amounts of binary data as well as extremely large graphics files.

With the advent of graphics and data permanently displayed on websites, the transformation was complete. The world of controlled, one-to-many broadcasting of information had evolved into an infinitely diffuse system of “many to many” narrowcasting, and the formally imposed sense of reality provided by twentieth-century news and communications technology became a cacophony of realities.

The personal computer had become not only a tool for carrying out a series of traditional functions more efficiently but also a communications device. In this it became a replacement for conventional mail and telephone communications as well as a research tool. The Internet became a system that combined information with sales and marketing, from data on astronomy to the latest collectibles on eBay. The Web became the public square and marketplace, tying mass society together and fragmenting it at the same time.

The portable computer and the analog cell phone had already brought mobility to certain applications. When the two merged in the personal digital assistant, with computing capability, Internet access, voice and text messaging, and instant synchronization with larger personal computers, we achieved instantaneous, global access to data. When I land in Sydney or Istanbul, my BlackBerry instantly downloads my e-mail from around the world, then lets me read the latest news as the plane taxis to the gate. The revolution in communications has reached an extreme point.

We are now at an extrapolative and incremental state in which the primary focus is on expanding capacity and finding new applications for technology developed years ago. This is a position similar to the plateau reached by personal computers at the end of the dot-com bubble. The basic structure was in place, from hardware to interface. Microsoft had created a comprehensive set of office applications, wireless connectivity had emerged, e-commerce was up and running at Amazon and elsewhere, and Google had launched its search engine. But it is very difficult to think of a truly transformative technological breakthrough that occurred in the past ten years. Instead of breaking new ground, the focus has been on evolving new applications, such as social networking, and on moving previous capabilities to mobile platforms. As the iPad demonstrates, this effort will continue. But ultimately, this is rearranging the furniture rather than building a new structure. Microsoft, which transformed the economy in the 1980s, is now a fairly staid corporation, protecting its achievements. Apple is inventing new devices that make what we already do more efficient. Google and Facebook are finding new ways to sell advertising and make a profit on the Internet.

Radical technological innovation has been replaced by a battle for market share—finding ways to make money by presenting small improvements as major events. Meanwhile, the dramatic increases in productivity once driven by technology, which in turn helped drive the economy, are declining, and that decline will have a significant impact on the challenges we face in the decade ahead. With basic research and development down and corporate efforts focused on incremental improvements to the last generation's core technology, the primary global growth impetus is limited to putting existing technologies into the hands of more people. Since cell phone sales have already reached the saturation point and corporations are reluctant to invest in unnecessary upgrades, this is a problematic prescription for growth.

This is not to say that the world of digital technology is moribund. But computing is still essentially passive, restricted to manipulating and transmitting data. The next and necessary phase is to become active, using that data to manipulate and change reality, with robotics as a primary example. Moving to that active phase is necessary for achieving the huge boost in productivity that will compensate for the economic shifts associated with the demographic change about to hit.

The U.S. Defense Department has been working on military robots for a long time, and the Japanese and South Koreans have made advances in civilian applications. However, much scientific and technological work remains to be done if this technology is to be ready when it will be urgently needed, in the 2020s.

Even so, relying on robotics to solve social problems raises another vexing question: how are we to power these machines? Human labor by itself consumes relatively little energy. Machines emulating human labor will use large amounts of energy, and as they proliferate through the economy (much as personal computers and cell phones did), the increase in power consumption will be enormous.

Questions of powering technological innovation in turn raise the great and heated debate over whether the increased use of hydrocarbons is affecting the environment and causing climate change. While this question engages the passions, it isn't really the most salient issue. The question of climate change raises two others that demand astute presidential leadership: first, is it possible to cut energy use, and second, is it possible to continue growing the economy on hydrocarbons, particularly oil?

There is an expectation built into public policy that the issue of energy use can be addressed through conservation. But much of the recent growth in energy consumption has come from the developing world, which makes solving the problem by cutting back wishful thinking at best.

The newly industrialized countries in Asia and Latin America are not about to cut their energy use in order to solve energy issues or prevent certain island nations from being inundated by the rising waters of warmer seas. From their point of view, conservation would relegate them permanently to the Third World status they have fought long and hard to escape. In their view, the advanced industrial world of the United States, western Europe, and Japan should cut its energy use in order to compensate for over a century of profligate consumption.

In December 2009, a summit convened in Copenhagen to address the question of energy use, or, more precisely, carbon dioxide emissions. The proposal was made to cut emissions. At a time when energy consumption is growing, cutting emissions at all poses a significant challenge. Barring a dramatic new source of energy, that sort of cut can be achieved only through substantial decreases in fossil fuel consumption. Riding your bicycle to work and careful recycling will not do it.

The Copenhagen initiative collapsed because it was politically unsustainable. None of the leaders of the advanced industrial world could possibly persuade the public to accept the significant cuts in standard of living that reducing fossil fuel use would have required. For people to balk is not irrational. They are weighing a certainty against a probability. The certainty is that their lives would be significantly constrained by such reductions in consumption, which would lead to widespread economic dislocation. The probability—which some question—is that climate change will occur, with equally devastating results. That the change in climate will be harmful rather than beneficial may well be true. But the question is whether the probable or possible effects on children and grandchildren outweigh the certainty of immediate consequences. This may be an unpleasant fact, but it explains why the Copenhagen and Kyoto meetings on climate change failed to develop effective strategies for reducing greenhouse gas emissions.

For the next decade, the assumption must be that energy use will continue to surge, and thus the issue is not whether to cut fossil fuel consumption but whether there will be enough fossil fuels to deal with rising demand. Nonfossil fuels cannot possibly come on line fast enough to substitute for energy use in the short term. It takes well over ten years to build a nuclear power plant. Wind and water power could manage only a small fraction of consumption. The same is true of solar power. For the decade ahead, whatever long-term solutions might exist, the problem is going to be finding the fuel for rising energy use while, ideally, restricting increases in carbon output.

Energy use falls into four broad categories: transportation, electrical generation, industrial uses, and nonelectrical residential uses (heating and air-conditioning). Over the next decade, energy for transportation will continue to be petroleum-based. The cost of shifting the existing global fleet to another energy source is prohibitive and won’t happen within ten years. Some transportation will shift to electrical, but that simply moves fossil fuel consumption from the vehicle to the power station. Electrical generation is more flexible, as it accepts oil, coal, and natural gas. The same is possible for industrial uses. Home heating and air-conditioning can be converted, at some cost.

There is talk of global oil output having reached its historic high and now being in decline. Certainly oil production has moved to less and less hospitable sources, such as deep waters offshore and shale, which require relatively expensive technology. That tells us that even if oil extraction has not reached its peak, all other things being equal, oil prices will continue to rise. Offshore drilling has cost and maintenance problems. As we saw with the recent BP disaster off the coast of Louisiana, an accident a mile underwater is hard to fix. But even apart from environmental damage, such wells are very expensive. Shale installations are expensive as well, and when the price of oil falls below a certain point, extraction becomes uneconomical and the investment is tied up or lost. But leaving aside broader questions of peak oil, the increased energy consumption we will see over the next decade cannot be fueled by oil, or at least not entirely.

That leaves two choices for the ten years ahead: coal and natural gas. Widespread conservation sufficient to reduce energy consumption in absolute terms is not going to happen in the United States, let alone the world as a whole. The ability to produce more oil is limited, and the vulnerability of an oil economy to interdiction by countries such as Iran makes it a very risky proposition. The ability of alternative energy sources to have a decisive impact in this decade is minimal at best. No nuclear power plant started now will be operational within five or six years. But a choice between more coal and more natural gas is not the choice the president will want to make. He will want a silver bullet: rapid availability, no environmental impact, and low cost. In this decade, however, he will be forced to balance what is needed against what is available. In the end, he will pick both, with natural gas seeing the greater surge.
