
Shrinks can find unexpected commercial applications for what the geek has devised. The drug minoxidil was originally developed for the purpose of lowering blood pressure. Although it proved effective against hypertension, it had one disconcerting side effect on women: It stimulated hair growth. Only then did shrinks, viewing it from a different perspective, see minoxidil’s commercial possibilities for overcoming baldness. The innovative process often operates just this way. Breakthroughs come not only in scientific, technical, and artistic discovery, but in the discovery of how such insights can best be used.

Geeks, similarly, can discover new applications for widely available technologies by learning more about potential markets. Several years ago, hospitals needed better means of tracking patients as they moved through the health-care system. Too many patients were getting lost between HMOs, primary-care physicians, and specialists, and their records were incomplete. A geek familiar with software used by shippers to track packages as they moved through different modes of transportation figured that, with only slight modification, the same software could track patients. He then successfully customized it to the needs of specific hospitals and health-care providers.

The more geeks and shrinks can learn from each other, the more innovation will occur. But the typical large industrial-strength bureaucracy isolated geeks within research-and-development silos and isolated shrinks within sales and marketing departments. The result was occasional insights about technology and a set of insights about the needs of consumers, but little or no connection between the two, and very little real innovation. For many years, Xerox’s famed Palo Alto Research Center was a seedbed of new ideas for the electronics industry. But Xerox itself never quite figured out how to use the ideas. Its corporate headquarters in Stamford, Connecticut, was fixated on the needs of current customers for duplicating and retrieving documents, and never understood the market potential for what its inventors in California were discovering. One of the very few innovations to make it from Palo Alto to Stamford was laser imaging—largely because an entrepreneur inside Xerox named Robert Adams happened to understand this emerging technology and its potential market well enough to connect them, and to champion the new product inside Xerox.

Mutual learning that leads to continuous innovation tends to be informal, unplanned, serendipitous. This is why the new economy is rewarding small entrepreneurial groups composed of geeks and shrinks, rather than big hierarchical bureaucracies—and why the best of such groups are organized loosely, often in open-style offices where they can see one another, or find each other within seconds. The casual attire you see in these new entrepreneurial businesses—open collars, blue jeans, and running shoes—isn’t just for show. People tend to be at their most creative and spontaneous, and most willing to share casual thoughts and ideas, when they’re feeling as comfortable as they are when they’re with good friends.

Entrepreneurial regions of the country—places that spawn a disproportionately large number of innovative businesses—typically have pools of talented geeks and shrinks who constantly intermingle. Boston’s high-tech corridor has benefited from proximity to both the technological insights of MIT and the marketing insights of Harvard Business School. Harvard’s faculty is not reputed for its technological prowess, nor is MIT’s for its marketing acumen, yet the students who emerge from both institutions and remain in the region subsequently learn from one another, and this mutual learning has helped fuel the regional boom.

Silicon Valley has similarly benefited from a concentration of geeks (many of them graduates of Stanford University, in Palo Alto) and also of venture capitalists with a keen sense of what it takes to make ideas commercially successful. The Valley’s entrepreneurial roots go back to the late 1930s, when Fred Terman, an engineering professor at Stanford, persuaded two of his students, William Hewlett and David Packard, to form a company and got Stanford to transform some of its peach groves into a high-tech industrial park. But the eventual flowering depended on shrewd venture capitalists and marketers who turned this geek paradise into companies like Sun Microsystems, Cisco, Silicon Graphics, and Yahoo.

For seven decades, Hollywood has been a seedbed of artists who know how to utilize the film medium (screenwriters, actors, directors, costume designers, cinematographers) and also of marketing wizards who know how to take the public’s pulse (agents, publicists, studio executives, producers)—the talent and the suits. On Wall Street, financial geeks come into direct contact with financial marketers, and the outcome is a stream of financial innovation.

These regions spawn innovation not because they have an abundance of either geeks or shrinks but because they have a concentration of both, in the right balance. If the balance tips one way or the other, the regions become less entrepreneurial, lose their “edge,” and become either irrelevant or stale. Some would say Hollywood already has too many shrinks and not enough original artists to be capable of true innovation. Its output has become formulaic and predictable. Some allege that the New York literary “scene” has become unbalanced in the opposite direction—too ingrown and self-indulgent, too obsessed by its own cleverness and too indifferent to public taste, to set trends any longer in literary innovation. Israel is a major center of technological innovation—brimming with skilled engineers, technicians, and computer programmers, many of them emigrants from the former Soviet Union—but it still lacks the marketing savvy to be entrepreneurial on its own. Israel’s geeks rely on global firms with shrinks who know what will sell.

A CAUTIONARY NOTE ON INTEGRITY AND MARKETABILITY

Nothing I have said should be taken to suggest that invention or artistry requires commercial popularity in order for it to be great, beautiful, or otherwise worthy. Software can still be “cool” even if no one outside the world of geekdom appreciates it. A film can merit an Oscar even if its box office take is disappointing, a novel deserve a National Book Award even if few readers enjoy it. The customer is not always right. In fact, an excessive reliance on pleasing consumers may rob creativity of its very soul.

There are two distinct vantage points from which a piece of work may be evaluated: according to the aesthetics of its medium, or according to its popularity in the market. Film critics, book critics, those who pass judgment on new software or any other new invention may be using either measure. “This is a wonderful film” may mean that its talented creators have pushed the art of filmmaking to a new level of taste, subtlety, and beauty, even if it’s a dud at the box office; or it may mean that the public is likely to find the film to be enormously enjoyable, even though it’s drivel.

That these distinctions are rarely made explicit causes no small mischief. As the economy grows more fiercely competitive, commercial evaluation (“Two thumbs up!”) can all but silence aesthetic criticism. Consumers face so many choices that they place ever-greater value on advice about what they will enjoy or find satisfying. There’s correspondingly less interest in aesthetic criticism—in what consumers should like, or would like were their tastes more finely honed.

And yet society needs both. Consumers surely are helped by reviews alerting them to software or films or any other inventions they’re likely to enjoy; and it is perfectly reasonable for geeks and other creators to know how they can best delight the public. But there is also value in educating the public about aesthetic standards inherent in a medium, quite apart from the public’s likely enjoyment. And in a culture obsessed by what sells, inventors and artists can benefit from aesthetic criticism. Otherwise, society runs the danger of losing that which provokes, angers, ennobles, challenges, or otherwise forces people to face truths from which they would rather escape.

Several decades ago, before competition began to intensify, there were arbiters of taste—art critics, reviewers, essayists, educators, and graybeards—within the professions who continually passed judgment on the quality of work being done. Some were stuffy and self-important, and their pieties reflected conventional doctrines and tired formalisms. Others, however, were daring and insightful. All presided over a continuing discussion about standards, which reminded society about the difference between the good and the popular.

But in a world of intensifying competition in which consumers can get exactly what they want—where software can even analyze their past purchases and advise them on what else they will enjoy or find interesting—such standard-bearers seem increasingly irrelevant. The only legitimate measure of worth seems to be what is desired, and the best indication of that is what sells. All else is deemed arbitrary. Yet when it is all marketing, there is less space for professional or artistic integrity.

Jason Epstein, who joined Random House as an editor in the late 1950s, writes that he and his colleagues at that time thought of themselves as “caretakers of a tradition, like London tailors or collectors of Chinese porcelain,” rather than as businessmen. “It was always a pleasure when one of our books became a best-seller, but what counted more was a book that promised to become a permanent part of the culture.”[7] Intensifying competition, propelled by the new power of buyers, is forcing every publisher to worry more about the bottom line. When all writers, actors, and musicians work for global media and communications conglomerates locked in intense competition, who will dare flout convention and create something startling or disturbing? When every geek works for relentlessly commercial enterprises, who will do the basic research that has no immediate or apparent commercial value?

The danger is acute for professionals who once were sheltered from the demands of the marketplace and who have a special responsibility to reveal truths in ways that may be unpopular or unfashionable. Their livelihoods now increasingly depend on their popularity. Journalists are now under increasing compulsion to write or broadcast whatever sells, regardless of how incendiary or inaccurate. New technologies permit almost instant feedback. Online magazines know how many people have clicked on each article they offer, within each issue; so do their advertisers and investors. As the ability to measure market responses grows ever more sophisticated, pressure grows to give buyers exactly what they want.

The nonprofit world provides scarcely more shelter to do or say what’s provocative but unpopular. A friend, a program director of a nonprofit foundation, tells me that she is pressured to steer grants in directions that corporate funders think advantageous for public relations, and away from anything that might be considered controversial or embarrassing. Not a few university professors have been known to target the topics of their research—although, one hopes, not their findings—to the interests of organizations with money to back research projects. Museum directors want “blockbuster” shows that will lure the crowds and please the patrons—which almost always means yet another round of Impressionists or antiquities.

It’s of course possible that the exquisite tailoring of products to unique tastes made possible by new technologies will offer talented geeks and shrinks new outlets for their more eccentric efforts. They’ll be able to connect with equally eccentric buyers without having to worry about acceptance by a mass market. If that’s the case, then integrity need not be overly compromised by marketability, because there’s almost always going to be some market, even if a tiny one. At least one among 1,500 television channels will offer a niche for richly provocative TV; at least one small online publisher will distribute intriguing books for which very little readership exists. And yet, it must be asked whether these little slivers of artistic defiance will exert any influence on a culture pandering more efficiently than ever to what’s popular, or whether they will merely function as remote and harmless escape valves for the ever more conveniently ignored.

The greatest threat to freedom of speech in many modern societies comes not from overt controls by oppressive regimes but from a more fiercely competitive market in which buyers can so easily switch to whatever they find more satisfying. Such a marketplace dictates with increasing ferocity what will be written, broadcast, and researched. The public, deluged with what delights it and protected from what may cause it discomfort, is thus armored against what it may need to know.

The demand for creative workers—for geeks and shrinks, as I have called them—will continue to grow because they are the masters of innovation, and innovation lies at the heart of the new economy. These workers can quickly create products that are better or cheaper than what came before. They’re competing with other groups of geeks and shrinks who are racing to create even better and cheaper products, and do so even faster. As this competition intensifies, it’s fueling even greater demand for the services of such creative workers. These jobs, therefore, are likely to pay increasingly well. They also are likely to be intellectually or artistically engaging, emotionally absorbing, personally satisfying, and sometimes boundlessly frustrating. They are almost certain to claim a lot of time, even outside formal business hours. The working mind of the creative geek or shrink rarely shuts off completely.

CHAPTER FOUR

The Obsolescence of Loyalty

Innovative geeks and shrinks are in greater demand, but anyone who does anything for pay that’s repetitive or routine—which can be done more cheaply by a machine or computer software or someone elsewhere around the world—is likely to be losing economic ground. This is because of the intensifying pressure on all enterprises to trim costs, and their increasing capacity to do so through technologies with global reach. Most of these people will remain employed, but fewer of them will engage in routine production. Many will be providing personal attention, which computers cannot do because it requires a human touch, and foreign workers cannot do from abroad because it involves direct contact with those receiving it.*

The problem for most people who aren’t doing particularly well isn’t that they lack a job. If they inhabit the United States, they’re likely to be employed if they want to be. Their larger problem is that they don’t earn much. In Europe and Japan and much of the rest of the world, where wage rates are still less flexible than they are in America, workers who are not in much demand are either unemployed and living on welfare (as in Europe) or employed in “make-work” jobs and living off the good graces of companies willing to pay them more than the market value of their services (as in Japan). Yet the salad days of generous unemployment benefits and of corporate benevolence are coming to an end even in European countries and Japan. These other nations are gradually falling in line with the American system. Global investors and consumers are insisting on it.

Even profitable American companies have been “downsizing,” “rightsizing,” “reengineering,” “decruiting,” “deselecting,” or whatever is the currently fashionable euphemism for firing. At the same time they’re bidding more for talented geeks and shrinks, they’re also cutting the jobs or the wages of routine workers, eliminating or reducing their health benefits, trimming their pension contributions, and subcontracting work to other firms with lower wages and benefits. Increasingly, they’re relying on Web-based business-to-business auctions to find best buys among suppliers, who in turn must cut their costs in order to stay competitive. The nonprofit sector is going through a similar squeeze. Hospitals, museums, and even charities are slashing costs in ways that would have been thought brutal even in the private sector three decades ago. Universities are paring back the ranks of tenured professors and relying more on academic nomads on yearly contracts with low wages and no benefits. They’re turning over much of their maintenance, dining, custodial, and other routine services to for-profit vendors who can do all of it more cheaply.

Nor are companies any longer especially loyal to their hometowns. This is because fewer of them have hometowns. Gone are the days when large firms could be relied on to be the major employers and benefactors where they were headquartered—Kodak in Rochester, New York; Procter & Gamble in Cincinnati; Coca-Cola in Atlanta; Levi Strauss in San Francisco. All are downsizing, outsourcing, and dispersing.[1] Typically, worldwide corporate headquarters are now found in well-manicured office parks conveniently located near international airports; factories and laboratories are everywhere around the globe; suppliers and partners are nowhere in particular, and they continuously change. When the Dodgers left Brooklyn in the 1950s, people wept. How could they? Now teams routinely leave one town for another offering a newer arena with more skyboxes. Fans still refer to “their” home teams, but the pronoun’s meaning has become cloudy. The Florida Marlins, lacking a hometown even in their name, won the 1997 World Series with a transitory group of players cobbled together by an owner who had bought most of them the previous winter, and who shortly after the victory threatened to unload the stars and sell the team if Miami didn’t build him a new stadium.

It’s tempting to conclude from all this that enterprises are becoming colder-hearted, and executives more ruthless—and to blame it on an ethic of unbridled greed that seems to have taken hold in recent years and appears to be increasing. But this conclusion would be inaccurate. The underlying cause isn’t a change in the American character. It is to be found in the increasing ease with which buyers and investors can get better deals, and the competitive pressure this imposes on all enterprises. As the pressure intensifies, institutional bonds are loosening.

Years ago, when choice was far more limited and switching more difficult, consumers and investors tended to stay put. As a result, institutional bonds were stronger. The tameness of competition allowed for an implicit social compact. Employees worked steadily and reliably, in return for which employers provided them with steady work as long as the enterprise was profitable. Local retailers and service businesses, facing only limited competition in their neighborhoods, did likewise. Universities, receiving a steady stream of students and donations, granted tenure to a large portion of their professoriat. Hospitals, enjoying predictable numbers of patients and steady budgets, steadily enlarged their medical and nursing staffs. The wages of almost everyone drifted upward.

The executive suite of the large-scale American enterprise at midcentury was a quietly distinguished place of mahogany and glass, pile carpeting and oriental rugs, in which men went about their work with no particular urgency. The stability that characterized large-scale production bestowed a quiescence and certitude upon those who were in charge. With investors and consumers securely in place, the chief executive at midcentury could be magnanimous toward all. “The job of management,” benevolently declared Frank Abrams, chairman of Standard Oil of New Jersey, in a 1951 address that was typical of the era, “is to maintain an equitable and working balance among the claims of the various directly interested groups . . . stockholders, employees, customers, and the public at large.” The large organization, from this perspective, was a quasi-public enterprise with responsibilities toward everyone. And those who headed such organizations were gaining professional status, Abrams opined, because “they see in their work the basic responsibilities [to the public] that other professional men have long recognized in theirs.”[2]

Such magnanimity also afforded men like Abrams a wide latitude to do whatever they wished with their companies’ revenues, balancing claims as they saw fit. One claim notably missing from Abrams’s list but often honored above all others was the claim of executives themselves for comfortable lives, not unduly impinged upon by any of the other claimants. The midcentury executive served on a multitude of corporate and nonprofit boards, pursued several rounds of golf each week, entertained lavishly, engaged in highly visible acts of charity, sometimes dabbled in public affairs. University presidents and foundation heads led similarly unperturbed lives.*

At the start of the twenty-first century, top executives are sounding a sharply different note. No longer are companies responsible to employees, communities, and the public at large. They view their sole duty as maximizing the value of their investors’ shares—which they accomplish by furiously cutting costs and adding value. Roberto C. Goizueta, former CEO of Coca-Cola, stated the new logic with particular clarity. “Businesses are created to meet economic needs,” he said. When they “try to become all things to all people, they fail. . . . We have one job: to generate a fair return for our owners. . . . We must remain focused on our core duty: creating value over time.”[3] Presidents of universities, hospitals, museums, and major charities are now similarly obsessed with building their endowments and assuring adequate revenues.

THE NEW LOGIC OF DISLOYALTY

Who’s to blame for America’s increasingly singular focus on earnings? Indirectly, and in large measure, I am, and you probably are too. It’s not that we’ve intentionally willed any of this to occur. Rather, the new logic of disloyalty is the unintended by-product of the increasing ease with which all of us can get better deals. The new logic of disloyalty, in other words, begins at home. Take a close look at the big corporations that have been doing most of the cutting and slashing, and you’ll see why.

Start with a share of stock, which is literally a share of future profits. Stock prices at any moment reflect the best guess of large numbers of investors, sifting through all available information, as to the current value of those future profit streams. Share prices are not perfect predictors; investors may be irrationally exuberant or overly pessimistic. But over the slightly longer term, a company’s share price is the best available predictor of its future profitability, and thus of its current value. In this way, the share price acts like an early-warning system: If top executives make decisions that most company investors think will reduce future profits, investors will sell their shares, and share prices will drop. If they drop too low, the company will have a harder time raising the money it needs to innovate for the future. Investors simply won’t trust current managers to use the money well. A low share price invites efforts to oust current executives and replace them with those who’ll do better.

Investors have become steadily more powerful in this role because of their increasing willingness and ability to switch to better deals. It started in 1974, without fanfare or even much notice, when the International Nickel Company bought up enough shares in the Electric Storage Battery Company to give International Nickel control, and promptly ousted Electric Storage’s executives. Before International Nickel did this dirty deed, Wall Street had viewed such aggression as unseemly, if not unethical. But a precedent had been set. Soon, what seemed audacious was commonplace. There were twelve hostile takeovers of companies valued at $1 billion or more during the remainder of the 1970s. During the 1980s, there were more than 150.

“Raiders,” as they came to be known around corporate suites with awe and trepidation, saw opportunities for large returns by acquiring companies and slashing costs. You might say these aggressors saw possibilities that had escaped notice by comfortable executives accustomed to the tame old world of stable oligopolies. Or you might say the raiders were willing to be more ruthless by borrowing to the hilt in order to mount their raids (wielding high-risk “junk” bonds to do “leveraged buyouts”), squeezing suppliers, fighting unions, slashing wages, and subcontracting to lower-cost producers all over the world. Both descriptions would be equally accurate. The result was higher profits, which meant higher share prices. Several of the warriors and junk-bond kings who were condemned in the 1980s for their ruthlessness are today lionized for making American companies more “competitive.” It’s a fair point, although their strategies hardly always work as planned. When the prices of junk bonds plummeted in the late 1980s, savings-and-loan companies that had been eager to purchase them when their prices were higher went famously bust, and American taxpayers ended up footing a very large bill. RJR Nabisco, the largest of the leveraged buyouts of the 1980s, was unceremoniously dismembered in 1999.

The mere possibility of a hostile takeover has altered the behavior of corporate chieftains as well as investors. Investors—including the pension funds and mutual funds where most of us now park whatever savings we have—demand and expect more. These institutions have grown large because they can so efficiently choose and switch investments on our behalf. And they’re willing to grant eye-popping rewards to executives who act aggressively to cut costs and gain larger profits, and thus lift share prices. Increasingly, executive “compensation packages” are linked to share prices through generous stock options and rich bonuses if targets are met or exceeded. My colleagues and I in the Clinton administration inadvertently contributed to this trend. Arriving in Washington in 1993 with the new President’s pledge that no company should be able to deduct from its corporate income taxes executive compensation in excess of $1 million, we advised that the deduction be allowed if the extra inducement was linked to “performance”—that is, an increase in the company’s share price. Stock options and bonuses thereafter exploded. Raising the share price became paramount, whatever that required. In 1980, the typical chief executive of a large American company took home about forty times the annual earnings of a typical worker; in 1990, the ratio rose to about eighty-five times. Between 1990 and the end of the century, total executive compensation rose from an average of $1.8 million to an average of $12 million—more than a sixfold rise—resulting in compensation packages that averaged 419 times the earnings of a typical production worker.[4]

Executives who fail to raise their stock prices, on the other hand, are apt to lose their jobs.[5] Between 1990 and 2000, high-priced heads rolled at IBM, AT&T, Sears, General Motors, Xerox, Coca-Cola, Aetna, and other blue-chip American corporations. Such decapitations often occur quickly, bloodlessly, sometimes after a tenure of only a few months. In the wake of results that disappointed Wall Street and sent stock prices tumbling in the first quarter of 1999, the board of Compaq Computer immediately ousted its chief executive. “[S]ome of our competitors have done a better job positioning themselves” for the Internet, Compaq’s board chairman explained to the New York Times.[6] Translation: We had to get another chief executive who would move faster to slash costs and shift to new technologies—and show dramatically to Wall Street that we were back on track.

Traditional corporate boards were filled with handpicked cronies of the chief executive. But under the banner of “good corporate governance,” pension funds, mutual funds, and other institutional investors have demanded that boards be more independent. If they don’t oust a poorly performing chief executive, investors may sack the entire board. This happened in May 1998, when the giant pension fund that manages the retirement savings of most college professors, including mine, ousted the nine-member board of Furr’s/Bishop’s, Inc., a company that runs a chain of cafeterias in the South and Midwest. One of the ousted board members described the coup as “astonishing.”[7]
