Take-home pay depended more on the number of years with the organization than on individual effort. Union contracts stipulated seniority; white-collar workers moved up pay ladders. Such predictability not only helped the large-scale organization plan its production; it also helped families plan their futures. One’s pay “grade” started at a modest level, when household expenses at that early point in life rarely required more. The grade level gradually rose with experience and maturity, allowing employees to take out home loans and car loans with confidence that they could be repaid. As paychecks grew, “starter” homes and cars could be traded up, and children could be raised. At age sixty-five, after forty or more years with a company, the typical full-time employee retired with a gold watch or pin and a company pension providing a modest fixed sum thereafter. Social Security and personal savings provided the rest. Retirees could then expect another five or six years of card games with old friends and visits from the grandchildren before dying with the satisfaction of having put in a full working life.
Limited effort.
Factory work was still hard on muscles and joints, but by the midpoint of the twentieth century it was no longer dangerous, for the most part. And the effort it required of blue-collar workers was carefully circumscribed by work rules and job classifications. The white-collar worker of midcentury took the job seriously but rarely obsessed. “[T]here are few subjects upon which [young men] will discourse more emphatically,” noted Whyte, “than the folly of elders who have a single-minded devotion to work.”16
The employee sold the organization his time, not his soul. Young Tom Rath, the hero of Sloan Wilson’s best-selling novel of the 1950s, The Man in the Gray Flannel Suit, typified the prevailing norm. Tom turns down a challenging job, explaining to his boss, “I’m just not the kind of guy who can work evenings and weekends and all the rest of it forever. . . . I’m not the kind of person who can get all wrapped up in a job—I can’t get myself convinced that my work is the most important thing in the world.” The benevolent boss understands. “There are plenty of good jobs where it’s not necessary for a man to put in an unusual amount of work,” he says, kindly. “Now, it’s just a matter of finding the right spot for you.”17
By law, blue-collar workers were owed time-and-a-half for any more than forty hours of work each week. White-collar salary workers were also expected to put in no more than a fixed amount of work time, beginning and ending strictly on schedule. In the pre-employment era, people had been paid for completing particular tasks; the large-scale enterprise, by contrast, paid people for putting in predictable time. It has been suggested that even the way people thought about time shifted with industrialization, from “task time”—the number of minutes or hours necessary to finish a particular task—to “clock time,” as measured in uniform intervals.18
Large economies of scale could be achieved only if jobs were coordinated like clockwork. Frederick Winslow Taylor, the management theorist, pioneered “time and motion” studies to discover the most efficient means of doing a particular set of repetitive movements within a certain fixed interval of time.
Efficiency came at the price of tedium for some. The organization was, in this respect, like a large version of the machines at its core. All the pieces had to fit together, unobtrusively. The organization ran by rules. Factory workers were not paid to think. Henry Ford once complained that when he hired a pair of hands, he also got a human being. Where no rules were available, there were rules for setting new rules. If the vast organizational machine was to attain maximum efficiency, all behavior had to be fully anticipated. Blue-collar workers adhered to job classifications and work rules; white-collar workers followed standard operating procedure. “What should a person do,” a midlevel executive asked Norman Vincent Peale, America’s most popular armchair therapist of the 1950s, “who is unhappy and bored in his job after twenty years but who earns a nice salary and hasn’t the nerve to leave?” Seeking a different job was out of the question, Peale counseled; even altering the current one was too ambitious an undertaking. Peale advised accepting one’s fate: “[W]ake up mentally and strive for some understanding of what [you] can accomplish in [your] present position.”19
All of the above allowed for a strict border between paid work and the rest of one’s life. By midcentury, work and home were different places. Home was often in the suburbs, a commute away from office or factory. Most blue-collar men could afford the accouterments of middle-class life without relying on a second wage earner. Some middle-class women worked outside the home nonetheless, within the few professions (such as teaching) open to them; poor women continued to clean houses and cook for pay. But most women remained in their homes, waxing, polishing, and cleaning them until they shone as brightly as they did in the advertisements, and rearing children with the kind of supreme attentiveness guaranteed to produce a self-indulgent generation of postwar boomers.
This allocation of responsibility was widely accepted, although it created new problems. “At first she is slightly resentful” of her husband’s life on the job, a magazine for salesmen solemnly warned. “In time she may become openly jealous.” Here was a particularly dangerous juncture: “Unless brought under control, [the jealousy] can end up in irreparable damage to the salesman’s worth to his employer.” The solution was for housewives to join the League of Women Voters, the Parent-Teacher Association, or even the school board, and thus feel “worthwhile.” A corollary danger arose “if the husband [was] moving up rapidly” in the corporation, thus creating “a wedge between husband and wife, for while he is getting post-graduate finishing through travel and exposure to successful older men, her tastes are often frozen at their former level by lack of any activity but child rearing.”20 The remedy was for her to throw herself into voluntary suburban activity with even greater gusto.
Wage compression, and the expansion of the middle class.
The large-scale organization compressed wages, raising them for workers at the bottom and limiting them at the top. Unions prevented wages from dropping too low, and the organization had no need to bestow lavish compensation on top executives since most of them rose through the ranks and would not be poached by another firm. The constraint was also social: It would be thought unseemly for top executives to earn large multiples of the earnings of people in the middle ranges or at the bottom.
Apart from the occasional promotion, the large-scale organization did not differentiate among employees who had put in the same number of years on the job. All midlevel executives of equal tenure were paid about the same, as were university professors of the same rank and experience, hospital administrators, journalists working for large daily newspapers, and civil servants. In short, one’s status and income were bureaucratically determined. “[B]usiness is coming more and more to assume the shape of the government civil service,” noted a sociological text of the 1950s. Employees’ incomes “depend upon the rules of bureaucratic entry and promotion. . . . Income is determined by functional role in the bureaucracy.” It was not surprising therefore that “[t]he trend of income distribution has been toward a reduction in inequality. Owners have been receiving a smaller share relative to employees; professionals and clerks have been losing some of their advantages over operatives and laborers.”21
At midcentury, almost half of all American families fell comfortably within the middle class (then defined as family units receiving from $4,000 to $7,500 after taxes, in 1953 dollars). Notably, most of these middle-class families were headed not by professionals or business executives but by skilled and semiskilled factory workers, clerks, salesmen, and wholesale and retail workers, who managed the flows of product through the great pyramids of large-scale production. A majority received health and pension benefits through work.
To be successful was to be respected in one’s community, to earn a decent living and be promoted up the corporate ladder, to own a home in the suburbs and have a stable family, and to be well liked and widely admired. These aspirations were not unrealistic for a large and growing percentage of Americans.
Yet America of the 1950s still harbored vast inequalities. The very poor remained almost invisible. Discrimination was deeply entrenched. Blacks were overtly relegated to second-class citizenship and inferior jobs. Few women dared aspire to professions other than teaching or nursing. It would be decades before such barriers began to fall.
POST-EMPLOYMENT
By the turn of the twenty-first century, these tacit rules of employment had all but vanished. The new logic detailed in preceding chapters has made them increasingly irrelevant to working life. Fewer than one in ten private-sector workers belongs to a union; the white-collar “organization man” is a vanishing species. While most people still rely on wages or salaries, the old employment contract is quickly eroding. To wit:
The end of steady work.
Steady work—a predictable level of pay from year to year—has disappeared for all but a handful of working people (among the rare holdouts, tenured professors who write about employment). Buyers’ widening choices and easier switches have made it almost impossible for any organization to guarantee a consistent stream of income to anyone working within it. To stay competitive in this volatile environment, organizations have to turn all fixed costs (especially payrolls, which are among their largest) into variable costs that rise and fall according to the choices buyers make. As a result, earnings have become less and less predictable. Evidence of job instability is less conclusive, but this is largely a matter of semantics. A job that’s formally classified as “permanent” or “full-time,” but whose pay varies considerably from month to month or even from year to year, is not, as a practical matter, a job one can rely on.22
The new precariousness is manifest in many ways. Nearly everyone is now on “soft money,” in the sense that their earnings vary with contracts, grants, or sales from one period to the next. Much has been said about the rising tide of temporary workers, part-timers, freelancers, e-lancers, independent contractors, and free agents—variously estimated to constitute a tenth to a third of the civilian labor force.23
But the portion of employees uncertain about how much they’ll earn from year to year or even from month to month is far bigger than even the largest estimate. Increasingly, the take-home pay of full-time employees depends on sales commissions, individual bonuses, work-team bonuses, profit-sharing, billable hours, stock options, and other indicia of performance—all of which can as easily drop as grow.24
An increasing number of workers also move from project to project within their companies, or for clients. If there’s no project for them to work on, or if no project manager wants their services, they’re unceremoniously “beached,” and their pay declines accordingly. They may continue to be listed as full-time employees, but as a practical matter they are barely on the payroll.
Small businesses with fewer than twenty-five employees are creating most new jobs, but the incomes of their full-time employees are also unpredictable, because small businesses disappear at a much higher rate than larger ones; job tenure in small businesses averages 4.4 years, in contrast to 8.5 years at firms with 1,000 or more employees.25
In addition, many large companies are discovering they can make more money turning full-time employees into full-time licensees or franchisees, thereby shifting market risks onto them while increasing their incentive to work hard. Before 1979, taxicab leasing was illegal in New York City. Drivers were employed by large fleets of cabs, sharing with the company a percentage of each day’s receipts. Many companies offered health insurance and retirement plans. But the taxi companies found it more profitable to lease the cabs to the drivers, and pressed for a change in the law. By the late 1990s, most drivers were independent operators, paying taxi companies $90 to $135 for a twelve-hour rental, with no health or pension coverage. They could earn more money than before if they hustled, but might end up with less. Perhaps not incidentally, taxicab accidents were on the rise.26
Benefits have become as precarious as earnings. In 1980, more than 70 percent of workers received some form of health benefit from their employers. By the late 1990s, the percentage had slipped to about 60 percent. And even when employees have some coverage, it has become less generous, requiring them to take on higher co-payments, deductibles, and premiums.27
Employment in the nonprofit sector of the economy offers no greater security. Donors, foundations, and grant-making agencies that once routinely renewed their contributions are now almost as fickle as consumers and investors. A growing portion of university payrolls depends on grants and funded research from outside the university. And as support has become less predictable, universities have had to rely more on contract workers whose jobs and pay vary accordingly. In 1970, only 22 percent of university faculty were part-timers. By the end of the 1990s, the proportion had risen to more than 40 percent, not including a rising tide of graduate-student teachers.28 All told, slightly more than half of university teachers are itinerants, moving through the groves of academe like migrant farmworkers.
The necessity of continuous effort.
Earnings now depend less on formal rank or seniority, and more on an employee’s value to customers. It’s not unusual for a twenty-three-year-old geek with a hot skill to be earning several times more than a “senior manager” three levels up.29
In America’s most competitive industries—harbingers of times to come—the half-life of talented people continues to shorten. The stars of Wall Street, Silicon Valley, and Hollywood are coming to resemble professional athletes who can count on no more than ten to fifteen years before losing their competitive edge. Twentysomething software engineers are in great demand; when they’re over forty, they’re over the hill. Surveys show that six years after graduating with a degree in computer science, 60 percent are working as software programmers; after twenty years, only 19 percent are still at it. This largely explains why high entry salaries and generous signing bonuses are still not enough to entice greater numbers of undergraduates into the field. They know how quickly they’ll become obsolete.30