Despite the mergers of the 1960s, independent publishing survived. Wiley and McGraw-Hill are family-controlled, and W. W. Norton is employee-owned. New presses started, and some of them succeeded, becoming medium-sized or even large publishing houses. Workman Publishing, founded in 1968, has a list focused on cooking, cats, and calendars, but with the acquisition of Algonquin in 1989 it added literary fiction and nonfiction. Its bestselling book, What to Expect When You’re Expecting, was published in 1984 and has ten million copies in print. Ten Speed Press, which began in 1970 as a publisher of cycling books, created a long-term bestseller with What Color Is Your Parachute? and now, with three additional imprints, issues about 150 titles each year. Many of these serve niche markets – regional audiences, minority groups, and religious groups. The largest group of independent publishers is formed by the nonprofit houses, which grew in number and expanded their publishing programs between World War II and 1970. These include university presses, museums, historical societies, academic societies, and research institutes. Their output is small compared to that of the commercial houses, but their cultural influence is disproportionate to their size.
Concerns about consolidation extended well beyond the publishing houses. Authors and their agents worried that they would face reduced choices and lose negotiating power. Social commentators worried that the quality of American books would plummet, that innovation would be stifled, and that unpopular political views would find no outlet. These fears were understandable, but they were not realized.
Although conglomerates were theoretically in a position to reduce authors’ incomes from subsidiary rights, they were never able to do so. Before companies producing a variety of media merged, authors generally transferred their copyrights to a hardback publisher, who then sold – to the highest bidders – paperback, translation, film, television, and other rights, sharing the revenues with authors as specified in their contracts. Authors feared that, after mergers, their rights would be “sold” internally and that the prices would be artificially held down. This did not happen, because the hardback divisions had to meet their revenue expectations, so they continued to sell to the highest bidders, whether within the conglomerate or outside. In some cases, one division of a conglomerate ended up bidding against another.
It did become more difficult for authors to find publishers for first novels and for what publishers call “mid-list” books – those expected to sell fewer than ten thousand copies. In most cases, smaller houses and some nonprofits picked up the slack. Moreover, publishers are notoriously poor at predicting sales. Alfred Knopf once told a visitor that, among the dozen manuscripts on his desk, one was a bestseller. When the visitor asked which it was, Knopf replied that if he knew that he would not have to publish all twelve. Where one editor sees a mid-list book, another may see a bestseller.
The impact of consolidation on readers was mitigated by the fact that, even after being swallowed up by conglomerates, book publishing remained essentially a cottage industry. Large corporations could reduce costs by combining business functions, but they were generally unable to combine editorial offices. For the most part, imprints within publishing houses retained their own editorial staffs, their own standards, and their own identities even when they had to meet new revenue expectations.
The impact was also mitigated by the fact that publishing thrives on controversy. When one house turns down a book because it offends the political or moral sensibilities of its corporate owner, the story promptly shows up on the front pages of a newspaper (often one owned by a competing conglomerate), and within days the book has been sold to another house that is less easily offended (or is offended by other views) and is eager to cash in on the notoriety. In 1990, staff members at Simon & Schuster protested the imminent publication of Bret Easton Ellis’s American Psycho because of its vivid depiction of violence against women. Richard Snyder, president of the firm, canceled publication of the book, which was already in page proof. It was immediately picked up by the Vintage imprint of Random House and published three months later.
Nevertheless, the shift from private to public ownership was significant. A publicly held company is far different from an entrepreneurial house. Editors might still choose titles that reflected their own literary values, but their decisions then went before committees with more stringent views on profitability. Personal relationships between authors and editors became business relationships between agents and contract departments. When publishing houses were integrated into conglomerates (or larger publishing houses), their position became tenuous: a shift in corporate emphasis, or poor financial performance, might lead to the sale of a publishing division or the replacement of editorial directors. Management responsible to Wall Street could not be as generous or civic-minded as J. P. Morgan. The acquisitions of the 1960s led in some cases to the resale of all or part of a publishing house, to resignations or the firing of editors, and to increased emphasis on the bottom line.
Whether the dominance of commerce mattered to books and their readers is an open question. Innovative authors continued to break new ground, while established authors reliably turned out the same kinds of books that had won them an audience in earlier years. Readers continued to have a broad choice of literary novels, serious nonfiction, political diatribes, poetry, self-help books, religious and spiritual inspiration, health advice, romance novels, science fiction, mysteries, humor, and even crossword puzzles. Books may have become commercial commodities, but they remained objects of desire, inspiration, and imagination.
28 Books and Bits: Texts and Technology 1970–2000
Paul Luna
Looking back on book production in the period 1970–2000, it is clear that changes in text composition were driven by economic imperatives, first to reduce the cost of turning an author’s text into publishable data, and subsequently to extract maximum value from that data. The gradual standardization of computer systems prompted the convergence of the typesetting and printing industries, previously reliant on separate and highly specific technologies, with the larger, business-driven world of document creation, transmission, and retrieval. While more lay people than ever before have access to the tools and terminology that were previously the preserve of typesetters and printers, those industries are no longer the sole determinants of the development of the technologies they use. The boundaries between the tools printers use to typeset and make books, the tools authors use to write, and the tools publishers use to edit have dissolved. This convergence means that typesetting in particular has been dethroned, or democratized. Authors, copy-editors, and designers have become implementers of editorial and typesetting decisions where previously they had specified their requirements to typesetters (Hendel 1998: 105–25; Morgan 2003; Mitchell and Wightman 2005: xi). This chapter will consider individual books produced in this period as a way of reflecting on aspects of these changes.
At the start of our period, the hot-metal letterpress tradition was still alive, and used alongside the photo-composition methods and lithographic printing that would supersede it. The roles of copy-editor, designer, compositor, and proofreader were still distinct and separate. For the Oxford and Cambridge university presses, the production and publication of the various editions of the complete New English Bible (NEB) in 1970 were major events. (The New Testament had been published separately in 1961.) These were businesses with investment and expertise in traditional composition and printing methods, but their close ties to the Monotype Corporation gave them access to its latest equipment, and they acted as test sites for each new generation of typesetting device. Cambridge produced the three-volume library edition of the NEB; it was set in hot metal in Monotype Ehrhardt, and printed letterpress, continuing the design that had been established for the New Testament. Oxford produced the one-volume standard edition and an illustrated schools edition (1972). The standard edition was set in Monophoto Plantin, while the illustrated edition was set in Monophoto Times Semibold; both were printed lithographically. For these volumes, with expectations of long print-runs and a continuing reprint life, photo-composition and lithographic printing were the forward-looking choice. However, the popular paperback of the NEB (Penguin 1974) was printed by rotary letterpress, using relief plates made from the Oxford setting. Rotary letterpress was still the norm for mass-market paperback imprints.
For books to be printed letterpress, composition by Monotype (and occasionally Linotype, never a large player in the British book-composition field) was still practical. Firms had invested over many years in equipment and typefaces in a stable technological environment, meaning that, for academic work in particular, resources existed for specialist language, mathematical, and technical setting that would have needed much investment to replicate for photo-composition. By 1970, photo-composition was cost-effective for straightforward composition, such as magazine text, or for high-volume, repetitive work, such as telephone directories, and would gradually become practical for all book work.
Photo-mechanical composition systems had started as attempts to replace the hot-metal casting mechanism with a photographic exposure mechanism while retaining most of the rest of the machinery. The Monophoto Filmsetter introduced in 1952 used a keyboard almost identical to that of its hot-metal equivalent. The Mark 4 Filmsetter (1967) used for the NEB was driven by a 31-channel tape, and was configured in much the same way as a composition caster. Instead of brass matrices held in a grid, which was positioned over the casting mechanism on instruction from the punched-paper tape, the Filmsetter had a grid of glass negatives, positioned over a light source and shutter. Film or paper output replaced metal type (Wallis 1997).
The adaptation of well-tested mechanical principles provided some continuity in engineering and maintenance, but film or paper output was initially difficult to correct. The individual types cast by the Monotype could be corrected easily by hand, letter by letter, from the case; a single-line slug of Linotype setting could also be replaced, with rather more labor, by having it recomposed and cast. Correction to film was more troublesome. The tape produced by the keyboard was effectively uncorrectable, so the Monophoto Filmsetter lost the advantage of single-type correction that its predecessor had over the Linotype. The line containing the error had to be re-keyed and re-exposed, and the resultant piece of film or paper carefully stripped into the original. Metal type is inherently self-squaring and self-aligning; aligning pieces of film requires a light-box, a grid, and a careful eye. Moreover, if the chemicals used to develop the material, or the strength of the light source, varied from the original setting, the result was text of a different density (Heath and Faux 1978).
The first-generation photo-composition machines were slow. For higher-volume work, second-generation machines, such as Higonnet and Moyroud’s Lumitype (1949, also known as the Photon), had photo-matrices as glass discs or strips and, instead of using a stationary light source, had a timer and flash to freeze the image of each letter while the disc was still spinning (Southall 2005: 79ff). By the end of the 1960s, these machines could be driven by keyboards producing correctable punched-paper tape, allowing complete jobs to be re-run with corrections. At the lower end of the market, Compugraphic were able to produce much more affordable photo-composition machines from 1968. While the NEB had been set on film, with a resulting high-definition image, low-cost composition was based on bromide-paper output. This rapidly advanced the use of photo-composition in jobbing typesetting – it was easy to combine page elements using paper output, which could be pasted up and re-photographed. An even cheaper alternative to the Compugraphic was the IBM Selectric Composer (1961). The IBM can be considered the earliest word-processing machine, a term coined for it by its manufacturers in 1974, when it was combined with a magnetic tape drive to store keystrokes as they were typed. Information could now be stored, retyped automatically from the stored information, corrected, reprinted as many times as needed, and then erased. However, storage capacity on the reusable tapes, and from 1969 on magnetic cards, was very limited (Seybold 1977: 317). In composition terms, the IBM was an electric typewriter with interchangeable golf-ball heads containing variable-width characters in designs based (with considerable loss of subtlety) on leading hot-metal typefaces (Steinberg 1996: 221–2). Display type could be provided by Letraset dry-transfer lettering or hand-set photo-lettering.
Composition by hot metal, early photo-composition systems, and typewriter shared one feature: all text had to be specifically keyboarded for composition, and those keystrokes were useless for any further work; they were effectively lost. The reusability of keystrokes, the manipulation rather than sequential recording of data, and high speeds of final output were the main goals of developers.
The Random House Dictionary of the English Language: The Unabridged Edition illustrates early approaches to computer-assisted composition. Conceived in 1959, it was the first dictionary prepared this way. Text was data-captured and tagged to represent both the kind of content and its appearance. Laurence Urdang, its inspirer and editor, wrote:
The coding of different levels of information – main entry word, pronunciation, definition(s), variant(s), etymology, run-in entry, illustration – and more than 150 fields to which definitions were assigned – botany, chemistry, computer science, etc. – made it possible to prepare information for each level and in each field independently, thus ensuring better uniformity of treatment and far greater consistency among related pieces of information than had been achieved on other dictionaries. With all the data appropriately coded, programs enabled the computer to sort all of the bits and pieces into dictionary order. Once that had been accomplished, it remained only to read through the entire dictionary to make certain of the continuity and integrity of the text. (Urdang 1984: 155–6)
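Urdang’s account of coded levels and subject fields can be pictured, loosely, in present-day terms. The short Python sketch below is only an analogy: the level names, field labels, and sample entries are invented for illustration and do not reproduce the Random House coding scheme, but they show how field-tagged records can be reviewed by subject independently and then sorted back into dictionary order.

```python
# Illustrative analogy only: the level and field names below are invented,
# not the Random House Dictionary's actual codes.

entries = [
    {"headword": "quark", "level": "definition", "field": "physics",
     "text": "any of a set of elementary particles ..."},
    {"headword": "quark", "level": "pronunciation", "field": None,
     "text": "kwawrk"},
    {"headword": "quantal", "level": "definition", "field": "biology",
     "text": "of or relating to a quantal response ..."},
]

def records_in_field(records, field):
    """Gather every record assigned to one subject field, so that the
    field can be checked for uniformity of treatment on its own."""
    return [r for r in records if r["field"] == field]

def dictionary_order(records):
    """Sort the tagged pieces back into alphabetical order, keeping the
    levels of each entry grouped together for composition."""
    level_rank = {"pronunciation": 0, "definition": 1}
    return sorted(records, key=lambda r: (r["headword"], level_rank[r["level"]]))

print(records_in_field(entries, "physics"))
for record in dictionary_order(entries):
    print(record["headword"], record["level"], record["text"])
```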
The intention was to keyboard the text only once. Entering dictionary text into the computer was a problem because the number of characters and character variations available was limited. The punched-paper tape used was based on US newspaper tele-typesetting conventions, which envisaged a 90-character repertoire, similar to the limitations of a manual typewriter. To represent the wider range of typographic possibilities (italic, bold, superiors, accents, Greek, and so on), the keyboarded text had to contain codes to indicate these different alphabets whenever a change from one to another was required. The typewritten proofs produced by the tape-perforating keyboard repeated these codes. The huge newspaper market for straightforward English-language typesetting in the US tended to make it easy for manufacturers to ignore complex composition requirements.
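The shift-code mechanism can likewise be sketched in modern notation. The two-character codes below are invented for the example (the historical teletypesetting conventions differed in detail), but the principle is the same: a keyboard limited to roughly the repertoire of a typewriter embeds codes in the character stream, and the composition program interprets them as changes of alphabet.

```python
# Illustrative sketch: invented two-character shift codes standing in for the
# historical teletypesetting conventions, which used different codes.

SHIFT_CODES = {
    "@i": "italic",
    "@b": "bold",
    "@g": "greek",
    "@r": "roman",   # return to the base alphabet
}

def decode(stream):
    """Split a coded keyboard stream into (alphabet, text) runs."""
    runs, alphabet, buffer = [], "roman", []
    i = 0
    while i < len(stream):
        code = stream[i:i + 2]
        if code in SHIFT_CODES:
            if buffer:
                runs.append((alphabet, "".join(buffer)))
                buffer = []
            alphabet = SHIFT_CODES[code]
            i += 2
        else:
            buffer.append(stream[i])
            i += 1
    if buffer:
        runs.append((alphabet, "".join(buffer)))
    return runs

coded = "quark, n. @iPhysics.@r any of a set of elementary particles"
for alphabet, text in decode(coded):
    print(f"{alphabet:>6}: {text}")
```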
The actual typesetting of the dictionary shows the limitations of the composition equipment at the time (1965). The two possible contenders (both ruled out by the publishers) were the RCA VideoComp and the Photon, the first machines to accept magnetic-tape input. While the sheer bulk of the dictionary might have seemed like ideal fodder for these machines, the heavy typographic coding of the text (averaging two-and-a-half style changes in every line) meant that the VideoComp, which had little memory to store variant founts, could not be made to work at peak efficiency. The text was eventually set conventionally, in hot metal, on the Monotype. To produce copy for the keyboarders, the finally edited text was output to a Datatronix, a CRT (cathode ray tube) screen linked to a microfilm camera, each screen-full of text being photographed to microfilm, which was then printed out on a high-speed Xerox. As in the first proofs, this printout was coded rather than rendered in true typographic founts.
The convolutions of the Random House Dictionary’s production method point up the problems that existed before a relatively uniform set of text-processing methods was developed. The missing component in the 1970s and early 1980s was any kind of common platform or device independence. This lack was not new. While a hand-compositor could combine the types of any foundry in his stick as long as they had a common height-to-paper, all mechanized composition systems were proprietary: Monotype hot-metal matrices could not be used on Linotype machines; Monophoto film matrices could not be used on Compugraphic machines. Device independence became important when data exchange began to be considered, and when the globalization of print production meant that data-capture, text-processing, and final output might happen on different continents.
Following Random House’s lead, early computer-assisted composition focused on books with a large data-manipulation requirement: dictionaries, catalogues, directories, and encyclopedias, all text-heavy and with relatively straightforward, static columnar layouts. Oxford University Press, like its rival Collins, experimented with computer-assisted composition in the 1970s. Spare capacity on mainframe computers was used for text-processing: Collins used British Leyland’s IBM mainframe in Cowley to run pagination programs on edited dictionary text (Luna 2000); Oxford used the ICL mainframe at its London warehouse in Neasden to process text for the African Encyclopaedia, Crockford’s Clerical Directory, and the Advanced Learner’s Dictionary.
These projects involved new relationships within the publishing firm between publisher and computer suppliers (Urdang relates how in 1959 IBM salesmen were baffled by the idea of handling the text of a book on a computer); new roles for computer personnel within firms; and new frustrations for production people, used to placing work with printers with clear expectations of deadlines and costs, who gradually realized that new programs took time to develop, test, and de-bug, even if the actual processing they did took only hours to run.
Editors had to learn a whole new way of approaching proofs. Line-printer proofs either provided only the crudest typographic variation or were simply print-outs of the text as it had been keyed, complete with codes, without any typographic formatting. Proofreaders had to decipher codes, and line-ends and dubious hyphenation decisions had to be reviewed separately, before expensive film or bromide was run through the typesetter. Proofing, which had always integrated a read for textual accuracy with an assessment of visual accuracy, disintegrated into a series of separate checks. Designers were often on the periphery of early computerized typesetting – with so much coding, keying, and processing to worry about, intrusions from someone who might want to change the way things looked were not always welcome. Oxford tried hard to make its computer-set catalogues and directories look as similar to their hot-metal predecessors as possible, but the physical separation of the work (some forty miles away at Neasden, in OUP’s case) from the traditional locations of composing room and layout studio helped reinforce the differences of the process.
By now, typesetting devices could break single-column text into pages and add headlines and folios, or could set a multi-column page by reversing at the end of each column and then setting the subsequent column beside it. If illustrations were involved, page make-up (the combination of the different elements that make a page) remained a manual operation. True interactivity in page make-up had to await the development of page description languages in the 1980s. Before then, computerized composition moved from the “heroic” age of ad hoc programming and machine-specific configurations to the more systematic phase of front-end systems (Seybold 1984: 170). These multi-user systems supported the third generation of fast photo-composition machines, which imaged type on a CRT screen and transferred it to film or paper by an optical system (Seybold 1984: 112). They included the Linotron 505 (1968) and various Autologic APS machines; the latter stored founts digitally, rather than as photographic masters (Southall 2005: 143–7).
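As a rough illustration of the level of automation described above, the hedged sketch below (in Python; the page depth and headline text are invented, and real systems worked on justified composed output rather than plain strings) breaks a stream of composed lines into single-column pages, adding a running headline and folio to each.

```python
# Illustrative sketch only: single-column pagination with headline and folio.
# Page depth and headline text are invented values, not a real system's.

def paginate(lines, lines_per_page=40, headline="BOOKS AND BITS"):
    """Group composed lines into pages, prefixing each page with a
    running headline and its folio (page number)."""
    pages = []
    for start in range(0, len(lines), lines_per_page):
        folio = len(pages) + 1
        body = lines[start:start + lines_per_page]
        pages.append([f"{headline}   {folio}"] + body)
    return pages

composed = [f"composed line {n}" for n in range(1, 101)]
for page in paginate(composed):
    print(page[0], f"({len(page) - 1} lines of text)")
```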