Screens Shot | Jacob Gaboury | Friday, June 21, 2019

What You See Is What You Get

Having described some of the technical means by which screenshots were produced prior to the development of the modern computer screen, it seems important to note that my use of this term – screenshot – has been almost entirely anachronistic. In practice the term does not appear until the 1980s, when it is used primarily to describe the practice of photographing the graphical displays of early computer screens for reproduction in gaming and PC magazines, as well as in visual fields like graphic design. At the time this was a comparatively minor practice, used only when the exact appearance of the screen needed to be reproduced in its entirety. In other words, it was a technique for capturing the appearance of the screen more than the content or information it displayed. This is because in this period the vast majority of computing was text-based and non-graphical, such that a user who wished to preserve the information on their screen could simply output alphanumeric text to a printer. This distinction between appearance and information indicates an important shift in the way we describe and understand what a screenshot is and is used for, and points us to a number of adjacent terms that compete with the screenshot in this period but describe a similar process of reproducing or preserving the act of computation and its outputs.
Print screen dump plotted on an Apple II computer with an Axiom EX-820 MicroPlotter (ca. 1978).
Until the mid-1990s, the most common term for capturing the contents of a computer screen was “screen dump.” Perhaps surprising to us today, the term refers to dumping the content of a text-only screen into a text file, or even dumping the content of a graphical frame buffer to a printer. The action here is not the photographic capture or the weaponized shot but the emptying of content or data, the offloading of information from one object to another. The term begins to make sense if we consider it in its historical context. While in the 1960s and 1970s researchers were primarily concerned with establishing the algorithms, software, and hardware that would make interactive computing possible as a technical practice, by the end of the 1970s we begin to see this work commercialized, first for office and industry, and later for the growing home computing market. The principal player in this turn toward commercialization is arguably the Xerox Corporation, whose Palo Alto Research Center – known as Xerox PARC – effectively invented the personal computer in the first half of the 1970s before famously failing to commercialize its efforts.[1]

[1] Douglas K. Smith and Robert C. Alexander, Fumbling the Future: How Xerox Invented, then Ignored, the First Personal Computer (Lincoln, NE: iUniverse, 1999).
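The logic of the “dump” can be made concrete with a small sketch. The following Python fragment is purely illustrative – not historical code, and the screen contents are invented for the example – but it shows what dumping a text-only screen amounts to: the character grid held in memory is simply emptied, row by row, into a file.

```python
# Illustrative sketch (not historical code): a "screen dump" in the
# text-only sense, i.e. emptying the character grid of a terminal
# screen out into a plain text file.

ROWS, COLS = 24, 80  # a common text-mode screen size of the era

# A toy screen buffer: 24 rows of 80 characters, mostly blank.
screen = [list(" " * COLS) for _ in range(ROWS)]
for i, ch in enumerate("A>dir"):  # an invented command prompt
    screen[0][i] = ch

def dump_screen(buffer, path):
    """Dump the screen buffer into a text file, row by row."""
    with open(path, "w") as f:
        for row in buffer:
            f.write("".join(row).rstrip() + "\n")

dump_screen(screen, "screen.txt")
```

Nothing photographic happens here: no appearance is captured, only the alphanumeric information the screen holds is offloaded to another medium.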
At this time Xerox was invested in a number of related efforts. Perhaps most significant was the work of researchers developing what is arguably the first modern graphical user interface for use with the Xerox Alto personal computer, a unique machine developed in 1972 that in retrospect feels decades ahead of its time. As can be seen in the Xerox Alto commercial (1979) above, the Alto was meant to run in portrait mode using a screen the size of a sheet of paper. This design was intentional, as a secondary objective for Xerox at this time was developing a series of printers that could communicate with the computer using a custom page description language. The goal of this work was to create a system whereby a user could print a page exactly as it appeared on the screen, a technique that came to be called “What you see is what you get,” abbreviated as WYSIWYG.[2] Today we take this process for granted, assuming that documents printed from the screen of a computer will look identical – or nearly identical – to the software object from which they are derived; but in practice this was an enormous task, requiring a sea change in the way we understood and treated text as graphical objects for computation. It is unclear precisely when in the 1970s the phrase made this leap into the computing community, but its acronym explodes in popular use from the mid-1980s through to the early 2000s – precisely the period when the gap between the appearance of the computer screen and the artifacts it could be made to produce was most clearly felt. Yet this is also the period when computing most explicitly transformed the design and aesthetic of printed documents themselves, due largely to this early work at Xerox PARC.

[2] The term is derived, surprisingly enough, from a phrase popularized by Geraldine Jones, the drag persona of popular comedian Flip Wilson, who is explicitly referenced by employees at Xerox PARC in oral histories and interviews about this period.
By the end of the 1970s many of the company’s key researchers began to leave Xerox in order to commercialize the technologies they had helped to develop during their tenure in Palo Alto. In 1982 researcher John Warnock – the man largely responsible for the page description language that made WYSIWYG possible – leaves Xerox to co-found Adobe Systems, developing a new language called PostScript that completely transforms graphic design and print publishing, allowing words and letters to be scaled to any size and rotated to any angle, and making possible complex and artistic textual graphics unlike anything existing printing methods could produce. Indeed, much of the look of contemporary magazines and print is in part attributable to this change, perhaps most visible in the experimental typefaces and layouts of magazines like Emigre – founded in Berkeley in 1984 – which paved the way for new textual aesthetics in print and graphic design that are visible in almost any contemporary graphical publication.
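What PostScript made trivial can be glimpsed in a small sketch. The Python function below – a purely illustrative example, with arbitrary point size and angle rather than values from any historical document – generates a minimal PostScript program that sets a word at a chosen size and rotation, the very operations that were effectively impossible for conventional typesetting.

```python
# Illustrative sketch: generating a few lines of PostScript from Python
# to show the operations described above -- text set at an arbitrary
# size and rotated to an arbitrary angle. The defaults here are
# invented example values.

def rotated_text(text, size=48, angle=30, x=72, y=400):
    """Return a minimal PostScript program that draws `text`
    scaled to `size` points and rotated by `angle` degrees."""
    return "\n".join([
        "%!PS",
        f"/Times-Roman findfont {size} scalefont setfont",
        f"{x} {y} translate",  # move the origin to the text position
        f"{angle} rotate",     # rotate user space around that origin
        "0 0 moveto",
        f"({text}) show",      # paint the glyphs along the rotated baseline
        "showpage",
    ])

print(rotated_text("Emigre"))
```

In PostScript, scaling and rotation are transformations of the coordinate system itself, so the same `show` operator paints type at any size and angle – the flexibility that magazines like Emigre pushed to its limits.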
Emigre #11 spread (1989).
In the short period of a decade we move from trying to reproduce the look and function of paper documents with a computer, such that “what you see is what you get,” to an entirely new method for producing and arranging text as graphical objects, such that what you see could only be got from a computer. In doing so we move both metaphorically and materially from the screenshot as a hardware function – the “dump” of a PRTSCRN button – to the screen as an interface for the production and manipulation of all images, both physical and digital.