Hacking Away at the Counterculture

Andrew Ross

Princeton University

 

Ever since the viral attack engineered in November of 1988 by Cornell University hacker Robert Morris on the national network system Internet, which includes the Pentagon’s ARPAnet data exchange network, the nation’s high-tech ideologues and spin doctors have been locked in debate, trying to make ethical and economic sense of the event. The virus rapidly infected an estimated six thousand computers around the country, creating a scare that crowned an open season of viral hysteria in the media, in the course of which, according to the Computer Virus Industry Association in Santa Clara, the number of known viruses jumped from seven to thirty during 1988, and from three thousand infections in the first two months of that year to thirty thousand in the last two months. While it caused little in the way of data damage (some richly inflated initial estimates reckoned up to $100m in down time), the ramifications of the Internet virus have helped to generate a moral panic that has all but transformed everyday “computer culture.”

 

Following the lead of DARPA’s (Defense Advanced Research Projects Agency) Computer Emergency Response Team at Carnegie-Mellon University, anti-virus response centers were hastily put in place by government and defence agencies at the National Science Foundation, the Energy Department, NASA, and other sites. Plans were made to introduce a bill in Congress (the Computer Virus Eradication Act, to replace the 1986 Computer Fraud and Abuse Act, which pertained solely to government information) that would call for prison sentences of up to ten years for the “crime” of sophisticated hacking, and numerous government agencies have been involved in a proprietary fight over the creation of a proposed Center for Virus Control, modelled, of course, on Atlanta’s Centers for Disease Control, notorious for its failures to respond adequately to the AIDS crisis.

 

In fact, media commentary on the virus scare has run not so much tongue-in-cheek as hand-in-glove with the rhetoric of AIDS hysteria–the common use of terms like killer virus and epidemic; the focus on high-risk personal contact (virus infection, for the most part, is spread on personal computers, not mainframes); the obsession with defense, security, and immunity; and the climate of suspicion generated around communitarian acts of sharing. The underlying moral imperative being this: You can’t trust your best friend’s software any more than you can trust his or her bodily fluids–safe software or no software at all! Or, as Dennis Miller put it on Saturday Night Live, “Remember, when you connect with another computer, you’re connecting to every computer that computer has ever connected to.” This playful conceit struck a chord in the popular consciousness, even as it was perpetuated in such sober quarters as the Association for Computing Machinery, the president of which, in a controversial editorial titled “A Hygiene Lesson,” drew comparisons not only with sexually transmitted diseases, but also with a cholera epidemic, and urged attention to “personal systems hygiene.”1 Indeed, some computer scientists who studied the symptomatic path of Morris’s virus across Internet have pointed to its uneven effects upon different computer types and operating systems, and concluded that “there is a direct analogy with biological genetic diversity to be made.”2 The epidemiology of biological viruses, and especially of AIDS, is being closely studied to help implement computer security plans, and, in these circles, the new witty discourse is laced with references to antigens, white blood cells, vaccinations, metabolic free radicals, and the like.

 

The form and content of more lurid articles like Time’s infamous (September 1988) story, “Invasion of the Data Snatchers,” fully displayed the continuity of the media scare with those historical fears about bodily invasion, individual and national, that are often considered endemic to the paranoid style of American political culture.3 Indeed, the rhetoric of computer culture, in common with the medical discourse of AIDS research, has fallen in line with the paranoid, strategic style of Defence Department rhetoric. Each language-repertoire is obsessed with hostile threats to bodily and technological immune systems; every event is a ballistic maneuver in the game of microbiological war, where the governing metaphors are indiscriminately drawn from cellular genetics and cybernetics alike. As a counterpoint to the tongue-in-cheek AI tradition of seeing humans as “information-exchanging environments,” the imagined life of computers has taken on an organicist shape, now that they too are subject to cybernetic “sickness” or disease. So, too, the development of interrelated systems, such as Internet itself, has further added to the structural picture of an interdependent organism, whose component members, however autonomous, are all nonetheless affected by the “health” of each individual constituent. The growing interest among scientists in developing computer programs that will simulate the genetic behavior of living organisms (in which binary numbers act like genes) points to a future where the border between organic and artificial life is less and less distinct.

 

In keeping with the increasing use of biologically derived language to describe mutations in systems theory, conscious attempts to link the AIDS crisis with the information security crisis have pointed out that both kinds of virus, biological and electronic, take over the host cell/program and clone their carrier genetic codes by instructing the hosts to make replicas of the viruses. Neither kind of virus, however, can replicate itself independently; each is a piece of code that attaches itself to other cells/programs–just as biological viruses need a host cell, computer viruses require a host program to activate them. The Internet virus was not, in fact, a virus, but a worm, a program that can run independently and therefore appears to have a life of its own. The worm replicates a full version of itself in programs and systems as it moves from one to another, masquerading as a legitimate user by guessing the user passwords of locked accounts. Because of this autonomous existence, the worm can be seen to behave as if it were an organism with some kind of purpose or teleology, and yet it has none. Its only “purpose” is to reproduce and infect. If the worm has no inbuilt antireplication code, or if the code is faulty, as was the case with the Internet worm, it will make already-infected computers repeatedly accept further replicas of itself, until their memories are clogged. A much quieter worm than that engineered by Morris would have moved more slowly, as one supposes a “worm” should, protecting itself from detection by ever more subtle camouflage, and propagating its cumulative effect of operating-system inertia over a much longer period of time.
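The consequence of a faulty antireplication check can be sketched in a toy simulation (this is an invented illustration, not Morris’s actual code; the `simulate_worm` helper and all its parameters are hypothetical). With a working check, an already-infected host refuses duplicate copies and every host carries at most one; with a faulty check, the copies pile up exactly as the paragraph above describes:

```python
import random

def simulate_worm(n_hosts=50, steps=30, antireplication_works=True, seed=1):
    """Toy model of worm spread across a network of hosts.

    Each round, every infected host sends one copy of the worm to a
    randomly chosen host. If the antireplication check works, an
    already-infected host refuses the duplicate; if the check is
    faulty, duplicates accumulate and the host's memory fills with
    redundant copies.
    """
    rng = random.Random(seed)
    copies = [0] * n_hosts  # number of worm copies resident on each host
    copies[0] = 1           # the initial infection
    for _ in range(steps):
        for src in range(n_hosts):
            if copies[src] == 0:
                continue  # uninfected hosts send nothing
            dst = rng.randrange(n_hosts)
            if antireplication_works and copies[dst] > 0:
                continue  # duplicate refused: at most one copy per host
            copies[dst] += 1  # faulty check: another copy accepted
    return copies

quiet = simulate_worm(antireplication_works=True)
noisy = simulate_worm(antireplication_works=False)
print(max(quiet), max(noisy))  # the working check caps every host at one copy
```

The model also suggests why a “quieter” worm is a design trade-off: lowering the sending rate slows both the spread and the memory clogging, buying stealth at the cost of speed.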

 

In offering such descriptions, however, we must be wary of attributing a teleology/intentionality to worms and viruses which can be ascribed only, and, in most instances, speculatively, to their authors. There is no reason why a cybernetic “worm” might be expected to behave in any fundamental way like a biological worm. So, too, the assumed intentionality of its author distinguishes the human-made cybernetic virus from the case of the biological virus, the effects of which are fated to be received and discussed in a language saturated with human-made structures and narratives of meaning and teleological purpose. Writing about the folkloric theologies of significance and explanatory justice (usually involving retribution) that have sprung up around the AIDS crisis, Judith Williamson has pointed to the radical implications of this collision between an intentionless virus and a meaning-filled culture: “Nothing could be more meaningless than a virus. It has no point, no purpose, no plan; it is part of no scheme, carries no inherent significance. And yet nothing is harder for us to confront than the complete absence of meaning. By its very definition, meaninglessness cannot be articulated within our social language, which is a system of meaning: impossible to include, as an absence, it is also impossible to exclude–for meaninglessness isn’t just the opposite of meaning, it is the end of meaning, and threatens the fragile structures by which we make sense of the world.”4

 

No such judgment about meaninglessness applies to the computer security crisis. In contrast to HIV’s lack of meaning or intentionality, the meaning of cybernetic viruses is always already replete with social significance. This meaning is related, first of all, to the author’s local intention or motivation, whether psychic or fully social, whether wrought out of a mood of vengeance, a show of bravado or technical expertise, a commitment to a political act, or in anticipation of the profits that often accrue from the victims’ need to buy an antidote from the author. Beyond these local intentions, however, which are usually obscure or, as in the Morris case, quite inscrutable, there is an entire set of social and historical narratives that surround and are part of the “meaning” of the virus: the coded anarchist history of the youth hacker subculture; the militaristic environments of search-and-destroy warfare (a virus has two components–a carrier and a “warhead”), which, because of the historical development of computer technology, constitute the family values of information techno-culture; the experimental research environments in which creative designers are encouraged to work; and the conflictual history of pure and applied ethics in the science and technology communities, to name just a few. A similar list could be drawn up to explain the widespread and varied response to computer viruses, from the amused concern of the cognoscenti to the hysteria of the casual user, and from the research community and the manufacturing industry to the morally aroused legislature and the mediated culture at large. 
Every one of these explanations and narratives is the result of social and cultural processes and values; consequently, there is very little about the virus itself that is “meaningless.” Viruses can no more be seen as an objective, or necessary, result of the “objective” development of technological systems than technology in general can be seen as an objective, determining agent of social change.

 

For the sake of polemical economy, I would note that the cumulative effect of all the viral hysteria has been twofold. Firstly, it has resulted in a windfall for software producers, now that users’ blithe disregard for makers’ copyright privileges has eroded in the face of the security panic. Used to fighting halfhearted rearguard actions against widespread piracy practices, or reluctantly acceding to buyers’ desire for software unencumbered by top-heavy security features, software vendors are now profiting from the new public distrust of program copies. So, too, the explosion in security consciousness has hyperstimulated the already fast-growing sectors of the security system industry and the data encryption industry. In line with the new imperative for everything from “vaccinated” workstations to “sterilized” networks, it has created a brand new market of viral vaccine vendors who will sell you the virus (a one-time only immunization shot) along with its antidote–with names like Flu Shot +, ViruSafe, Vaccinate, Disk Defender, Certus, Viral Alarm, Antidote, Virus Buster, Gatekeeper, Ongard, and Interferon. Few of the antidotes are very reliable, however, especially since they pose an irresistible intellectual challenge to hackers who can easily rewrite them in the form of ever more powerful viruses. Moreover, most corporate managers of computer systems and networks know that by far the great majority of their intentional security losses are a result of insider sabotage and monkeywrenching.

 

In short, the effects of the viruses have been to profitably clamp down on copyright delinquency, and to generate the need for entirely new industrial production of viral suppressors to contain the fallout. In this respect, it is easy to see that the appearance of viruses could hardly, in the long run, have benefited industry producers more. In the same vein, the networks that have been hardest hit by the security squeeze are not restricted-access military or corporate systems but networks like Internet, set up on trust to facilitate the open academic exchange of data, information and research, and watched over by its sponsor, DARPA. It has not escaped the notice of conspiracy theorists that the military intelligence community, obsessed with “electronic warfare,” actually stood to learn a lot from the Internet virus; the virus effectively “pulsed the system,” exposing the sociological behaviour of the system in a crisis situation.5 The second effect of the virus crisis has been more overtly ideological. Virus-conscious fear and loathing have clearly fed into the paranoid climate of privatization that increasingly defines social identities in the new post-Fordist order. The result–a psycho-social closing of the ranks around fortified private spheres–runs directly counter to the ethic that we might think of as residing at the architectural heart of information technology. In its basic assembly structure, information technology is a technology of processing, copying, replication, and simulation, and therefore does not recognize the concept of private information property. What is now under threat is the rationality of a shareware culture, ushered in as the achievement of the hacker counterculture that pioneered the personal computer revolution in the early seventies against the grain of corporate planning.

 

There is another story to tell, however, about the emergence of the virus scare as a profitable ideological moment, and it is the story of how teenage hacking has come to be increasingly defined as a potential threat to normative educational ethics and national security alike. The story of the creation of this “social menace” is central to the ongoing attempts to rewrite property law in order to contain the effects of the new information technologies that, because of their blindness to the copyrighting of intellectual property, have transformed the way in which modern power is exercised and maintained. Consequently, a deviant social class or group has been defined and categorised as “enemies of the state” in order to help rationalize a general law-and-order clampdown on free and open information exchange. Teenage hackers’ homes are now habitually raided by sheriffs and FBI agents using strong-arm tactics, and jail sentences are becoming a common punishment. Operation Sundevil, a nationwide Secret Service operation in the spring of 1990, involving hundreds of agents in fourteen cities, is the most recently publicized of the hacker raids that have produced several arrests and seizures of thousands of disks and address lists in the last two years.6

 

In one of the many harshly punitive prosecutions against hackers in recent years, a judge went so far as to describe “bulletin boards” as “hi-tech street gangs.” The editors of 2600, the magazine that publishes information about system entry and exploration that is indispensable to the hacking community, have pointed out that any single invasive act, such as that of trespass, that involves the use of computers is considered today to be infinitely more criminal than a similar act undertaken without computers.7 To use computers to execute pranks, raids, frauds or thefts is to incur automatically the full repressive wrath of judges urged on by the moral panic created around hacking feats over the last two decades. Indeed, there is a strong body of pressure groups pushing for new criminal legislation that will define “crimes with computers” as a special category of crime, deserving “extraordinary” sentences and punitive measures. Over that same space of time, the term hacker has lost its semantic link with the journalistic hack, suggesting a professional toiler who uses unorthodox methods. So, too, its increasingly criminal connotation today has displaced the more innocuous, amateur mischief-maker-cum-media-star role reserved for hackers until a few years ago.

 

In response to the gathering vigor of this “war on hackers,” the most common defences of hacking can be presented on a spectrum that runs from the appeasement or accommodation of corporate interests to drawing up blueprints for cultural revolution. (a) Hacking performs a benign industrial service of uncovering security deficiencies and design flaws. (b) Hacking, as an experimental, free-form research activity, has been responsible for many of the most progressive developments in software development. (c) Hacking, when not purely recreational, is an elite educational practice that reflects the ways in which the development of high technology has outpaced orthodox forms of institutional education. (d) Hacking is an important form of watchdog counterresponse to the use of surveillance technology and data gathering by the state, and to the increasingly monolithic communications power of giant corporations. (e) Hacking, as guerrilla know-how, is essential to the task of maintaining fronts of cultural resistance and stocks of oppositional knowledge as a hedge against a technofascist future. With all of these and other arguments in mind, it is easy to see how the social and cultural management of hacker activities has become a complex process that involves state policy and legislation at the highest levels. In this respect, the virus scare has become an especially convenient vehicle for obtaining public and popular consent for new legislative measures and new powers of investigation for the FBI.8

 

Consequently, certain celebrity hackers have been quick to play down the zeal with which they pursued their earlier hacking feats, while reinforcing the deviant category of “technological hooliganism” reserved by moralizing pundits for “dark-side” hacking. Hugo Cornwall, British author of the bestselling Hacker’s Handbook, presents a Little England view of the hacker as a harmless fresh-air enthusiast who “visits advanced computers as a polite country rambler might walk across picturesque fields.” The owners of these properties are like “farmers who don’t mind careful ramblers.” Cornwall notes that “lovers of fresh-air walks obey the Country Code, involving such items as closing gates behind one and avoiding damage to crops and livestock” and suggests that a similar code ought to “guide your rambles into other people’s computers; the safest thing to do is simply browse, enjoy and learn.” By contrast, any rambler who “ventured across a field guarded by barbed wire and dotted with notices warning about the Official Secrets Act would deserve most that happened thereafter.”9 Cornwall’s quaint perspective on hacking has a certain “native charm,” but some might think that this beguiling picture of patchwork-quilt fields and benign gentleman farmers glosses over the long bloody history of power exercised through feudal and postfeudal land economy in England, while it is barely suggestive of the new fiefdoms, transnational estates, dependencies, and principalities carved out of today’s global information order by vast corporations capable of bypassing the laws and territorial borders of sovereign nation-states. 
In general, this analogy with “trespass” laws, which compares hacking to breaking and entering other people’s homes, restricts the debate to questions about privacy, property, possessive individualism, and, at best, the excesses of state surveillance, while it closes off any examination of the activities of the corporate owners and institutional sponsors of information technology (the almost exclusive “target” of most hackers).10

 

Cornwall himself has joined the lucrative ranks of ex-hackers who either work for computer security firms or write books about security for the eyes of worried corporate managers.11 A different, though related, genre is that of the penitent hacker’s “confession,” produced for an audience thrilled by tales of high-stakes adventure at the keyboard, but written in the form of a computer security handbook. The best example of the “I Was a Teenage Hacker” genre is Bill (aka “The Cracker”) Landreth’s Out of the Inner Circle: The True Story of a Computer Intruder Capable of Cracking the Nation’s Most Secure Computer Systems, a book about “people who can’t ‘just say no’ to computers.” In full complicity with the deviant picture of the hacker as “public enemy,” Landreth recirculates every official and media cliché about subversive conspiratorial elites by recounting the putative exploits of a high-level hackers’ guild called the Inner Circle. The author himself is presented in the book as a former keyboard junkie who now praises the law for having made a good moral example of him: “If you are wondering what I am like, I can tell you the same things I told the judge in federal court: Although it may not seem like it, I am pretty much a normal American teenager. I don’t drink, smoke or take drugs. I don’t steal, assault people, or vandalize property. The only way in which I am really different from most people is in my fascination with the ways and means of learning about computers that don’t belong to me.”12 Sentenced in 1984 to three years probation, during which time he was obliged to finish his high school education and go to college, Landreth concludes: “I think the sentence is very fair, and I already know what my major will be….” As an aberrant sequel to the book’s contrite conclusion, however, Landreth vanished in 1986, violating his probation, only to face later a stiff five-year jail sentence–a sorry victim, no doubt, of the recent crackdown.

Cyber-Counterculture?

 

At the core of Steven Levy’s bestseller Hackers (1984) is the argument that the hacker ethic, first articulated in the 1950s among the famous MIT students who developed multiple-access user systems, is libertarian and crypto-anarchist in its right-to-know principles and its advocacy of decentralized technology. This hacker ethic, which has remained the preserve of a youth culture for the most part, asserts the basic right of users to free access to all information. It is a principled attempt, in other words, to challenge the tendency to use technology to form information elites. Consequently, hacker activities were presented in the eighties as a romantic countercultural tendency, celebrated by critical journalists like John Markoff of the New York Times, by Stewart Brand of Whole Earth Catalog fame, and by New Age gurus like Timothy Leary in the flamboyant Reality Hackers. Fuelled by sensational stories about phone phreaks like Joe Engressia (the blind eight-year-old who discovered the phone company’s tone signal by whistling) and Cap’n Crunch, groups like the Milwaukee 414s, the Los Angeles ARPAnet hackers, the SPAN Data Travellers, the Chaos Computer Club of Hamburg, the British Prestel hackers, 2600’s BBS, “The Private Sector,” and others, the dominant media representation of the hacker came to be that of the “rebel with a modem,” to use Markoff’s term, at least until the more recent “war on hackers” began to shape media coverage.

 

On the one hand, this popular folk hero persona offered the romantic high profile of a maverick though nerdy cowboy whose fearless raids upon an impersonal “system” were perceived as a welcome tonic in the gray age of technocratic routine. On the other hand, he was something of a juvenile technodelinquent who hadn’t yet learned the difference between right and wrong–a wayward figure whose technical brilliance and proficiency differentiated him nonetheless from, say, the maladjusted working-class J.D. street-corner boy of the 1950s (hacker mythology, for the most part, has been almost exclusively white, masculine, and middle-class). One result of this media profile was a persistent infantilization of the hacker ethic–a way of trivializing its embryonic politics, however finally complicit with dominant technocratic imperatives or with entrepreneurial-libertarian ideology one perceives these politics to be. The second result was to reinforce, in the initial absence of coercive jail sentences, the high educational stakes of training the new technocratic elites to be responsible in their use of technology. Never, the given wisdom goes, has a creative elite of the future been so in need of the virtues of a liberal education steeped in Western ethics!

 

The full force of this lesson in computer ethics can be found laid out in the official Cornell University report on the Robert Morris affair. Members of the university commission set up to investigate the affair make it quite clear in their report that they recognize the student’s academic brilliance. His hacking, moreover, is described as a “juvenile act” that had no “malicious intent” but that amounted, like plagiarism, the traditional academic heresy, to a dishonest transgression of other users’ rights. (In recent years, the privacy movement within the information community–a movement mounted by liberals to protect civil rights against state gathering of information–has actually been taken up and used as a means of criminalizing hacker activities.) As for the consequences of this juvenile act, the report proposes an analogy that, in comparison with Cornwall’s mature English country rambler, is thoroughly American, suburban, middle-class and juvenile. Unleashing the Internet worm was like “the driving of a golf-cart on a rainy day through most houses in the neighborhood. The driver may have navigated carefully and broken no china, but it should have been obvious to the driver that the mud on the tires would soil the carpets and that the owners would later have to clean up the mess.”13

 

In what stands out as a stiff reprimand for his alma mater, the report regrets that Morris was educated in an “ambivalent atmosphere” where he “received no clear guidance” about ethics from “his peers or mentors” (he went to Harvard!). But it reserves its loftiest academic contempt for the press, whose heroization of hackers has been so irresponsible, in the commission’s opinion, as to cause even further damage to the standards of the computing profession; media exaggeration of the courage and technical sophistication of hackers “obscures the far more accomplished work of students who complete their graduate studies without public fanfare,” and “who subject their work to the close scrutiny and evaluation of their peers, and not to the interpretations of the popular press.”14 In other words, this was an inside affair, to be assessed and judged by fellow professionals within an institution that reinforces its authority by means of internally self-regulating codes of professionalist ethics, but rarely addresses its ethical relationship to society as a whole (acceptance of defence grants, and the like). Generally speaking, the report affirms the genteel liberal ideal that professionals should not need laws, rules, procedural guidelines, or fixed guarantees of safe and responsible conduct. Apprentice professionals ought to have acquired a good conscience by osmosis from a liberal education rather than from some specially prescribed course in ethics and technology.

 

The widespread attention commanded by the Cornell report (attention from the Association for Computing Machinery, among others) demonstrates the industry’s interest in how the academy invokes liberal ethics in order to assist in managing the organization of the new specialized knowledge about information technology. Despite or, perhaps, because of the report’s steadfast pledge to the virtues and ideals of a liberal education, it bears all the marks of a legitimation crisis inside (and outside) the academy surrounding the new and all-important category of computer professionalism. The increasingly specialized design knowledge demanded of computer professionals means that codes that go beyond the old professionalist separation of mental and practical skills are needed to manage the division that a hacker’s functional talents call into question, between a purely mental pursuit and the pragmatic sphere of implementing knowledge in the real world. “Hacking” must then be designated as a strictly amateur practice; the tension, in hacking, between interestedness and disinterestedness is different from, and deficient in relation to, the proper balance demanded by professionalism. Alternately, hacking can be seen as the amateur flip side of the professional ideal–a disinterested love in the service of interested parties and institutions. In either case, it serves as an example of professionalism gone wrong, but not very wrong.

 

In common with the two responses to the virus scare described earlier–the profitable reaction of the computer industry and the self-empowering response of the legislature– the Cornell report shows how the academy uses a case like the Morris affair to strengthen its own sense of moral and cultural authority in the sphere of professionalism, particularly through its scornful indifference to and aloofness from the codes and judgements exercised by the media–its diabolic competitor in the field of knowledge. Indeed, for all the trumpeting about excesses of power and disrespect for the law of the land, the revival of ethics, in the business and science disciplines in the Ivy League and on Capitol Hill (both awash with ethical fervor in the post-Boesky and post-Reagan years), is little more than a weak liberal response to working flaws or adaptational lapses in the social logic of technocracy.

 

To complete the scenario of morality play example-making, however, we must also consider that Morris’s father was chief scientist of the National Computer Security Center, the National Security Agency’s public effort at safeguarding computer security. A brilliant programmer and codebreaker in his own right, he had testified in Washington in 1983 about the need to deglamorise teenage hacking, comparing it to “stealing a car for the purpose of joyriding.” In a further Oedipal irony, Morris Sr. may have been one of the inventors, while at Bell Labs in the 1950s, of a computer game involving self-perpetuating programs that were a prototype of today’s worms and viruses. The game, called Darwin, was based on principles that were incorporated, in the eighties, into a popular hacker game called Core War, in which autonomous “killer” programs fought each other to the death.15

 

With the appearance, in the Morris affair, of a patricidal object who is also the Pentagon’s guardian angel, we now have many of the classic components of countercultural cross-generational conflict. What I want to consider, however, is how and where this scenario differs from the definitive contours of such conflicts that we recognize as having been established in the sixties; how the Cornell hacker Morris’s relation to, say, campus “occupations” today is different from that evoked by the famous image of armed black students emerging from a sit-in on the Cornell campus; how the relation to technological ethics differs from Andrew Kopkind’s famous statement “Morality begins at the end of a gun barrel” which accompanied the publication of the do-it-yourself Molotov cocktail design on the cover of a 1968 issue of the New York Review of Books; or how hackers’ prized potential access to the networks of military systems warfare differs from the prodigious Yippie feat of levitating the Pentagon building. It may be that, like the J.D. rebel without a cause of the fifties, the disaffiliated student dropout of the sixties, and the negationist punk of the seventies, the hacker of the eighties has come to serve as a visible public example of moral maladjustment, a hegemonic test case for redefining the dominant ethics in an advanced technocratic society. (Hence the need for each of these deviant figures to come in different versions– lumpen, radical chic, and Hollywood-style.)

 

What concerns me here, however, are the different conditions that exist today for recognizing countercultural expression and activism. Twenty years later, the technology of hacking and viral guerrilla warfare occupies a place in countercultural fantasy similar to the one the Molotov cocktail design once occupied. While I don’t, for one minute, mean to insist on such comparisons, which aren’t particularly sound anyway, I think they conveniently mark a shift in the relation of countercultural activity to technology, a shift in which a software-based technoculture, organized around outlawed libertarian principles about free access to information and communication, has come to replace a dissenting culture organized around the demonizing of abject hardware structures. Much, though not all, of the sixties counterculture was formed around what I have elsewhere called the technology of folklore–an expressive congeries of preindustrialist, agrarianist, Orientalist, antitechnological ideas, values, and social structures. By contrast, the cybernetic countercultures of the nineties are already being formed around the folklore of technology–mythical feats of survivalism and resistance in a data-rich world of virtual environments and posthuman bodies–which is where many of the SF- and technology-conscious youth cultures have been assembling in recent years.16

 

There is no doubt that this scenario makes countercultural activity more difficult to recognize and therefore to define as politically significant. It was much easier, in the sixties, to identify the salient features and symbolic power of a romantic preindustrialist cultural politics in an advanced technological society, especially when the destructive evidence of America’s supertechnological invasion of Vietnam was being daily paraded in front of the public eye. However, in a society whose technopolitical infrastructure depends increasingly upon greater surveillance, cybernetic activism necessarily relies on a much more covert politics of identity, since access to closed systems requires discretion and dissimulation. Access to digital systems still requires only the authentication of a signature or pseudonym, not the identification of a real surveillable person, so there exists a crucial operative gap between authentication and identification. (As security systems move toward authenticating access through biological signatures– the biometric recording and measurement of physical characteristics such as palm or retinal prints, or vein patterns on the backs of hands–the hacker’s staple method of systems entry through purloined passwords will be further challenged.) By the same token, cybernetic identity is never used up, it can be recreated, reassigned, and reconstructed with any number of different names and under different user accounts. Most hacks, or technocrimes, go unnoticed or unreported for fear of publicising the vulnerability of corporate security systems, especially when the hacks are performed by disgruntled employees taking their vengeance on management. So, too, authoritative identification of any individual hacker, whenever it occurs, is often the result of accidental leads rather than systematic detection. 
For example, Captain Midnight, the video pirate who commandeered a satellite a few years ago to interrupt broadcast TV viewing, was traced only because a member of the public reported a suspicious conversation heard over a crossed telephone line.
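The operative gap between authentication and identification described above can be made concrete in a few lines of present-day Python. A system of this kind verifies only that whoever presents a handle knows its secret; it stores nothing tying the handle to a real, surveillable person, and nothing stops one person from holding any number of handles. The handles and passphrases below are invented:

```python
import hashlib
import hmac
import os

# A pseudonymous login table: each handle maps to a salted password hash.
# The system can authenticate (does the caller know the secret?) without
# ever identifying (who, physically, is the caller?).

accounts = {}

def register(handle, passphrase):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    accounts[handle] = (salt, digest)

def authenticate(handle, passphrase):
    if handle not in accounts:
        return False
    salt, digest = accounts[handle]
    attempt = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)   # constant-time comparison

register("night-flyer", "purloined-password")
register("night-flyer-2", "purloined-password")   # identity is never used up
```

Biometric schemes close this gap precisely by making the credential a property of the body rather than a secret that can be known, shared, or, as Ross puts it, purloined.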

 

Moving beyond its core constituency among white males of the pre-professional-managerial class, the hacker community may be expanding its parameters outward. Hacking, for example, has become a feature of the young adult mystery-and-suspense novel genre for girls.17 The elitist class profile of the hacker prodigy as that of an undersocialized college nerd has become democratized and customized in recent years; it is no longer exclusively associated with institutionally acquired college expertise, and increasingly it dresses streetwise. In a recent article which documents the spread of the computer underground from college whiz kids to a broader youth subculture termed “cyberpunks,” after the movement among SF novelists, the original hacker phone phreak Cap’n Crunch is described as lamenting the fact that the cyberculture is no longer an “elite” one, and that hacker-valid information is much easier to obtain these days.18

 

For the most part, however, the self-defined hacker underground, like many other protocountercultural tendencies, has been restricted to a privileged social milieu, further magnetised by the self-understanding of its members that they are the apprentice architects of a future dominated by knowledge, expertise, and “smartness,” whether human or digital. Consequently, it is clear that the hacker cyberculture is not a dropout culture; its disaffiliation from a domestic parent culture is often manifest in activities that answer, directly or indirectly, to the legitimate needs of industrial R&D. For example, this hacker culture celebrates high productivity, maverick forms of creative work energy, and an obsessive identification with on-line endurance (and endorphin highs)–all qualities that are valorised by the entrepreneurial codes of silicon futurism. In a critique of the myth of the hacker-as-rebel, Dennis Hayes debunks the political romance woven around the teenage hacker: They are typically white, upper-middle-class adolescents who have taken over the home computer (bought, subsidized, or tolerated by parents in the hope of cultivating computer literacy). Few are politically motivated although many express contempt for the “bureaucracies” that hamper their electronic journeys. Nearly all demand unfettered access to intricate and intriguing computer networks. In this, teenage hackers resemble an alienated shopping culture deprived of purchasing opportunities more than a terrorist network.19

 

While welcoming the sobriety of Hayes’s critique, I am less willing to accept its assumptions about the political implications of hacker activities. Studies of youth subcultures (including those of a privileged middle-class formation) have taught us that the political meaning of certain forms of cultural “resistance” is notoriously difficult to read. These meanings are either highly coded or expressed indirectly through media–private peer languages, customized consumer styles, unorthodox leisure patterns, categories of insider knowledge and behavior–that have no fixed or inherent political significance. If cultural studies of this sort have proved anything, it is that the often symbolic, not wholly articulate, expressivity of a youth culture can seldom be translated directly into an articulate political philosophy. The significance of these cultures lies in their embryonic or protopolitical languages and technologies of opposition to dominant or parent systems of rules. If hackers lack a “cause,” then they are certainly not the first youth culture to be characterized in this dismissive way. In particular, the left has suffered from the lack of a cultural politics capable of recognizing the power of cultural expressions that do not wear a mature political commitment on their sleeves. So, too, the escalation of activism-in-the-professions in the last two decades has shown that it is a mistake to condemn the hacker impulse on account of its class constituency alone. To cede the “ability to know” on the grounds that elite groups will enjoy unjustly privileged access to technocratic knowledge is to cede too much of the future. Is it of no political significance at all that hackers’ primary fantasies often involve the official computer systems of the police, armed forces, and defence and intelligence agencies?
And that the rationale for their fantasies is unfailingly presented in the form of a defence of civil liberties against the threat of centralized intelligence and military activities? Or is all of this merely a symptom of an apprentice elite’s fledgling will to masculine power? The activities of the Chinese student elite in the pro-democracy movement have shown that unforeseen shifts in the political climate can produce startling new configurations of power and resistance. After Tiananmen Square, Party leaders found it imprudent to purge those high-tech engineer and computer cadres who alone could guarantee the future of any planned modernization program. On the other hand, the authorities rested uneasy knowing that each cadre (among the most activist groups in the student movement) is a potential hacker who can have the run of the communications house if and when he or she wants.

 

On the other hand, I do agree with Hayes’s perception that the media have pursued their romance with the hacker at the cost of underreporting the much greater challenge posed to corporate employers by their employees. It is in the arena of conflicts between workers and management that most high-tech “sabotage” takes place. In the mainstream everyday life of office workers, mostly female, there is a widespread culture of unorganized sabotage that accounts for infinitely more computer downtime and information loss every year than is caused by destructive, “dark-side” hacking by celebrity cybernetic intruders. The sabotage, time theft, and strategic monkeywrenching deployed by office workers in their engineered electromagnetic attacks on data storage and operating systems might range from the planting of time or logic bombs to the discreet use of electromagnetic Tesla coils or simple bodily friction: “Good old static electricity discharged from the fingertips probably accounts for close to half the disks and computers wiped out or down every year.”20 More skilled operators, intent on evening a score with management, often utilize sophisticated hacking techniques. In many cases, a coherent networking culture exists among female console operators, where, among other things, tips about strategies for slowing down the temporality of the work regime are circulated. While these threats from below are fully recognized in their boardrooms, corporations dependent upon digital business machines are obviously unwilling to advertise how acutely vulnerable they actually are to this kind of sabotage. It is easy to imagine how organized computer activism could hold such companies for ransom.
As Hayes points out, however, it is more difficult to mobilize any kind of labor movement organized upon such premises: Many are prepared to publicly oppose the countless dark legacies of the computer age: “electronic sweatshops,” military technology, employee surveillance, genotoxic water, and ozone depletion. Among those currently leading the opposition, however, it is apparently deemed “irresponsible” to recommend an active computerized resistance as a source of workers’ power because it is perceived as a medium of employee crime and “terrorism.”21 Processed World, the “magazine with a bad attitude” with which Hayes has been associated, is at the forefront of debating and circulating these questions among office workers, regularly tapping into the resentments borne out in on-the-job resistance.

 

While only a small number of computer users would recognize and include themselves under the label of “hacker,” there are good reasons for extending the restricted definition of hacking down and across the caste system of systems analysts, designers, programmers, and operators to include all high-tech workers, no matter how inexpert, who can interrupt, upset, and redirect the smooth flow of structured communications that dictates their positions in the social networks of exchange and determines the temporality of their work schedules. To put it in these terms, however, is not to offer any universal definition of hacker agency. There are many social agents, for example, in job locations that are dependent upon the hope of technological reskilling, for whom sabotage or disruption of communicative rationality is of little use; for such people, definitions of hacking that are reconstructive, rather than deconstructive, are more appropriate. A good example is the crucial role of worker technoliteracy in the struggle of labor against automation and deskilling. When worker education classes in computer programming were discontinued by management at the Ford Rouge plant in Dearborn, Michigan, union (UAW) members began to publish a newsletter called the Amateur Computerist to fill the gap.22 Among the columnists and correspondents in the magazine have been veterans of the Flint sit-down strikes who see a clear historical continuity between the problem of labor organization in the thirties and the problem of automation and deskilling today. Workers’ computer literacy is seen as essential not only to the demystification of the computer and the reskilling of workers, but also to labor’s capacity to intervene in decisions about new technologies that might result in shorter hours and thus in “work efficiency” rather than worker efficiency.

 

The three social locations I have mentioned above all express different class relations to technology: the location of an apprentice technical elite, conventionally associated with the term “hacking”; the location of the female high-tech office worker, involved in “sabotage”; and the location of the shop-floor worker, whose future depends on technological reskilling. All therefore exhibit different ways of claiming back time dictated and appropriated by technological processes, and of establishing some form of independent control over the work relation so determined by the new technologies. All, then, fall under a broad understanding of the politics involved in any extended description of hacker activities.

 

The Culture and Technology Question

 

Faced with these proliferating practices in the workplace, on the teenage cult fringe, and increasingly in mainstream entertainment, where, over the last five years, the cyberpunk sensibility in popular fiction, film, and television has caught the romance of the popular taste for the outlaw technology of human/machine interfaces, we are obliged, I think, to ask old kinds of questions about the new silicon order which the evangelists of information technology have been deliriously proclaiming for more than twenty years. The postindustrialists’ picture of a world of freedom and abundance projects a sunny millenarian future devoid of work drudgery and ecological degradation. This sunny social order, cybernetically wired up, is presented as an advanced evolutionary phase of society in accord with Enlightenment ideals of progress and rationality. By contrast, critics of this idealism see only a frightening advance in the technologies of social control, whose owners and sponsors are efficiently shaping a society, as Kevin Robins and Frank Webster put it, of “slaves without Athens” that is actually the inverse of the “Athens without slaves” promised by the silicon positivists.23

 

It is clear that one of the political features of the new post-Fordist order–economically marked by short-run production, diverse taste markets, flexible specialization, and product differentiation–is that the New Right has managed to appropriate not only the utopian language and values of the alternative technology movements but also the marxist discourse of the “withering away of the state” and the more compassionate vision of local, decentralized communications first espoused by the libertarian left. It must be recognized that these are very popular themes and visions (advanced most famously by Alvin Toffler and the neoliberal Atari Democrats, though also by leftist thinkers such as André Gorz, Rudolf Bahro, and Alain Touraine)–much more popular, for example, than the tradition of centralized technocratic planning espoused by the left under the Fordist model of mass production and consumption.24 Against the postindustrialists’ millenarian picture of a postscarcity harmony, in which citizens enjoy decentralized access to free-flowing information, it is necessary, however, to emphasise how and where actually existing cybernetic capitalism presents a gross caricature of such a postscarcity society.

 

One of the stories told by the critical left about new cultural technologies is that of monolithic, panoptical social control, effortlessly achieved through a smooth, endlessly interlocking system of networks of surveillance. In this narrative, information technology is seen as the most despotic mode of domination yet, generating not just a revolution in capitalist production but also a revolution in living–“social Taylorism”–that touches all cultural and social spheres in the home and in the workplace.25 Through routine gathering of information about transactions, consumer preferences, and creditworthiness, a harvest of information about any individual’s whereabouts and movements, tastes, desires, contacts, friends, associates, and patterns of work and recreation becomes available in the form of dossiers sold on the tradable information market, or is endlessly convertible into other forms of intelligence through computer matching. Advanced pattern recognition technologies facilitate the process of surveillance, while data encryption protects it from public accountability.26
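The “computer matching” just mentioned is, mechanically, nothing more than a join on a shared identifier. A minimal sketch, with wholly invented records, shows how separately gathered transactional data collapses into a single dossier:

```python
# Illustrative "computer matching": records gathered independently in the
# transactional sphere are joined on a shared key to yield a composite
# dossier. All identifiers and records below are invented.

purchases = [
    {"ssn": "078-05-1120", "store": "bookshop", "item": "radical quarterly"},
    {"ssn": "078-05-1120", "store": "pharmacy", "item": "prescription"},
]
subscriptions = [
    {"ssn": "078-05-1120", "magazine": "Processed World"},
]

def match(*record_sets, key="ssn"):
    """Merge any number of unrelated record sets into per-key dossiers."""
    dossiers = {}
    for records in record_sets:
        for rec in records:
            dossier = dossiers.setdefault(rec[key], [])
            dossier.append({k: v for k, v in rec.items() if k != key})
    return dossiers

dossiers = match(purchases, subscriptions)
```

The point the sketch makes is that no single database need contain the profile for the profile to exist: one tradable key is enough to convert scattered transactions into intelligence.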

 

While the debate about privacy has triggered public consciousness about these excesses, the liberal discourse about ethics and damage control in which that debate has been conducted falls short of the more comprehensive analysis of social control and social management offered by left political economists. According to one marxist analysis, information is a new kind of commodity resource that marks a break with past modes of production and is becoming the essential site of capital accumulation in the world economy. What happens, then, in the process by which information, gathered up by data scavenging in the transactional sphere, is systematically converted into intelligence? A surplus value is created for use elsewhere. This surplus information value is more than is needed for public surveillance; it is often information, or intelligence, culled from consumer polling or statistical analysis of transactional behavior, that has no immediate use in the process of routine public surveillance. Indeed, it is this surplus, bureaucratic capital that is used for the purpose of forecasting social futures, and consequently applied to the task of managing the behavior of mass or aggregate units within those social futures. This surplus intelligence becomes the basis of a whole new industry of futures research which relies upon computer technology to simulate and forecast the shape, activity, and behavior of complex social systems. The result is a possible system of social management that far transcends the questions about surveillance that have been at the discursive center of the privacy debate.27

 

To further challenge the idealists’ vision of postindustrial light and magic, we need only look inside the semiconductor workplace itself, which is home to the most toxic chemicals known to man (and woman, especially since women of color often make up the majority of the microelectronics labor force), and where worker illness is measured not in quantities of blood spilled on the shop floor but in the less visible forms of chromosome damage, shrunken testicles, miscarriages, premature deliveries, and severe birth defects. In addition to the extraordinarily high stress patterns of VDT operators, semiconductor workers exhibit an occupational illness rate that even by the late seventies was three times higher than that of manufacturing workers, at least until the federal rules for recognizing and defining levels of injury were changed under the Reagan administration. Protective gear is designed to protect the product and the clean room from the workers, and not vice versa. Recently, immunological health problems have begun to appear that can be described only as a kind of chemically induced AIDS, rendering the T-cells dysfunctional rather than depleting them like virally induced AIDS.28 In corporate offices, the use of keystroke software to monitor and pace office workers has become a routine part of job performance evaluation programs. Some 70 percent of corporations use electronic surveillance or other forms of quantitative monitoring on their workers. Every bodily movement can be checked and measured, especially trips to the toilet. Federal deregulation has meant that the limits of employee work space have shrunk, in some government offices, below that required by law for a two-hundred-pound laboratory pig.29 Critics of the labor process seem to have sound reasons to believe that rationalization and quantification are at last entering their most primitive phase.
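The keystroke monitoring described above requires almost no technical sophistication, which is part of the point. A hypothetical sketch (the quota and the logged timestamps are invented, not drawn from any actual monitoring product) shows how a pacing program might score an operator:

```python
# Sketch of keystroke pacing: score an operator's rate from timestamped
# keystrokes against a management quota. Quota and log are invented.

def keys_per_minute(timestamps):
    """Keystrokes per minute over the logged interval (timestamps in seconds)."""
    if len(timestamps) < 2:
        return 0.0
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / elapsed * 60.0

def evaluate(timestamps, quota=200.0):
    """Return the rate and whether it meets the (hypothetical) quota."""
    rate = keys_per_minute(timestamps)
    return {"rate": round(rate, 1), "meets_quota": rate >= quota}

# Ten keystrokes logged over three seconds: 180 per minute, below quota.
log = [0.0, 0.3, 0.6, 1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0]
```

That a performance evaluation can be reduced to a dozen lines of arithmetic is precisely what makes such quantitative monitoring so cheap to routinize.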

 

These, then, are some of the features of the critical left position–or what is sometimes referred to as the “paranoid” position–on information technology, which imagines or constructs a totalizing, monolithic picture of systematic domination. While this story is often characterized as conspiracy theory, its targets–technorationality, bureaucratic capitalism–are usually too abstract to fit the picture of a social order planned and shaped by a small, conspiring group of centralized power elites. Although I believe that this story, when told inside and outside the classroom, for example, is an indispensable form of “consciousness-raising,” it is not always the best story to tell.

 

While I am not comfortable with the “paranoid” labelling, I would argue that such narratives do little to discourage paranoia. The critical habit of finding unrelieved domination everywhere has certain consequences, one of which is to create a siege mentality, reinforcing the inertia, helplessness, and despair that such critiques set out to oppose in the first place. What follows is a politics that can speak only from a victim’s position. And when knowledge about surveillance is presented as systematic and infallible, self-censoring is sure to follow. In the psychosocial climate of fear and phobia aroused by the virus scare, there is a responsibility not to be alarmist or scared, especially when, as I have argued, such moments are profitably seized upon by the sponsors of control technology. In short, the picture of a seamlessly panoptical network of surveillance may be the result of a rather undemocratic, not to mention unsocialistic, way of thinking, predicated upon the recognition of people solely as victims. It is redolent of the old sociological models of mass society and mass culture, which cast the majority of society as passive and lobotomized in the face of the cultural patterns of modernization. To emphasize, as Robins and Webster and others have done, the power of the new technologies to despotically transform the “rhythm, texture, and experience” of everyday life, and meet with no resistance in doing so, is not only to cleave, finally, to an epistemology of technological determinism, but also to dismiss the capacity of people to make their own uses of new technologies.30

 

The seamless “interlocking” of public and private networks of information and intelligence is not as smooth and even as the critical school of hard domination would suggest. In any case, compulsive gathering of information is no guarantee that any interpretive sense will be made of the files or dossiers, while some would argue that the increasingly covert nature of surveillance is a sign that the “campaign” for social control is not going well. One of the most pervasive popular arguments against the panoptical intentions of the masters of technology is that their systems do not work. Every successful hack or computer crime in some way reinforces the popular perception that information systems are not infallible. And the announcements of military-industrial spokespersons that the fully automated battlefield is on its way run up against an accumulated stock of popular skepticism about the operative capacity of weapons systems. These misgivings are born of decades of distrust for the plans and intentions of the military-industrial complex, and were quite evident in the widespread cynicism about the Strategic Defense Initiative. Just to take one empirical example of unreliability, the military communications system worked so poorly and so farcically during the U.S. invasion of Grenada that commanders had to call each other on pay phones: ever since then, the command-and-control code of ARPAnet technocrats has been C5–Command, Control, Communication, Computers, and Confusion.31 It could be said, of course, that the invasion of Grenada did, after all, succeed, but the more complex and inefficiency-prone such high-tech invasions become (Vietnam is still the best example), the less likely they are to be undertaken with any guarantee of success.

 

I am not suggesting that alternatives can be forged simply by encouraging disbelief in the infallibility of existing technologies (pointing to examples of the appropriation of technologies for radical uses, of course, always provides more visibly satisfying evidence of empowerment), but technoskepticism, while not a sufficient condition of social change, is a necessary condition. Stocks of popular technoskepticism are crucial to the task of eroding the legitimacy of those cultural values that prepare the way for new technological developments: values and principles such as the inevitability of material progress, the “emancipatory” domination of nature, the innovative autonomy of machines, the efficiency codes of pragmatism, and the linear juggernaut of liberal Enlightenment rationality–all increasingly under close critical scrutiny as a wave of environmental consciousness sweeps through the electorates of the West. Technologies do not shape or determine such values. These values already exist before the technologies, and the fact that they have become deeply embodied in the structure of popular needs and desires then provides the green light for the acceptance of certain kinds of technology. The principal rationale for introducing new technologies is that they answer to already existing intentions and demands that may be perceived as “subjective” but that are never actually within the control of any single set of conspiring individuals. As Marike Finlay has argued, just as technology is only possible in given discursive situations, one of which is the desire of people to have it for reasons of empowerment, so capitalism is merely the site, and not the source, of the power that is often autonomously attributed to the owners and sponsors of technology.32

 

In fact, there is no frame of technological inevitability that has not already interacted with popular needs and desires, no introduction of new machineries of control that has not already been negotiated to some degree in the arena of popular consent. Thus the power to design architecture that incorporates different values must arise from the popular perception that existing technologies are not the only ones, nor are they the best when it comes to individual and collective empowerment. It was this kind of perception–formed around the distrust of big, impersonal, “closed” hardware systems, and the desire for small, decentralized, interactive machines to facilitate interpersonal communication–that “built” the PC out of hacking expertise in the early seventies. These were as much the partial “intentions” behind the development of microcomputing technology as deskilling, monitoring, and information gathering are the intentions behind the corporate use of that technology today. The growth of public data networks, bulletin board systems, alternative information and media links, and the increasing cheapness of desktop publishing, satellite equipment, and international data bases are as much the result of local political “intentions” as the fortified net of globally linked, restricted-access information systems is the intentional fantasy of those who seek to profit from centralised control. The picture that emerges from this mapping of intentions is not an inevitably technofascist one, but rather the uneven result of cultural struggles over values and meanings.

 

It is in this respect–in the struggle over values and meanings–that the work of cultural criticism takes on its special significance as a full participant in the debate about technology. In fact, cultural criticism is already fully implicated in that debate, if only because the culture and education industries are rapidly becoming integrated within the vast information service conglomerates. The media we study, the media we publish in, and the media we teach within are increasingly part of the same tradable information sector. So, too, our common intellectual discourse has been significantly affected by the recent debates about postmodernism (or culture in a postindustrial world) in which the euphoric, addictive thrill of the technological sublime has figured quite prominently. The high-speed technological fascination that is characteristic of the postmodern condition can be read, on the one hand, as a celebratory capitulation on the part of intellectuals to the new information technocultures. On the other hand, this celebratory strain attests to the persuasive affect associated with the new cultural technologies, to their capacity (more powerful than that of their sponsors and promoters) to generate pleasure and gratification and to win the struggle for intellectual as well as popular consent.

 

Another reason for the involvement of cultural critics in the technology debates has to do with our special critical knowledge of the way in which cultural meanings are produced–our knowledge about the politics of consumption and what is often called the politics of representation. This is the knowledge which demonstrates that there are limits to the capacity of productive forces to shape and determine consciousness. It is a knowledge that insists on the ideological or interpretive dimension of technology as a culture which can and must be used and consumed in a variety of ways that are not reducible to the intentions of any single source or producer, and whose meanings cannot simply be read off as evidence of faultless social reproduction. It is a knowledge, in short, which refuses to add to the “hard domination” picture of disenfranchised individuals watched over by some scheming panoptical intelligence. Far from being understood solely as the concrete hardware of electronically sophisticated objects, technology must be seen as a lived, interpretive practice for people in their everyday lives. To redefine the shape and form of that practice is to help create the need for new kinds of hardware and software.

 

One of the latter aims of this essay has been to describe and suggest a wider set of activities and social locations than is normally associated with the practice of hacking. If there is a challenge here for cultural critics, then it might be presented as the challenge to make our knowledge about technoculture into something like a hacker’s knowledge, capable of penetrating existing systems of rationality that might otherwise be seen as infallible; a hacker’s knowledge, capable of reskilling, and therefore of rewriting the cultural programs and reprogramming the social values that make room for new technologies; a hacker’s knowledge, capable also of generating new popular romances around the alternative uses of human ingenuity. If we are to take up that challenge, we cannot afford to give up what technoliteracy we have acquired in deference to the vulgar faith that tells us it is always acquired in complicity, and is thus contaminated by the poison of instrumental rationality, or because we hear, often from the same quarters, that acquired technological competence simply glorifies the inhuman work ethic. Technoliteracy, for us, is the challenge to make a historical opportunity out of a historical necessity.

 

Notes

 

1. Bryan Kocher, “A Hygiene Lesson,” Communications of the ACM, 32.1 (January 1989): 3.

 

2. Jon A. Rochlis and Mark W. Eichen, “With Microscope and Tweezers: The Worm from MIT’s Perspective,” Communications of the ACM, 32.6 (June 1989): 697.

 

3. Philip Elmer-DeWitt, “Invasion of the Body Snatchers,” Time (26 September 1988): 62-67.

 

4. Judith Williamson, “Every Virus Tells a Story: The Meaning of HIV and AIDS,” in Taking Liberties: AIDS and Cultural Politics, ed. Erica Carter and Simon Watney (London: Serpent’s Tail/ICA, 1989), 69.

 

5. “Pulsing the system” is a well-known intelligence process in which, for example, planes deliberately fly over enemy radar installations in order to determine what frequencies they use and how they are arranged. It has been suggested that Morris Sr. and Morris Jr. worked in collusion as part of an NSA operation to pulse the Internet system, and to generate public support for a legal clampdown on hacking. See Allan Lundell, Virus! The Secret World of Computer Invaders That Breed and Destroy (Chicago: Contemporary Books, 1989), 12-18. As is the case with all such conspiracy theories, no actual conspiracy need have existed for the consequences–in this case, the benefits for the intelligence community–to have been more or less the same.

 

6. For details of these raids, see 2600: The Hacker’s Quarterly, 7.1 (Spring 1990): 7.

 

7. “Hackers in Jail,” 2600: The Hacker’s Quarterly, 6.1 (Spring 1989): 22-23. The recent Secret Service action that shut down Phrack, an electronic newsletter operating out of St. Louis, confirms 2600‘s thesis: a nonelectronic publication would not be censored in the same way.

 

8. This is not to say that the new laws cannot themselves be used to protect hacker institutions, however. 2600 has advised operators of bulletin boards to declare them private property, thereby guaranteeing protection under the Electronic Communications Privacy Act against unauthorized entry by the FBI.

 

9. Hugo Cornwall, The Hacker’s Handbook, 3rd ed. (London: Century, 1988), 181, 2-6. In Britain, for the most part, hacking is still looked upon as a matter for the civil, rather than the criminal, courts.

 

10. Discussions about civil liberties and property rights, for example, tend to preoccupy most of the participants in the electronic forum published as “Is Computer Hacking a Crime?” in Harper’s, 280.1678 (March 1990): 45-57.

 

11. See Hugo Cornwall, Data Theft (London: Heinemann, 1987).

 

12. Bill Landreth, Out of the Inner Circle: The True Story of a Computer Intruder Capable of Cracking the Nation’s Most Secure Computer Systems (Redmond, Wash.: Tempus, Microsoft, 1989), 10.

 

13. The Computer Worm: A Report to the Provost of Cornell University on an Investigation Conducted by the Commission of Preliminary Enquiry (Ithaca, N.Y.: Cornell University, 1989).

 

14. The Computer Worm: A Report to the Provost, 8.

 

15. A. K. Dewdney, the “Computer Recreations” columnist at Scientific American, was the first to publicize the details of this game of battle programs, in an article in the May 1984 issue of the magazine. In a follow-up article in March 1985, “A Core War Bestiary of Viruses, Worms, and Other Threats to Computer Memories,” Dewdney described the wide range of “software creatures” which readers’ responses had brought to light. A third column, in March 1989, was written in an exculpatory mode, to refute any connection between his original advertisement of the Core War program and the spate of recent viruses.

 

16. Andrew Ross, No Respect: Intellectuals and Popular Culture (New York: Routledge, 1989), 212. Some would argue, however, that the ideas and values of the sixties counterculture found their fullest expression in groups like the People’s Computer Company, which ran Community Memory in Berkeley, or the Homebrew Computer Club, which pioneered personal microcomputing. So, too, the Yippies had seen the need to form YIPL, the Youth International Party Line, devoted to “anarcho-technological” projects, which put out a newsletter called TAP (alternately the Technological American Party and the Technological Assistance Program). In its depoliticised form, which eschewed the kind of destructive “dark-side” hacking advocated in its earlier incarnation, TAP was eventually the progenitor of 2600. A significant turning point, for example, was TAP‘s decision not to publish plans for the hydrogen bomb (which the Progressive did)–bombs would destroy the phone system, which the TAP phone phreaks had an enthusiastic interest in maintaining.

 

17. See Alice Bach’s Phreakers series, in which two teenage girls enjoy adventures through the use of computer technology: The Bully of Library Place, Parrot Woman, Double Bucky Shanghai, and Ragwars (all published by Dell, 1987-88).

 

18. John Markoff, “Cyberpunks Seek Thrills in Computerized Mischief,” New York Times, November 26, 1988.

 

19. Dennis Hayes, Behind the Silicon Curtain: The Seductions of Work in a Lonely Era (Boston: South End Press, 1989), 93. One striking historical precedent for the hacking subculture, suggested to me by Carolyn Marvin, was the widespread activity of amateur or “ham” wireless operators in the first two decades of the century. Initially lionized in the press as boy-inventor heroes for their technical ingenuity and daring adventures with the ether, this white middle-class subculture was increasingly demonized by the U.S. Navy (whose signals the amateurs prankishly interfered with), which was crusading for complete military control of the airwaves in the name of national security. The amateurs lobbied with democratic rhetoric for the public’s right to access the airwaves, and although partially successful in their case against the Navy, lost out ultimately to big commercial interests when Congress approved the creation of a broadcasting monopoly after World War I in the form of RCA. See Susan J. Douglas, Inventing American Broadcasting 1899-1922 (Baltimore: Johns Hopkins University Press, 1987), 187-291.

 

20. “Sabotage,” Processed World, 11 (Summer 1984), 37-38.

 

21. Hayes, Behind the Silicon Curtain, 99.

 

22. The Amateur Computerist, available from R. Hauben, P.O. Box 4344, Dearborn, MI 48126.

 

23. Kevin Robins and Frank Webster, “Athens Without Slaves…Or Slaves Without Athens? The Neurosis of Technology,” Science as Culture, 3 (1988): 7-53.

 

24. See Boris Frankel, The Post-Industrial Utopians (Oxford: Basil Blackwell, 1987).

 

25. See, for example, the collection of essays edited by Vincent Mosco and Janet Wasko, The Political Economy of Information (Madison: University of Wisconsin Press, 1988), and Dan Schiller, The Information Commodity (Oxford: Oxford University Press, forthcoming).

 

26. Tom Athanasiou and Staff, “Encryption and the Dossier Society,” Processed World, 16 (1986): 12-17.

 

27. Kevin Wilson, Technologies of Control: The New Interactive Media for the Home (Madison: University of Wisconsin Press, 1988), 121-25.

 

28. Hayes, Behind the Silicon Curtain, 63-80.

 

29. “Our Friend the VDT,” Processed World, 22 (Summer 1988): 24-25.

 

30. See Kevin Robins and Frank Webster, “Cybernetic Capitalism,” in Mosco and Wasko, 44-75.

 

31. Barbara Garson, The Electronic Sweatshop (New York: Simon & Schuster, 1988), 244-45.

 

32. See Marike Finlay’s Foucauldian analysis, Powermatics: A Discursive Critique of New Technology (London: Routledge & Kegan Paul, 1987). A more conventional culturalist argument can be found in Stephen Hill, The Tragedy of Technology (London: Pluto Press, 1988).