Legitimating Genetic Engineering
Susan Wright - Foundation for the Study of Independent Social Ideas, Inc. - March 2001
The field of biotechnology was launched in the early 1970s when the ability to perform controlled genetic engineering was first demonstrated at Stanford University by a graduate student in biochemistry, Peter Lobban, and independently by the chair of his department, Paul Berg, and colleagues.
Responses spanned the spectrum from optimistic anticipations of a cornucopia of new products to horror that anyone would contemplate putting tumor virus DNA into a common bacterium (a further experiment contemplated by Berg). Applications from novel plants and animals to novel biological weapons were predicted.
As these possibilities were realized in the 1980s and 1990s, resistance elsewhere in the world contrasted with the remarkably passive acceptance of the technology in the United States. The Clinton administration even got away with the argument that labels for genetically engineered foods were unnecessary.
News that the former Soviet Union had used its immense biological warfare resources to develop bioweapons produced only a momentary blip in the collective American consciousness. Now that a public debate is emerging in the United States, it may be useful to examine how genetic engineering originally achieved legitimacy.

"If I am to convince you that it is really in your interests for me to be self-interested, then I can only be effectively self-interested by becoming less so." -- Terry Eagleton, Ideology: An Introduction (1)
In the early 1970s, the community of molecular biologists responded to the controversial new development by making a series of decisions that resulted in an international scientific conference at the Asilomar Conference Center, California, in February 1975. Whatever the state of debate about genetic engineering -- and twenty-five years later the debate remains substantially unresolved -- this conference established the pattern of American policy for controlling the field and served as an influential precedent for policy making abroad.
There is a curious tension in accounts of the Asilomar conference. The conference has been lauded as an exceptional event in which scientists voluntarily sacrificed immediate progress in their research in order to ensure that the field would develop safely. At the same time, many, perhaps most, of the participants resisted questions raised about the implications of their work and simply wanted to proceed. Self-interest, not altruism, was most evident at Asilomar.
Eyewitness accounts (and the conference tapes) make it clear that all moves to address the social problems posed by this field in advance of its development were firmly suppressed.
The tension disappears, however, if we understand Asilomar as an effort to justify a form of technology that is likely to be socially disruptive. In essence, Asilomar was about fashioning a set of beliefs for the American people and their representatives in Congress that would allow scientists to pursue genetic engineering under a system of self-governance. Equally important, it was about persuading the scientists to accept the degree of self-sacrifice that was needed for such a strategy to be effective.
The Politics of Asilomar
The decisions that led up to the Asilomar meeting were made exclusively within the scientific community. Responding to questions emerging among the elite scientists conversant with genetic engineering in the early seventies, the National Academy of Sciences (NAS) and the Institute of Medicine turned to Paul Berg for advice on short- and long-term policy. Berg in turn assembled a group of leading molecular biologists and biochemists actually or potentially involved in the new field.
Meeting at the Massachusetts Institute of Technology in April 1974, the Berg committee produced three main recommendations: first, a pause in some experiments; second, the Asilomar conference; and finally, in a move whose political significance was largely missed at the time, a proposal to the director of the National Institutes of Health (NIH) to establish an advisory committee to explore the hazards of the new field, develop procedures for minimizing them, and draft guidelines for research. All these recommendations were acted upon.
By the time participants gathered at Asilomar, the NIH committee (composed almost exclusively of people with actual or potential interests in genetic engineering) had been appointed, defined as a "technical committee," and charged with investigating the hazards and, on that basis, developing guidelines for NIH grantees. The committee held its first meeting the day after the conference ended.(2)
The formal task for the international conference proposed by the Berg committee was to "review scientific progress in [genetic engineering] and . . . discuss appropriate ways to deal with" the potential biohazards. The informal agenda that became apparent during the course of the meeting and afterward, when its proposals were challenged by members of Congress, was to build support inside and outside the Asilomar meeting for the course of action already decided on: the treatment of genetic engineering as a technical issue, to be assessed and controlled primarily by those close to the new field, under the auspices of the NIH, the primary funding agency for biomedical research. In essence, the goal was self-governance.
But achieving that goal was a formidable political challenge for several reasons. First, there was close to absolute ignorance about the possible impact of genetic engineering techniques, reflected in the wide spectrum of creative scenario spinning in the early 1970s. No one could predict what the problems were likely to be, and some scientists said as much. This huge uncertainty was also frankly acknowledged in the organizing committee's final report. Ironically, advances made possible by genetic engineering have generated even deeper uncertainties.
Second, although no one could predict the possible hazards of the techniques, there were some clear ideas about future applications, barring insuperable technical obstacles. As well-informed people were aware by this time, genetic engineering was headed toward industrial, agricultural, and military uses. Virtually all those involved in the early experimental work had anticipated that success would lead to practical applications. As early as 1974, various institutions were already making moves to profit from them, if and when they happened. Stanford University had filed for patents on behalf of Stanley Cohen and Herbert Boyer in November 1974; the British transnational corporation ICI had launched a joint program with Edinburgh University to pursue genetic engineering research and development; American pharmaceutical companies had established small research programs to follow the field, and so forth.
But such anticipation brought important social consequences. As the British molecular biologist Sydney Brenner (also a member of the Asilomar Organizing Committee) had argued in testimony to the British committee appointed to investigate genetic engineering in 1974, if the technology proved viable, then it would likely be of great interest to institutions like the military and major drug companies, "which can, and probably do, practice secrecy in their activities" -- a prediction that turned out to be completely accurate. (And hence the transformation of genetic engineering from an amazingly transparent field into one shrouded in corporate and military secrecy.)(3)
Third, there was the problem posed by the attentive sectors of the American and other publics around the world, especially those of Western democracies. In the sixties and early seventies, a surge of interest in health, safety, and environmental issues focused public attention on the consequences of scientific and technological advances. In the same period, the antiwar movement alerted the public to the dangers of chemical and biological warfare -- in particular, the immense damage caused by the use of herbicides and incapacitating agents in Vietnam.
In 1972, the molecular biologist Jonathan Beckwith, a member of the public interest organization Science for the People, issued an eloquent warning about the abuse of science, concluding, as did others in this period -- Rachel Carson, Ralph Nader, Barry Commoner -- with a call for greater public participation in decisions on the development and use of science.
Such calls sent chills down the collective spines of science establishments in the United States, the United Kingdom, and elsewhere. Scientific elites used to managing their own affairs reacted with fear and sometimes with outright attacks on statements such as Beckwith's. "Anti-science," "Luddism," "abolitionism," and "pessimism" became code words for the enemy inside and outside the scientific community that had to be neutralized.
In 1971, a gathering of scientific leaders from the United Kingdom and the United States at the CIBA Foundation in London (a private foundation supported by the Swiss multinational then known as CIBA-Geigy) worried over these concerns in the following terms: "If the unaccredited [sic] public becomes involved in debate on matters as close to the boundary between science and trans-science as the direction of biological research, is there some danger that the integrity of the Republic of Science [sic] will be eroded?"
The dilemma facing scientific elites in the West was that if the public was not involved in decisions on the control of science, it might decide to withdraw its extraordinarily generous support. (Funding for science in the 1960s had increased exponentially, but the leveling off in the late sixties and early seventies had shown scientists that politicians could be fickle.) On the other hand, if the public were to be involved actively in the governance of science, it might move to limit or even close down fields that it found problematic.
Alan Bullock, then vice chancellor of Oxford University, expressed this fear at the CIBA gathering: "I can hear people saying: 'We just can't tolerate the problems that are going to be created by genetic engineering, and we will shut it down as a gift too destructive to the . . . ordinary mores of human life.' . . . This is the dilemma."
And finally, there was the problem of restraining runaway scientists who were skeptical of any research controls and wanted to proceed full steam ahead. In 1973, a call for controls on the handling and dissemination of hybrid viruses had met with strong resistance from leading scientists on the grounds that, even though informal and self-imposed, they were an unjustifiable restriction of freedom of inquiry.
Correspondence among molecular biologists in the year or so before the 1975 Asilomar conference reveals similar positions in influential quarters. Particularly in the United States, where research laboratories were generally not unionized, heads of labs tended to resist external
intervention in their research.
The Asilomar organizers were thus faced with the formidable task of forging a consensus -- inside and outside the molecular biology community -- that the road already being taken in the United States was the right way to meet (or avoid) all the actual and potential problems. That task might be described as steering a middle course between the Scylla of a public that could be aroused to close down genetic engineering and the Charybdis of "rogue" scientists who would refuse to accept any controls. But such a description would be misleading, because it suggests that the Asilomar organizers played the politically neutral role of moderating between opposed political camps.
On the contrary, the organizers were pursuing an agenda: to persuade the American public that genetic engineering was under control, that the scientists responsible for developing this technology knew what they were doing, and that responsibility for the future development of the field was best left in their hands.
To achieve these goals, the Asilomar organizing committee had, first, to elicit support from the conference participants and, second, to fashion an image of the problem that would persuade the attentive public, especially watchful members of Congress such as Senator Edward Kennedy (D-Mass.), that their (the organizers') solution was the best possible choice for Americans in general, not just American molecular biologists.
Forging the Internal Consensus
The first challenge, then, was that of unifying the voice with which molecular biologists spoke to the outside world. But there was no unified voice in 1975. Before the conference, the views of molecular biologists on genetic engineering were all over the map: some denied that there was a problem; some held that the field might pose serious risks; some were focused on issues like biological warfare.
The Asilomar organizers addressed this difficulty partly by restricting invitations. Most of the participants were in the mainstream of molecular biology. Relatively few were likely to insist on addressing problematic social applications of their field. Those from abroad saw the conference as an American event, directed largely although not exclusively toward an American audience. Only one member of a public-interest organization was invited, and when he was unable to attend, no one was formally invited in his place. There were no representatives from fields such as ecology or evolutionary genetics who might have contributed different disciplinary perspectives. There were no social scientists, no ethicists, no trade unionists (with the exception of one or two biologists from countries like the United Kingdom, where scientists sometimes opted to join technical unions). There were indeed several lawyers, and they asked some potentially disruptive questions.
Even with these limitations on participation, positions on the genetic engineering problem varied. A second move on the part of the organizers was to restrict the agenda, excluding the awkward questions of biological warfare and human genetic engineering that molecular biologists obviously had no more claim to pronounce on than other people. At the outset, David Baltimore, one of the co-chairs, declared, "This meeting is not designed to deal with [these questions]."
Indeed, it was in the interests of the organizers to keep those thorny questions of social use off the Asilomar agenda, because the answers led far away from the NIH and into antiwar, religious, ethical, social, and political arenas. Excluding social dimensions was essential to the goal of self-governance.
Nevertheless, there were deep tensions at the conference. Science reporters at the meeting wrote that the majority of participants supported the proposed solution of guidelines and safety precautions; they wanted to see the partial moratorium proposed by the Berg committee lifted so that they could get back to work. But there were several nonconformist proposals that worked against such an outcome.
First, the issue of biological warfare refused to go away. A conference panel on plasmid biology chaired by Richard Novick, a veteran of antiwar efforts to dismantle the U.S. biological and chemical weapons programs, issued a statement calling for a ban, as soon as possible, on "the construction of genetically altered organisms for any military purpose."
Second, Science for the People sent an open letter calling for broad public participation in the formulation of public policy for genetic engineering and specifically recommended the involvement of people most at risk -- technicians, students, and custodial staff.
Third, NIH virologist Andrew Lewis, in a minority statement attached to the report of the conference panel on animal viruses, called for a cautious, step-by-step appraisal of the hazards before work on animal virus DNA proceeded. And finally, one of the lawyers, Alexander Capron, argued that although the NIH might make an initial assessment, the public should be represented in policy making and should look beyond the narrow question of safety to broader questions of human impact. These views threatened to disrupt the middle ground held by the majority; their reception was cool.
There was also the problem of how to handle those who resisted the idea of any restriction on their work. Two moves were critical for bringing the recalcitrant into line and, more generally, for strengthening the consensus and ensuring that it would be supported beyond the conference. The first, energetically promoted by Sydney Brenner and taken up by microbiologist Roy Curtiss, was the idea of doing genetic engineering in weakened strains of E. coli, designed to be "unable to survive outside the test-tube." This proposal did several things. It diverted attention away from difficult social problems. It redefined the genetic engineering problem as a technical one. And it defined scientists as the only people who could provide a solution.
Suddenly the talk was of "disarming the bug," and once one accepted that creating a safe bacterial host would solve the safety problem, it became obvious that scientists who knew how to do such things should decide. (No matter that the bug turned out to be rather less disarmed than people had hoped, as well as a tiresome nuisance to use: that came later, and would be resolved eventually by abandoning its use.) Optimistic talk of mutant strains and disarmed vectors took over and made it difficult for nonbiologists to engage in increasingly arcane discourse. The gene engineers' New Atlantis edged toward reality.
The second move was to marginalize the views of those who wanted no controls. The organizers were clear that this could be a direct invitation to politicians to intervene. As co-chair Paul Berg reminded the participants: "If our recommendations look self-serving, we will run the risk of having standards imposed. We must start high and work down. We can't say that 150 scientists spent four days at Asilomar and all of them agreed that there was a hazard -- that they couldn't come up with a single suggestion. That's telling the government to do it for us."
These sentiments received strong reinforcement from Brenner: "I think people have got to realize there is no easy way out of this situation: we have not only to say we are going to act, but we must be seen to be acting."
These moves by the organizers provided both a practical agenda (developing the disarmed bug and controls for its use) and a theoretical justification ("safety") that played the crucial role of unifying a group that initially threatened to fragment into unruly factions. It was crucial, as Berg and Brenner emphasized, that the scientists emerge from the meeting united on a coherent agenda.
Persuading the Attentive Public
Beyond unifying the conference, the Asilomar agenda had a second essential function: to persuade the public that the values and interests of the Asilomar conference were really the values and interests of the whole society. Two themes were crucial. As Berg and Brenner admonished the participants, scientists had to be seen to sacrifice their intense interest in moving ahead. The idea of a moratorium on the most dangerous experiments, the idea that the controls would be extremely rigorous (at least, for a while), the idea that scientists would patiently wait until the disarmed bugs became available and that they were willing to subject their proposals to the scrutiny of the NIH
Recombinant DNA Advisory Committee -- these signs of responsibility and sacrifice could be used to reassure politicians that scientists were the most qualified people to address the genetic engineering problem.
That the sacrifice was real was critical for the success of the effort: As Terry Eagleton says, "If I am to convince you that it is really in your interests for me to be self-interested, then I can only be effectively self-interested by becoming less so."
The other theme was that of benefits to society. The public had to be persuaded that if it allowed the scientists to regulate themselves, the fruits of genetic engineering would benefit everyone, not just scientists; and conversely, if the field were restricted by external intervention, everyone would lose.
In the press interviews, Asilomar conferees played benefits against hazards. They projected the benefits as real and described the hazards as "potential" and therefore less real. And the menace of the future use of genetic engineering for biological warfare (which leading molecular biologists discussed at more obscure conferences and with their governments) was pushed off into the shadows.
All these strategies helped to promote the Asilomar consensus. Dissident scientists and informed members of the wider public who raised questions met with resistance from biomedical
researchers. When, following the conference, bio-ethicist Willard Gaylin told the Senate Subcommittee on Health that genetic engineering was too important for decisions to be made exclusively by scientists, he was sharply rebuked by one of the Asilomar organizers for his "antiscientific" position. And when Senator Edward Kennedy proposed legislation that would have taken decision-making authority out of the NIH and given it to a freestanding commission, the biomedical research community launched one of the most intense and widely coordinated
lobbying efforts that Congress had ever seen. As an aide to Senator Kennedy remarked in 1978, biologists behaved like "an affected industry." When persuasion didn't work, biomedical researchers used their considerable lobbying power to maintain the Asilomar consensus.
Benefits and Biological Weapons
A counter-argument to this critique of Asilomar is that, after all, no monsters have crawled out of genetic engineering laboratories or industries, and that, on the contrary, genetic engineering really has yielded benefits. In other words, experience confirms the wisdom of the Asilomar process.
I accept the contention that there have been benign uses of genetic engineering. Lifesaving medical products that are difficult if not impossible to make in other ways are examples. Far more strenuous efforts could be directed toward vaccine development, but unfortunately this is not an application that promises large profits for transnational drug corporations.
Use of genetic engineering and gene sequencing in research has also produced important advances in understanding the nature of genes and genomes. In particular, we now understand a great deal more about the complexity, fluidity, and adaptability of genomes -- to an extent that was unthinkable at the time of the 1975 conference.
An irony of these results is that they challenge the mechanistic linearity of the Watson-Crick model of DNA that provided the conceptual basis of genetic engineering in the first place and is still the basis for the widely influential claim underlying U.S. regulatory policy: that the results of gene engineering are predictable.
That justification for the safety of gene engineering is being severely tested by what we know today about the dangers and actual harm of genetic engineering in humans, by the ecological dangers of genetically engineered agricultural plants, by the demonstrated applications of genetic engineering for biological warfare purposes in the former Soviet Union, and, more subtly, in the United States.
So although there have indeed been benefits from genetic engineering, that does not mean that the course charted at Asilomar was wise. Far from it: the strong moves to legitimate genetic technology allowed harmful applications to advance outside the reach of either national or international controls. The key example comes from the field of biological warfare.
By the late 1960s, some molecular biologists close to the U.S. government were aware of the likely interest of military establishments, if it proved feasible to control the nature and behavior of micro-organisms. Joshua Lederberg, an Asilomar participant who resisted controls, warned the Geneva Conference on Disarmament about this very possibility in 1970.
Other biologists inside and outside the U.S. government recognized this potential early on. The U.K. and U.S. governments were aware of these views, and the sense that molecular biology might eventually overcome some of the existing weaknesses of biological weapons was an important motive for the negotiation of the 1972 Biological Weapons Convention. The United States and the United Kingdom, both nuclear powers, deemed it prudent to try to remove the option of bioweaponry before these weapons proved attractive to lesser powers.
But the political compromises (influenced by military establishments) struck before and during the negotiation of the convention, guided almost exclusively by realpolitik, produced a weak legal instrument. The biologists who followed these developments closely must have been aware of the loopholes in the convention's ban on biological weapons -- the treaty's failure to address military research, the lack of a clearly defined boundary between offensive and defensive development in the treaty's basic prohibition, the absence of verification instruments, and so forth.
Both the military potential of genetic engineering and the loopholes in the Biological Weapons Convention through which research and development might pass were clear to well- informed molecular biologists from the outset. At Asilomar, the participants could have used their prestige and authority to call for a comprehensive, worldwide ban on the use of genetic engineering for enhancing the lethality of biological agents for any military purpose -- the call that Richard Novick and his colleagues on the Plasmid Group had pressed the participants to take up.
Instead, the issue was dismissed or marginalized whenever it arose. To raise the specter of new forms of biological warfare would have undermined the claims for a cornucopia of "benefits" that were so essential to legitimating genetic technology.
We now know that the former Soviet Union pursued a huge biological weapons program in the 1970s and 1980s, including genetic engineering of novel pathogens. (10) There is also good
evidence that the United States has promoted genetic engineering research on novel biological warfare agents that pushes the envelope of permissible biological activity.(11) In any case, such work stimulates military interest in other countries.
Could these programs have been prevented? It's impossible to know, because the attempt to block them was not made. But one thing is clear: the Asilomar leadership preferred not to contaminate a benign image of genetic engineering by addressing its most problematic application.
Myths of the Bargain
In order to move ahead with genetic engineering, the Asilomar scientists proposed a bargain with society: restrictions on their research in exchange for self-governance. They made the bargain palatable to the wider public by making it appear natural (scientists were uniquely qualified to understand the problem and develop a solution), rational (scientists were making a sacrifice and were therefore responsible), and universal (everyone would reap the benefits) -- and by marginalizing problems like the prospect of novel biological weapons or genetically engineered humans that clearly fell outside the scope of scientific ingenuity to address.
But both sides of the bargain are myths. It is a myth that most scientists working under competitive pressures can address the implications of their own work with dispassion and establish appropriately stringent controls -- any more than an unregulated Bill Gates can give competing browsers equal access to Windows. Sure enough, some five years later, the controls proposed at Asilomar and developed by the NIH were dismantled without anything like adequate knowledge of the hazards.
Along with the controls, the fundamental safety principle that genetically engineered organisms should be contained was jettisoned -- thereby generating the new problems of genetic contamination that we face today.
It is equally a myth that scientists in this field are self-governing. Even in the sixties and early seventies, the heyday of bountiful government support for molecular biology, research goals were substantially shaped by the needs of federal sponsors. Since the 1970s, as investments in genetic engineering by military agencies and pharmaceutical and agrichemical corporations have risen steeply, both research agendas and the conduct of research have been increasingly influenced (if not dictated) by these huge interests.
Needless to say, the legitimation of genetic engineering has taken a whole new turn.
Susan Wright is a historian of science at the University of Michigan. She directs an international research project entitled "Forming a North-South Alliance on Current Problems of Biological Warfare and Disarmament" supported by the John D. and Catherine T. MacArthur Foundation, the Ford Foundation, and the New England Biolabs Foundation.
1. For discussion of the process of legitimation, see Terry Eagleton, Ideology: An Introduction (London: Verso, 1991).
2. For further analysis and sources for the history of policy making for genetic engineering, see Susan Wright, Molecular Politics: Developing American and British Regulatory Policy for Genetic Engineering, 1972-1982 (Chicago: University of Chicago Press, 1994).
3. Susan Wright and David Wallace, "Varieties of Secrets and Secret Varieties: The Case of Biotechnology," in Judith Reppy, ed., Secrecy and Knowledge Production (Cornell University Peace Studies Program Occasional Paper #23, October 1999); revised version forthcoming in Politics and the Life Sciences.
4. Jonathan Beckwith, "The Scientist in Opposition in the United States," in Watson Fuller, ed., The Biological Revolution: Social Good or Social Evil? (London: Routledge and Kegan Paul, 1972), pp. 296-97.
5. Gordon Wolstenholme and Maeve O'Connor, eds., Civilization and Science: In Conflict or Collaboration? (Amsterdam: Elsevier, 1972), p. 113.
6. Sheldon Krimsky, Genetic Alchemy (Cambridge, Mass.: MIT Press, 1982), chs. 3-4; Wright, Molecular Politics, pp. 126-136.
7. For eyewitness accounts of the 1975 conference, see Michael Rogers, "The Pandora's Box Congress," Rolling Stone (June 19, 1975), pp. 15-19, 37-38; Nicholas Wade, "Genetics: Conference Sets Strict Controls to Replace Moratorium," Science 187 (March 4, 1975), pp. 931-35. The Recombinant DNA History Collection, Institute Archives, MIT, assembled by Charles Weiner, is a rich collection on the early history of the controversy, including the 1975 conference.
8. The claims in this section draw on archival research for "The Origins of the 1972 Biological Weapons Convention," forthcoming in Susan Wright, ed., Meeting the Challenges of Biological Warfare and Disarmament in the 21st Century.
9. For discussion of these issues, see Richard Falk, "Inhibiting Reliance on Biological Weaponry: The Role and Relevance of International Law," in Susan Wright, ed., Preventing a Biological Arms Race (Cambridge, Mass.: MIT Press, 1990), pp. 241-266.
10. Ken Alibek, Biohazard (New York: Random House, 1999); Anthony Rimmington, "Invisible Weapons of Mass Destruction: The Soviet Union's BW Program and its Implications for Contemporary Arms Control," forthcoming in Wright, Meeting the Challenges of Biological Warfare and Disarmament in the 21st Century.
11. See, for example, Charles Piller and Keith Yamamoto, Gene Wars: Military Control Over the New Genetic Technologies (New York: Beech Tree Books, 1988); Steven Aftergood, "The Soft-Kill Fallacy," Bulletin of the Atomic Scientists (September/October 1994), pp. 40-45.