GMOs pose systemic risks to ecologies and human health, writes the statistician, former derivatives trader, and risk expert Nassim Nicholas Taleb
The precautionary principle should be used to prescribe severe limits on GMOs, writes Nassim Nicholas Taleb, Distinguished Professor of Risk Engineering at New York University Polytechnic School of Engineering, in a paper published by the NYU School of Engineering's Extreme Risk Initiative.
Taleb's paper is an excellent read and cuts through GMO lobby nonsense like a hot knife through butter.
1. Ruin is forever: When the precautionary principle is justified
2. The precautionary principle: Fragility and black swans from policy actions
1. Ruin is forever: When the precautionary principle is justified
by Kurt Cobb
Resilience.org (originally published by Resource Insights), Aug 31, 2014
http://www.resilience.org/stories/2014-08-31/ruin-is-forever-when-the-precautionary-principle-is-justified
If you are dead, you cannot mount a comeback. If all life on Earth were destroyed by, say, a large comet impact, there would be no revival. Ruin is forever.
The destruction of all life on Earth is not 10 times worse than the destruction of one-tenth of all life on Earth. It is infinitely worse. A fall of 1 foot is not one-tenth as damaging to the human body as a fall of 10 feet, nor is it one-hundredth as damaging as a fall of 100 feet (which is very likely to be lethal). Walking down a stairway with one-foot-high steps, we are typically immune to any damage at all. Thus, we can say in both instances above that the harm rises dramatically (nonlinearly) as we move toward any 100 percent lethal limit.
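The nonlinearity here can be made concrete with a toy model. The quadratic harm curve and the 100-foot lethal limit below are illustrative assumptions of mine, not numbers from the paper; only the shape of the curve matters:

```python
def harm(height_ft, lethal_ft=100):
    """Toy convex harm curve: harm grows as the square of fall height,
    capped at 1.0 (total ruin). Purely illustrative numbers."""
    return min((height_ft / lethal_ft) ** 2, 1.0)

# Under this assumed curve, a 10-foot fall is 100 times as harmful
# as a 1-foot fall (not 10 times), and a 100-foot fall hits the cap.
for h in (1, 10, 100):
    print(h, "ft ->", harm(h))
```

The point is the convexity, not the exact exponent: harm rises far faster than the size of the shock as you approach the lethal limit.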
It is just these properties - scope and severity - that most humans seem blind to when introducing innovations into society and the environment, according to a recent paper entitled "The Precautionary Principle: Fragility and Black Swans from Policy Actions". The paper comes from the Extreme Risk Initiative at the New York University School of Engineering and one of its authors, Nassim Nicholas Taleb, is well-known to my readers.
The concepts in the paper are applicable to systemic problems such as climate change. But the paper addresses only two specific issues, genetically modified organisms (GMOs) and nuclear power, to illustrate its main points.
The precautionary principle refers to a policy that demands proof that an innovation is not broadly harmful to humans or the environment before it is deployed. We are referring here to public policy issues, not decisions by individuals. The question the paper tries to answer is: When should this principle be invoked in public policy?
The answer the authors give is surprisingly simple: when the risk of ruin is systemic. That doesn't mean they suggest taking no steps to mitigate risk when ruin might only be local, say, the explosion of a fireworks factory. But they feel that such an event falls within the realm of risk management. An explosion at one fireworks factory cannot set off a chain reaction around the world. Individuals in and around the plant might be ruined, but all of humanity would not be.
In the two examples covered in the paper, GMOs and nuclear power, the authors come to the surprising conclusion that nuclear power on a small scale does not warrant invoking the precautionary principle. Small-scale nuclear power does warrant careful risk management and cost/benefit analysis. Whether the damaged reactors at Fukushima would fall into the category of small-scale nuclear power isn't clear. Their effects were worldwide, even if small in most places.
GMOs, however, offer a classic case of unforeseeable systemic ruin. We will know we are ruined by this untried technology after the ruin happens (perhaps in the form of famine or widespread human health and/or environmental effects). The authors categorically reject the notion that modern genetic engineering of plants is no more dangerous than traditional selective breeding.
This is because traditional methods are tried on a small scale and only achieve large-scale acceptance and use over time if they are successful, that is, demonstrate no drastic side effects or failures. This mimics nature's bottom-up approach to evolution; the changes effected this way are gradual, not drastic - and, of course, they don't involve transferring genetic material from completely different species, say, from a fish into a tomato.
Proponents will say that cross-species transfer of genetic material takes place in nature as well. But its scope is limited and its survivability and evolutionary fitness are tested over long periods during which these changes either thrive or disappear.
The top-down approach of the GMO industry introduces GMO crops everywhere across the world in a short period and combines one risk - untested genetic combinations - with another grave risk - monoculture. The long-term product of these two risks is unknown, but it is rightly categorized as systemic. GMO crops are now deployed worldwide, and they can and do contaminate non-GMO crops and wild plants through pollination.
Crops created through selective breeding have long histories of success and toxicities that are well understood and unlikely to change suddenly. As each new GMO crop is deployed, we cannot know ahead of time whether it will lead to systemic health and/or environment problems because there is little testing and, in any case, the amount of experience we have with GMO crops is far, far shorter than for the products of traditional selective breeding.
With each step we take in the production and deployment of new GMO seeds, we are playing a game of Russian roulette. The first few times we've pulled the trigger, we did not get catastrophic systemic effects - not yet, at least. But since there is a nonzero risk of such effects with each pull, the probability of a catastrophic outcome approaches certainty over time. The risk is virtually 100 percent that we will ultimately reach the chamber in the Russian roulette gene gun that causes catastrophic and widespread damage to humans and/or the environment.
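The arithmetic behind the Russian-roulette point is simple compounding. The 1-percent per-deployment ruin probability below is a made-up illustrative figure (the argument only requires that it be nonzero):

```python
def prob_ruin(p, n):
    """Probability of at least one ruin event in n independent trials,
    each with per-trial ruin probability p."""
    return 1 - (1 - p) ** n

p = 0.01  # assumed 1% chance of systemic ruin per deployment (illustrative)
# Even a small per-trial risk compounds toward certainty as trials repeat.
for n in (1, 10, 100, 1000):
    print(n, "deployments ->", round(prob_ruin(p, n), 4))
```

With repeated trials the cumulative probability of ruin climbs toward 1 no matter how small the per-trial risk, which is why "no evidence of harm so far" says little about the long run.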
Saying that there is no evidence so far that this will happen is a failure to understand that hidden systemic risk can often only show up on very long time scales. And, of course, when that risk does show up, it's too late to do anything. Remember: when we manipulate a gene or genes inside a plant, we are not doing just one thing. Without knowing it, we are affecting multiple systems in the plant and in the environment the plant lives in. We are creating multiple possible pathways to ruin.
This is just a short preview of the article cited above. The article is quite accessible to a lay reader and, in places, even entertaining. I encourage you to read the whole thing. It is the most rigorous statement to date concerning the precautionary principle and risk in that it outlines clear criteria for judging when that principle should be invoked and when it should not be.
2. The precautionary principle: Fragility and black swans from policy actions
Nassim Nicholas Taleb, Rupert Read, Raphael Douady, Joseph Norman, Yaneer Bar-Yam
Extreme Risk Initiative – NYU School of Engineering Working Paper Series
July 24, 2014
https://docs.google.com/file/d/0B8nhAlfIk3QIbGFzOXF5UUN3N2c/edit?pli=1
Abstract
The precautionary principle (PP) states that if an action or policy has a suspected risk of causing severe harm to the public domain (affecting general health or the environment globally), the action should not be taken in the absence of scientific near-certainty about its safety. Under these conditions, the burden of proof about absence of harm falls on those proposing an action, not those opposing it. PP is intended to deal with uncertainty and risk in cases where the absence of evidence and the incompleteness of scientific knowledge carry profound implications and in the presence of risks of "black swans", unforeseen and unforeseeable events of extreme consequence.
This non-naive version of the PP allows us to avoid paranoia and paralysis by confining precaution to specific domains and problems.
Here we formalize PP, placing it within the statistical and probabilistic structure of "ruin" problems, in which a system is at risk of total failure, and in place of risk we use a formal "fragility" based approach. In these problems, what appear to be small and reasonable risks accumulate inevitably to certain irreversible harm. Traditional cost-benefit analyses, which seek to quantitatively weigh outcomes to determine the best policy option, do not apply, as outcomes may have infinite costs. Even high-benefit, high-probability outcomes do not outweigh the existence of low-probability, infinite-cost options - i.e. ruin. Uncertainties result in sensitivity analyses that are not mathematically well behaved. The PP is increasingly relevant due to man-made dependencies that propagate impacts of policies across the globe. In contrast, absent humanity the biosphere engages in natural experiments due to random variations with only local impacts.
Our analysis makes clear that the PP is essential for a limited set of contexts and can be used to justify only a limited set of actions.
We discuss the implications for nuclear energy and GMOs. GMOs represent a public risk of global harm, while harm from nuclear energy is comparatively limited and better characterized. PP should be used to prescribe severe limits on GMOs.