The highly publicized publication of Snell et al. has been widely touted as definitive proof of the safety of GMOs, but some have strongly criticized the validity of the authors’ conclusions.
Below are two deconstructions of Snell et al. The first (item 1) is by Frédéric Jacquemart, a biologist, chair of the cross-disciplinary research organisation GIET, and a member of France's Haut Conseil sur les Biotechnologies (HCB). The article is slightly technical in parts but should nonetheless be understandable by the lay person. The second (item 2) is from GMO Myths and Truths, by genetic engineers John Fagan and Michael Antoniou and researcher and editor Claire Robinson. The full report can be downloaded here:
http://earthopensource.org/index.php/reports/gmo-myths-and-truths
1. The safety of GMOs: studies are based on non-scientific conclusions - Frédéric Jacquemart
2. The Snell review - John Fagan, Michael Antoniou, and Claire Robinson
---
1. The safety of GMOs: studies are based on non-scientific conclusions
Frédéric Jacquemart
Inf'OGM, May 2014
http://www.infogm.org/The-safety-of-GMO-studies-are
[References located at the link above]
The highly publicized publication of Snell et al.[1] has been widely touted as definitive proof of the safety of genetically modified plants (GM plants) and of the adequacy of the toxicological evaluation methods applied to them. Others have strongly criticized the validity of the authors’ conclusions.
Having closely followed the GMO dossier for many years, Senator Marie-Christine Blandin (Green party) wanted an official assessment of the findings of this publication in order to inform the public debate. She therefore asked the HCB for a scientific opinion, posing some very specific questions, precise enough to be difficult to evade.
The result is of major significance: not only have the main conclusions of the much-publicized article been invalidated, but it has also been shown that no publication to date claiming scientific status and purporting to demonstrate the safety of GM plants through long-term or multigenerational studies is conclusive! We knew it, but seeing it officially confirmed really changes the face of the public debate.
On December 15, 2011, Agnès Ricroch, of AgroParisTech and of the French Academy of Agriculture, declared on the radio station Europe 1, referring to the article by Snell et al. which she co-authored: “the debate on the health issues of GMOs is closed”. This peremptory and definitive statement, repeated many times since, was based on an article published shortly afterwards in a scientific journal, Food and Chemical Toxicology (FCT), the same journal that would later publish the famous article by Gilles-Eric Séralini’s team on NK603 and Roundup[2]. Beyond this coincidence, the two articles are closely related, owing to the controversy that then surrounded the publication by Séralini’s team. The article co-authored by Ricroch was held up as a counter-example and, more or less explicitly, as a model of good scientific conduct, to the point that Gérard Pascal and Agnès Ricroch were invited as “prosecution witnesses” to the famous session of OPECST[3] devoted to the article by Gilles-Eric Séralini. The MP Le Déaut, Vice-President of OPECST and organizer of this session, presented Agnès Ricroch on that occasion as “co-author of a meta-analysis on the same topic which opposed the study of Gilles-Eric Séralini”. Following the latter publication, a petition was launched by members of the CNRS[4] denouncing the work of Séralini, which stated: “Let us remember that last March a summary of 24 studies, all concluding the safety of GMOs in food, was published in the same journal”[5].
It was therefore logical for Marie-Christine Blandin to question the scientific validity of the conclusions of this “meta-analysis” and accordingly to request the opinion of the High Council of Biotechnologies (HCB)[6].
Indeed, the first question asked by the senator concerns precisely this qualification: is this work a “meta-analysis”? The term was not used in the title of the article, but it appears in the text, and Mr. Le Déaut, among others, as we have seen, used it without being contradicted by Mrs. Ricroch (or by Gérard Pascal, who was also present). This is a bit technical, but it matters, because a meta-analysis corresponds to a methodology that gives weight to the findings. Here the Scientific Committee (SC) of the HCB gives a clear answer: no, it is not a meta-analysis.
The second question is worth quoting in full: “Mrs. Ricroch says: ‘17 of the 24 studies are of good quality, i.e. they have good statistical power’ (p.36, OPECST report). My second question is this: of these 17 studies, how many provide a calculation of statistical power, within what interval do these values fall, and by what criteria can they be called good?”. This question is more troublesome than the first, and the doublespeak already present in the introduction of the HCB’s report resurfaces. We will come back to this point, but in a nutshell: a negative result (no toxic effects found, for example) is of interest only if the statistical power is sufficient, that is, only if the conditions of the study would have allowed the effect to be seen if it existed[7]. If one is not able to see what one is searching for, it is of no interest to affirm that one sees nothing. Let us note immediately that this is the case for the famous 17 studies described as “good”, hence the embarrassment of the experts of the scientific committee, who prefer to avoid the question concerning this qualification, saying that it is not part of their remit to comment on remarks made at public meetings (remarks made, however, by one of the authors of the article, and, let us remember, in a parliamentary setting).
Concerning the question of statistical power, it is difficult not to respond. Anyone with access to scientific publications can verify that none of those cited provides a statistical power calculation, contrary to what Ricroch stated publicly. The scientific committee does respond to this effect, but, and here lies the sensitive issue, it attempts to minimize the significance of this information: “In the studies whose goal was to demonstrate an effect or a lack of effect, statistical analysis was necessary. Upon review, the scientific committee of the HCB considers that 5 studies among the 24 could have benefited from statistical power calculations or tests of equivalence to justify the conclusions advanced by these studies.”
Could have benefited!
This deliberately misleading way of expressing themselves, in response to questions from an elected representative of the French Republic, contrasts starkly with the language used by the expert statistician of the scientific committee of the HCB on his personal blog[8] to describe the same publication by Snell et al. when it appeared, referring to the front-page headline in Le Figaro:
"This front-page news is likewise based on a scientific publication, that of Snell et al. (published ... yet again in the same journal, Food and Chemical Toxicology). This article reviews 24 studies on the subject and concluded:
“The studies reviewed present evidence to show that GM plants are nutritionally equivalent to their non-GM counterparts and can be safely used in food and feed.”[9]
Yet again, the conclusion as expressed by the authors of the article goes far beyond what the studies permit one to say. Let us recall that many of the studies deal with groups of only 10 animals (sometimes even 5 or 3). One can criticize the authors in the same way Séralini was criticized: to conclude systematically, and in such a definitive way, on the basis of such limited information makes no sense! Furthermore, the tests used in these studies are comparative statistical tests, which in no way allow any conclusion concerning the total absence of risk or the notion of biological equivalence. The inferential statistical tool theoretically suited to this question is the statistical equivalence test."
A little later in this blog, the statistician clarifies:
“A biologically significant difference may not be statistically significant if the data available are insufficient. Power analysis is therefore essential to assess what size of effect can be detected with a given size of sample.”
We are talking about the same expert who validated the conclusion of the HCB’s review: “among the studies listed by Snell et al., none includes a power calculation, as their bibliography shows. However, depending on the question posed in each article, the scientific committee of the HCB emphasizes that such calculations are not systematically necessary”.
The expert therefore EMPHASIZES that what is essential is not systematically necessary...
This is worth pondering.
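To make the statistician's point concrete, here is a minimal sketch of a power calculation for a two-sample comparison, using a normal approximation. The group size (10 animals, as in many of the reviewed studies) and the standardized effect size (Cohen's d = 0.8, conventionally considered "large") are illustrative assumptions, not figures taken from any particular study:

```python
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample comparison of means.

    effect_size: standardized difference (Cohen's d) one wants to detect.
    Uses a normal approximation; a t-based calculation would give
    slightly lower power for samples this small.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)          # critical value, two-sided
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return nd.cdf(noncentrality - z_crit)

# With groups of 10 animals, even a large effect (d = 0.8) would be
# detected less than half the time -- far below the conventional 80%
# threshold, so "no significant difference" proves very little.
power = two_sample_power(0.8, 10)
print(f"power = {power:.2f}")  # roughly 0.43
```

In other words, a study of this size that finds nothing has simply not looked with enough resolution, which is exactly the point made in the quote above.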
Mrs. Blandin’s questions, of considerable importance, lead to major conclusions:
1) the publications relied on by Snell and colleagues, among them well-known pro-GMO scientists Agnès Ricroch, Gérard Pascal, and Marcel Kuntz, are inconclusive;
2) since this review of the literature is in principle exhaustive, there is no publication in the scientific literature on long-term or multigenerational toxicology that is scientifically conclusive;
3) the conclusion, consistent with the wishes of industrial GMO producers, that there is no need to make toxicology protocols more demanding, is unfounded;
4) it is clear that there is a double standard in the treatment of publications claiming to show toxicity related to GMOs compared with publications claiming to show the safety of GMOs. This double standard exists in the media, in scientific publishing (Food and Chemical Toxicology retracted the article by Professor Séralini, but did it retract Snell’s article? Note that it did not retract the article by Zhu et al.[10] either, despite requests by GIET, even though that article is clearly inconclusive and contains scientifically unfounded statements[11]), and in official expertise (one need only compare the HCB’s opinion on the article by G.-E. Séralini with its opinion on the article by Snell...).
The questions of Mrs. Blandin have brought considerable clarification to the GMO debate.
EFSA responds clearly to two decisive questions
When the GMO panel of EFSA was questioned by the chairman of GIET, who is also co-leader of the biotechnology mission of France Nature Environnement, it did not respond with doublespeak!
Judge for yourselves:
a - in a test comparing means (difference test), a negative result (no significant difference) can be taken into account only if it is accompanied by a power calculation showing that the power is at least 80%, and if the minimum effect size to be detected is specified and justified.
Does EFSA agree with this proposal?
Answer: YES
b - a conclusion of equivalence between two samples (e.g. GM maize versus its conventional counterpart) can be drawn only if an equivalence test was performed (null hypothesis: the samples are different). In that case, the demonstrated equivalence applies only to the parameters tested and cannot be generalized beyond them.
Does EFSA agree with this proposal?
Answer: YES
This means two things in particular:
1) because, as established by the scientific committee of the HCB, none of the publications used by Snell et al. provides a statistical power calculation and none contains an equivalence test, the assertions made by Snell et al. concerning the safety of GMOs are unfounded and inadmissible;
2) since no application dossier for the authorization of a GMO provides a statistical power calculation, and none includes an equivalence test even though their conclusions rest on assertions of equivalence (substantial equivalence, equivalence between treated and control animals, nutritional equivalence...), it is clear, even in the view of the body that has nevertheless validated them, that no GMO authorization dossier to date is scientifically acceptable.
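For readers unfamiliar with the equivalence test mentioned in point (b), here is a minimal sketch of the TOST procedure (two one-sided tests), using a normal approximation and entirely hypothetical measurements. The point it illustrates: equivalence is something one must actively demonstrate within a stated margin, and with small samples it can only be shown for loose margins:

```python
from statistics import NormalDist, mean, stdev

def tost_equivalence(x, y, margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two group means.

    Null hypothesis: the true means differ by more than `margin`.
    Equivalence is concluded only if BOTH one-sided tests reject
    (normal approximation; hypothetical illustration, not a study re-analysis).
    """
    nd = NormalDist()
    diff = mean(x) - mean(y)
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    p_lower = 1 - nd.cdf((diff + margin) / se)  # H0: diff <= -margin
    p_upper = nd.cdf((diff - margin) / se)      # H0: diff >= +margin
    return max(p_lower, p_upper) < alpha

# Hypothetical data: two tiny groups of five measurements each.
gm  = [5.1, 4.9, 5.3, 5.0, 4.8]
ctl = [5.0, 5.2, 4.9, 5.1, 5.0]

print(tost_equivalence(gm, ctl, margin=0.5))   # True: equivalent within a loose margin
print(tost_equivalence(gm, ctl, margin=0.15))  # False: too little data for a tight margin
```

Note the contrast with a mere difference test: failing to find a significant difference is not evidence of equivalence, which is precisely why EFSA's "YES" to point (b) matters.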
Translation from the French by Martina Westover and Frédéric Jacquemart
---
2. The Snell review
GMO Myths and Truths
John Fagan, Michael Antoniou, and Claire Robinson
May 2014
http://earthopensource.org/gmomythsandtruths/sample-page/3-health-hazards-gm-foods/3-3-myth-many-long-term-studies-show-gm-safe/
[References at the link above]
A review by Snell and colleagues (2011) purports to examine the health impacts of GM foods as revealed by long-term and multi-generational studies. The Snell review concludes that the GM foods examined are safe. However, this cannot be justified from the data presented in the review.
Some of the studies examined by Snell and colleagues are not even toxicological studies that look at health effects. Instead they are so-called animal production studies that look at aspects of interest to food producers, such as feed conversion (the amount of weight the animal puts on relative to the amount of food it eats) or milk production in cows.
Several of the studies examined are not long-term studies, in that they do not follow the animal over anything approaching its natural lifespan. For example, Snell and colleagues categorized a 25-month feeding study with GM Bt maize in dairy cows by Steinke and colleagues as a long-term study. But although most dairy cows are sent to slaughter at four to five years old because their productivity and commercial usefulness decrease after that age, a cow’s natural lifespan is 17–20 years. A 25-month study in dairy cows is equivalent to around eight years in human terms. So from a toxicological point of view, Steinke’s study is not long-term and could at most be described as medium-term (subchronic).
Similarly, Snell and colleagues categorized a seven-month study in salmon as long-term. But a farmed salmon lives for between 18 months and three years before being killed and eaten and a wild salmon can live for seven to eight years. Snell and colleagues also categorized as long-term some studies in chickens lasting 35 and 42 days, even though a chicken’s natural lifespan is between seven and 20 years, depending on breed and other factors. So again, these are not long-term studies.
Moreover, in Steinke’s study, nine cows from the treatment group and nine from the control group – half of the 18 animals in each group – fell ill or proved infertile, for reasons that were not investigated or explained. In a scientifically unjustifiable move, these cows were simply removed from the study and replaced with other cows. No analysis is presented to show whether the problems that the cows suffered had anything to do with either of the two diets tested.
It is never acceptable to replace animals in a feeding experiment. For this reason alone, this study is irrelevant to an assessment of health effects from GM feed and Snell and colleagues should not have included it in their analysis.
Many of the studies reviewed are on animals that have a very different digestive system and metabolism to humans, and so are not considered relevant to assessing human health effects. These include studies on broiler chickens, cows, sheep, and fish.
Some of the studies reviewed did in fact find toxic effects in the GM-fed animals, but these were dismissed by Snell and colleagues. For example, findings of damage to liver and kidneys and alterations in blood biochemistry in rats fed GM Bt maize over three generations are dismissed, as are the findings of Manuela Malatesta’s team of abnormalities in the liver, pancreatic, and testicular cells of mice fed GM soy, in both cases on the basis that the researchers used a non-isogenic variety as the non-GM comparator. This was unavoidable, given the refusal of GM seed companies to release their patented seeds to independent researchers.
An objective assessment of Malatesta’s findings would have concluded that while the results do not show that GM soy was more toxic than the non-GM isogenic variety (because the isogenic variety was not used), they do show that GM herbicide-tolerant soy was more toxic than the wild-type soybean tested, either because of the herbicide used, or the effect of the genetic engineering process, or the different environmental conditions in which the two soy types were grown, or a combination of two or more of these factors.
In an extraordinary move, Snell and colleagues offered as the main counter to Malatesta’s experimental findings a poster presentation offering no new or existing data and with no references, presented at a Society of Toxicology conference by two employees of the chemical industry consultancy firm Exponent. Though at first glance the reference given by Snell and colleagues has the appearance of a peer-reviewed paper, it is not. An abstract of the presentation was also published by the Society of Toxicology in its collection of conference proceedings.
Presentations given at conferences are not usually subjected to the scrutiny given to peer-reviewed publications. They certainly do not carry sufficient weight to counter original research findings from laboratory animal feeding experiments with GM foods, such as Malatesta’s. This is particularly true when they do not base their argument on hard data, as in the case of this opinion piece.
Snell and colleagues use double standards
Snell and colleagues dismiss studies’ findings of risk on the basis that the researchers did not use a non-GM isogenic comparator, while accepting findings of safety from studies with the same methodological weakness.
They also accept as proof of GMO safety some studies in which the number of animals is not stated, meaning that it is not possible to analyze the statistics to see if the findings are statistically significant (and therefore meaningful).
Other studies accepted by Snell and colleagues as proof of GMO safety include some with such small group sizes (for example, of six animals) that no conclusions can be drawn from them.
It is instructive to recall how critics of Séralini’s 2012 study[1] claimed that his groups of ten animals per sex per group were too small to draw any conclusions. This allegation is incorrect according to the standards of the Organisation for Economic Co-operation and Development (OECD) for chronic toxicity studies, which require that only ten animals per sex per group be analyzed for blood and urine chemistry; yet it remains clear that studies using sample sizes of only six animals cannot be used to claim safety for GM foods.
As a result of these double standards, Snell and colleagues’ review is fatally biased and no conclusions can be drawn from it.