In forwarding to the list an e-mail from Prof Terje Traavik, we prefaced it with a comment which referred to Dr Arpad Pusztai as the scientific director of Prof Traavik's institute (Genök - Norwegian Institute of Gene Ecology).
This is not in fact the case. Arpad Pusztai has done work there, but Prof Traavik is the scientific director, which is why he says in the e-mail to Pusztai and Bardocz, "I neither knew I was sacked, nor that Arpad was out for my job." (item 1)
So why were both GM WATCH and Sir Peter Lachmann confused on this issue? It was reported on Prakash's AgBioView that Pusztai was the scientific director, on the basis of an interesting piece on risk by Pusztai (see item 2) that appeared on the website Spiked-online. http://www.spiked-online.com/articles/0000000CA446.htm
Pusztai has forwarded us what he sent to Spiked, which read, "Arpad Pusztai; GenOk (Norwegian Institute of Gene Ecology), Tromso, Norway" but this was changed by Spiked to read, "Árpád Pusztai is scientific director at GenØk - the Norwegian Institute of Gene Ecology, Tromso, Norway".
Just why Spiked did this is unclear but we apologise for any confusion caused.
As Spiked is part of the LM network, and Sir Peter Lachmann sits on the Advisory Council of Sense About Science and is also an advisor to the Genetic Interest Group, both of which have LM network members and Spiked contributors among their key staff, Spiked also seems the probable source of Lachmann's confusion.
For more on Spiked: http://www.gmwatch.org/profile1.asp?PrId=124&page=S
For more on Lachmann: http://www.gmwatch.org/profile1.asp?PrId=74&page=L
---
1. The Traavik e-mail
From: "Terje Traavik"
To: Arpad Pusztai and Susan Bardocz
Dear both
The Pres of the Royal Norwegian Academy of Sciences, prof. Lars Walløe, rang me up yesterday and warned me that Peter Lachmann was out hunting.
He had contacted Walløe and wanted him to take action with me:
1. Because of the Kuala Lumpur talk etc.
2. Because we had made Arpad scientific director of Genök (sic!).
I explained to him that we proudly had included Arpad in our KL delegation, but otherwise I neither knew I was sacked, nor that Arpad was out for my job.
Best regards
Terje
---
2. Observations and comments on the spiked debate on society's risk-aversion
Having read the comments and responses on "are we too risk averse?" in the spiked debate up to 10th February, I was struck by a number of interesting points.
Of these, the first and most startling was that many respondents were not clear about the definition of hazards, risks and risk assessment. The discussion was not much helped by the red herring of fear thrown in by contributors such as John Ryan (Oxford). In fact, most scientists and technologists know what the hazards are for a particular new product. In many instances these can be determined and even measured. However, the probability of their occurring is more difficult to forecast, particularly as probability in most instances rests on assumptions about which people can sharply differ. Moreover, only some of the many possible consequences of the hazards can be reliably assessed, and here again people's views will differ widely. It is quite difficult to predict the unknown, particularly as this depends on whose crystal ball it is viewed through. What industry may regard as an unnecessary hindrance to progress, others in society may view as an important safety measure.
A good example of the confusion can be found in the first paragraph of the "lead response" by Helene Guldberg, the managing editor of spiked. She states that "science is, by its very nature risky, experimentation involves investigating the unknown". This can be misunderstood, because it is not the experimentation (usually done in the confinement of a laboratory) that is risky, but the unforeseen and possibly dangerous consequences of transferring the research results into practical applications. Quite rightly, governments have enacted laws regulating and controlling this transfer, particularly when the new products or technologies could have major potentially harmful effects on public health and the environment.
When she describes all the wonderful inventions, such as antibiotics, drugs generally, in vitro fertilization, etc, which, according to her, could never have come about if we had not taken risks, she misses one crucial point: the law stipulates that these inventions first had to be rigorously tested in animal and then in human clinical trials before most people would ever hear about them.
A good point is made by Sean Davidson about the opposite case, when breakthroughs in fact backfired. Although zero risk does not exist, as we are constantly and painfully reminded by cases like the thalidomide catastrophe, this should not absolve us from doing everything possible to eliminate or minimize risk. People's attitudes are in fact less risk averse when the assessment of new products, such as drugs, is carried out by a well-trusted public regulatory body with a good track record, such as the FDA in the USA. People are quite ready to accept even drugs produced by recombinant genetic technology, such as insulin, because it is known that these first had to be subjected to human clinical trials. In contrast, people have a negative attitude to GM foods because they know that, unlike new drugs, the FDA does not require their rigorous testing.
Unfortunately, while in most instances it is society that has to face the consequences of the hazards, it is industry that takes most of the benefits. In this light it was interesting that those responding from industry were much keener on the notion that for progress' sake "we" ought to take more risks, while others from the general public and from the academic community, particularly from the social sciences, were less convinced.
Another major point missed by some respondents in the debate on society's risk aversion was in fact connected with the question of trust in the regulators and regulation, as discussed above. It serves no purpose, and may even be counterproductive, for industry to lament the irrational and emotional responses from the public or even from "a nervous and insecure elite" if this is meant to replace the proper but costly testing of new products with bland assurances about their safety.
Based on their previous experiences of disasters, people nowadays have a sophisticated understanding of what the risks are. Condescending and patronizing phrases from the industrial and political establishments, such as that according to "sound scientific evidence" people have nothing to fear, will not allay public fears. People understand perfectly that lack of evidence of harm is not evidence of safety, particularly when they see no sign of any intention to carry out proper, transparent safety studies on the new products, confirmed by independent scientists.
The contrast with the past, when publicly-funded research institutes were regarded by people as public watchdogs, is quite revealing. When in the sixties a new director took over our institute's scientific direction, his first words to his staff were: we are not to accept any direct money from industry, so as not to jeopardise our independence in the eyes of the public. Trust is a very special commodity and scientists must earn it. It cannot be bought by expensive advertisements.
People are more likely to accept risks when they take them themselves and when they feel that they are in the driving seat. Forcing them to accept something that they do not want is futile, as the GM food fiasco clearly shows. It is an open secret that even now, some ten years after their introduction, there are only about a dozen or so safety studies published in peer-reviewed science journals. Labelling sceptical scientists as Luddites and exhortations by the industry will not change that. Tony Juniper (FoE) makes a good point when he asks: "who decides what risk is acceptable for whom?"
It was also quite revealing that even someone like Gill Samuels, the science policy supremo of Pfizer, misunderstood the application of the precautionary principle in therapeutic decision-making. Most people will understand, when it is explained to them, what risks are inherent in the medical treatment proposed for them, even when the outcome is uncertain. The point, again, is that the risk will be taken by the patient and not by the doctor, and when the patient trusts the doctor, he or she is more likely to make the decision to face the risk.
Julian Little (Bayer CropScience UK) argues for weighting the evidence to be put to the public. This patronizing attitude is not likely to get him far in persuading people to take risks, because they, quite rightly in my opinion, will ask the obvious question: who is going to do the weighting? I have a feeling that in his opinion this ought to be done by industry's representatives, because they know the facts.
I am afraid I tend to agree with Adrian Holme that there is no simple answer to the question of whether we are more or less risk averse than previous societies. The final conclusion must be that our answer depends mainly on which side of society we come from. For industry the best solution would be a public that implicitly trusts it and willingly takes all the risks inherent in our technologically speeded-up age. On the other side there will of course always be a minority of rigidly conservative-minded people who will object to practically any innovation, but in a democratic society they may not be decisive.
However, if and when industry and its regulators behave honestly and openly, and do not merely go through the motions of establishing which are the best and safest products and technologies, ones which truly advance people's interests and well-being, they will find that most people are willing to take risks for the common good.
As someone who has always been interested in history, I personally do not believe that our society is now more risk averse than any that preceded it.
Arpad Pusztai; GenOk (Norwegian Institute of Gene Ecology), Tromso, Norway.