Friday, November 24, 2006

Is there such a thing as a 'safe technology'?

[This is the pre-edited text of my latest muse for Nature, which relates to a paper published in the 16 November issue on health and safety issues in nanotechnology.]

Discussions about the risks of emerging technologies must acknowledge that their major impacts have rarely been spotted in advance.

In today's issue of Nature, an international team of scientists presents a five-point scheme for "the safe handling of nanotechnology"[1]. "If the global research community can rise to the challenges we have set", they say, "then we can surely look forward to the advent of safe nanotechnologies".

The five targets that the team sets for addressing potential health risks of nanotechnologies are excellent ones, involving the assessment of toxicities, prediction of impacts on the environment, and establishment of a general strategy for risk-focused research. In particular, the goals are aimed at determining the risks of engineered nanoparticles – how they might enter and move in the environment, to what extent humans might be exposed, and what the consequences of that will be. We need to know all these things with some urgency.

But what is a "safe technology"? The implicit criterion here is that a technology is safe so long as no one is harmed by exposure to its products or processes. According to this criterion, manufacturing nuclear warheads would be "safe" if no human was exposed to dangerous levels of radiation in the process that leads from centrifuge to silo.

To be fair, no one denies that a technology's 'safety' depends on how it is used. The proposals for mapping nanotech's risks are clearly aimed at a very specific aspect of the overall equation, concerned only with the fundamental issues of whether (and how much) exposure to nanotechnological products is bad for our health. But the proposals highlight the curious circumstance that new technologies now seem to be required to undergo a risk assessment at their inception, ideally in parallel with public consultation and engagement to decide what should and shouldn't be permitted.

There is no harm in that. And there's plenty of scope for being creative about it. Some of the broader ethical issues associated with nanotech, for example, are being explored in the US through a series of public seminars organized by the public-education company ICAN Productions. Funded by the US National Science Foundation, ICAN is creating three one-hour seminars in which participants, including scientists, business leaders and members of the public, explore scenarios that illuminate plausible impacts of nanotech on daily life. The results will be presented on US television by Oregon Public Broadcasting in the spring of 2007.

Yet history must leave us with little confidence that either research programs or public debates will anticipate all, or even the major, social impacts of a new technology. We smile now at the idea that road safety could be addressed by having every automobile preceded by a man waving a red flag. In those early days, the pollution caused by cars was barely on the agenda, while the notion that it might affect the global climate would have seemed positively bizarre.

Of course, it is something of a cliché now to say that neither the internal combustion engine nor smoking would ever have been permitted if we had known then what we know now about their dangers. But the point is that we never do – it is hard to identify any important technology for which the biggest risks have been clear in advance.

And even if some of them are, scientists generally lose the ability to do anything about them once the technology reacts with society. Nuclear proliferation was forecast and feared by many of the Manhattan Project physicists, but politicians and generals treated their proposals for avoiding it with contempt (give away secrets to the Russians, indeed!). It took no deep understanding of evolution to foresee the emergence of antibiotic-resistant bacteria, but that didn't prevent profligate over-prescription of the drugs. The dangers of global warming have been known since at least the 1980s, and… well, say no more.

In the case of nanotechnology, there have been discussions of, for example, its likelihood of increasing the gap between rich and poor nations, its impacts on surveillance and privacy, and the social effects of nanotech-enhanced longevity. These are all noble attempts to look beyond the pure science, but it's not at all clear that they will turn out to be the most relevant issues.

Part of the impetus for addressing the 'risks' of nanotech so early in the game comes from a fear that potentially valuable applications could be derailed by a public backlash like the one that led Europe to reject genetically modified organisms – a backlash that stemmed partly (though by no means wholly) from a general lack of information or understanding about the technology, as well as from an arrogant assumption of consumer acquiescence.

The GMO experience has sensitized scientists to the need for early public engagement, and again that is surely a good thing. It is also encouraging to find scientists, and even industries, pressing governments to do more to support research into safety issues and to draft regulations.

What they must avoid, however, is giving the impression that emerging technologies are like toys that can be 'made safe' before being handed to a separate entity called society to play with as it will. Technologies are one of the key drivers of social change, for better or worse. They simply do not exist in isolation from the society that generates them. Not only can we not foresee all their consequences, but some of those consequences aren't present even in principle until culture, sociology, economics and politics (not to mention faith) enter the arena.

Some technologies are no doubt intrinsically 'safer' or 'riskier' than others. But the more powerful they are, the less able we are to distinguish which is which, or to predict how that will play out in practice. Let's by all means look for obvious dangers at the outset – but scientists must also look for ways to become more engaged in the shaping of a technology as it unfolds, while dismantling the now-pervasive notion that all innovations must come with a 'risk-free' label.

Reference
1. Maynard, A. et al. Nature 444, 267–269 (2006).
