I have a piece in Aeon about instruments in science. Here’s how it looked at the outset.
_____________________________________________________________
Whenever I visit scientists to discuss their research, there always comes a moment when they say, with pride they can barely conceal, “Do you want a tour of the lab?” It is invariably slightly touching – like Willy Wonka dying to show off his factory. I’m always glad to accept, knowing what lies in store: shelves bright with bottles of coloured liquid and powders, webs of gleaming glass tubing, slabs of perforated steel holding lasers and lenses, cryogenic chambers like ornate bathyspheres whose quartz windows protect slivers of material about to be raked by electron beams.
It’s rarely less than impressive. Even if the kit is off-the-shelf, it will doubtless be wired into a makeshift salmagundi of wires, tubes, cladding, computer-controlled valves and rotors and components with more mysterious functions. Much of the gear, however, is likely to be home-made, custom-built for the research at hand. The typical lab set-up is, among other things, a masterpiece of impromptu engineering – you’d need degrees in electronics and mechanics just to put it all together, never mind how you make sense of the graphs and numbers it produces.
All this usually stays behind the scenes in science. Headlines announcing “Scientists have found…” rarely bother to tell you how those discoveries were made. And would you care? The instrumentation of science is so highly specialized that it must often be accepted as a kind of occult machinery for producing knowledge. We figure they must know how it all works.
It makes sense in a way that histories of science tend to focus on the ideas and not the methods – surely what matters most is what was discovered about the workings of the world? But most historians of science today recognize that the relationship of scientists to their instruments is an essential part of the story. It is not simply that the science is dependent on the devices; rather, the devices determine what is known. You explore the things that you have the means to explore, and you plan your questions accordingly. That’s why, when a new instrument comes along – the telescope and the microscope are the most thoroughly investigated examples, but this applies as much today as it did in the seventeenth century – entirely new fields of science can be opened up. Less obviously, such developments demand a fresh negotiation between the scientists and their machines, and it’s not fanciful to see there some of the same characteristics as are found in human relationships. Can you be trusted? What are you trying to tell me? You’ve changed my life! Look, isn’t she beautiful? I’m bored with you, you don’t tell me anything new any more. Sorry, I’m swapping you for a newer model.
That’s why it is possible to speak of interactions between scientists and their instruments that are healthy or dysfunctional. How do we tell one from the other?
The telescope and microscope were celebrated even by their first users as examples of the value of enhancing the powers of human perception. But the most effective, not to mention elegant, scientific instruments serve also as a kind of prosthesis for the mind: they emerge as an extension of the experimenter’s thinking. That is exemplified in the work of the New Zealand physicist Ernest Rutherford, perhaps the finest experimental scientist of the twentieth century. Rutherford famously preferred the sealing-wax-and-string approach to science: it was at a humble benchtop with cheap, improvised and homespun equipment that he discovered the structure of the atom and then split it. This meant that Rutherford would devise his apparatus to tell him precisely what he wanted to know, rather than being limited by someone else’s view of what one needed to know. His experiments thus emerged organically from his ideas: they could almost be seen as theories constructed out of glass and metal foil.
Ernest Rutherford’s working space in the Cavendish Laboratory, Cambridge, in the 1920s.
In one of the finest instances, at Manchester University in 1908 Rutherford and his coworkers figured out that the alpha particles of radioactive decay are the nuclei of helium atoms. If that were so, then one ought to be able to collect the particles and see whether they behave like helium. Rutherford ordered from his glassblower Otto Baumbach a glass capillary tube with extraordinarily thin walls, so that alpha particles emitted from radium could pass right through. Once they had accumulated in an outer chamber, Rutherford connected it up to become a gas-discharge tube, revealing the helium from the fingerprint wavelength of its glow. It was an exceedingly rare example of a piece of apparatus that answers a well-defined question – are alpha particles helium? – with a simple yes/no answer, almost literally by whether or not a light switches on.
A more recent example of an instrument embodying the thought behind it is the scanning tunnelling microscope, invented by the late Heinrich Rohrer and Gerd Binnig at IBM’s Zurich research lab in the 1980s. They knew that electrons within the surface of an electrically conducting sample should be able to cross a tiny gap to reach another electrode held just above the surface, thanks to a quantum-mechanical effect called tunnelling. Because tunnelling is acutely sensitive to the width of the gap, a needle-like metal tip moving across the sample, just out of contact, could trace out the sample’s topography. If the movement was fine enough, the map might even show individual atoms and molecules. And so it did.
A ring of iron atoms on the surface of copper, as shown by the scanning tunnelling microscope. The ripples on the surface are electron waves. Image: IBM Almaden Research Center.
Between the basic idea and a working device, however, lay an incredible amount of practical expertise – of sheer craft – allied to rigorous thought. Against all expectation (they were often told the instrument “should not work” in principle), Rohrer and Binnig got it going, invented perhaps the central tool of nanotechnology, and won a Nobel prize in 1986 for their efforts.
So that’s when it goes right. What about when it doesn’t?
Scientific instruments have always been devices of power: those who possess the best can find out more than the others. Galileo recognized this: he conducted a cordial correspondence with Johannes Kepler in Prague, but when Kepler requested the loan of one of Galileo’s telescopes the Italian found excuses, knowing that with one of these instruments Kepler would be an even more serious rival. Instruments, Galileo already knew, confer authority.
But now instruments – newer, bigger, better – have become symbols of prestige as never before. I have several times been invited to admire the most state-of-the-art device in a laboratory purely for its own sake, as though I am being shown a Lamborghini. Historian of medical technology Stuart Blume of the University of Amsterdam has argued that, as science has started to operate according to the rules of a quasi-market, the latest equipment serves as a token of institutional might that enhances one’s competitive position in the marketplace. When I spoke to several chemists recently about their use of second-hand equipment, often acquired from the scientific equivalent of eBay, they all asked to remain anonymous, as though this would mark them out as second-rate scientists.
One of the dysfunctional consequences of this sort of relationship with an instrument is that the machine becomes its own justification, its own measure of worth – a kind of totem rather than a means to an end. A result is then “important” not because of what it tells us but because of how it was obtained. The Hubble Space Telescope is (despite its initial myopia) one of the most glorious instruments ever made, a genuinely new window on the universe. But when it first began to send back images of the cosmos in the mid-1990s, Nature would regularly receive submissions reporting the first “Hubble image” of this or that astrophysical object. The authors would be bemused and affronted when told that what the journal wanted was not the latest pretty picture, but some insight into the process it was observing – a matter that required rather more thought and research.
This kind of instrument-worship is, however, at least relatively harmless in the long run. More problematic is the notion of instrument as “knowledge machine”, an instrument that will churn out new understanding as long as you keep cranking the handle. The European particle-physics centre CERN has flirted with this image for the Large Hadron Collider, which the former director-general Robert Aymar called a “discovery machine.” This idea harks back (usually without knowing it) to a tradition begun by Francis Bacon in his Novum Organum (1620). Here Bacon drew on Aristotle’s notion of an organon, a mechanism for logical deduction. Bacon’s “new organon” was a new method of analysing facts, a systematic procedure (what we would now call an algorithm) for distilling observations of the world into underlying causes and mechanisms. It was a gigantic logic machine, accepting facts at one end and ejecting theorems at the other.
In the event, Bacon’s “organon” was a system so complex and intricate that he never even finished describing it, let alone put it into practice. Even if he had, it would have been to no avail, because it is now generally agreed among philosophers and historians of science that this is not how knowledge comes about. Piling up facts in a Baconian manner while postponing indefinitely the framing of hypotheses to explain them – the preferred approach of the early experimental scientists, like those who formed the Royal Society – will get you nowhere. (It’s precisely because they couldn’t in fact restrain their impulse to interpret that men like Isaac Newton and Robert Boyle made any progress.) Unless you begin with some hypothesis, you don’t know which facts you are looking for, and you’re liable to end up with a welter of data, mostly irrelevant and certainly incomprehensible.
This seems obvious, and most scientists would agree. But that doesn’t mean the Baconian “discovery machine” has vanished. As it happens, the LHC doesn’t have this defect after all: the reams of data it has collected are being funnelled towards a very few extremely well-defined (even over-refined) hypotheses, in particular the existence of the Higgs particle. But the Baconian impulse is alive and well elsewhere, driven by the allure of “knowledge machines”. The ability to sequence genomes quickly and cheaply will undoubtedly prove valuable for medicine and fundamental genetics, but these experimental techniques have already far outstripped not only our understanding of how genomes operate but also our ability to formulate questions about them. As a result, some gene-sequencing projects seem conspicuously to lack a suite of ideas to test. The hope seems to be that, if you have enough data, understanding will somehow fall out of the bottom of the pile. Little wonder that biologist Robert Weinberg of the Massachusetts Institute of Technology has said that “the dominant position of hypothesis-driven research is under threat.”
And not just in genomics. The United States and Europe have recently announced two immense projects, costing hundreds of millions of dollars, to use the latest imaging technologies to map out the human brain, tracing out every last one of the billions of neural connections. Some neuroscientists are drooling at the thought of all that data. “Think about it,” said one. “The human brain produces in 30 seconds as much data as the Hubble Space Telescope has produced in its lifetime.”
If, however, one wanted to know how cities function, creating a map of every last brick and kerb would be an odd way to go about it. Quite how these brain projects will turn all their data into understanding remains a mystery. One researcher in the European project, simply called the Human Brain Project, inadvertently revealed the paucity of any theoretical framework for navigating this information glut: “It is a chicken and egg situation. Once we know how the brain works, we’ll know how to look at the data.” The fact that the Human Brain Project is not quite that clueless hardly mitigates the enormity of this flippant statement. Science has never worked by shooting first and asking questions later, and it never will.
Biology, in which the profusion of evolutionary contingencies makes it particularly hard to formulate broad hypotheses, has long felt the danger of a Baconian retreat to pure data-gathering, substituting instruments for thinking. Austrian biochemist Erwin Chargaff, whose work helped elucidate how DNA stores genetic information, commented on this tendency as early as 1977:
“Now I go through a laboratory… and there they all sit before the same high speed centrifuges or scintillation counters, producing the same superposable graphs. There has been very little room left for the all important play of scientific imagination.”
Thanks to this, Chargaff said, “a pall of monotony has descended on what used to be the liveliest and most attractive of all scientific professions.” Like Chargaff, the pioneer of molecular biology Walter Gilbert saw in this reduction of biology to a set of standardized instrumental procedures repeated ad nauseam an encroachment of corporate strategies into the business of science. It was becoming an industrial process, manufacturing data on the production line: data produced, like consumer goods, because we have the instrumental means to do so, not because anyone knows what to do with it all. Nobel laureate biochemist Otto Loewi saw this happening in the life sciences even in 1954:
“Sometimes one has the impression that in contrast with former times, when one searched for methods in order to solve a problem, frequently nowadays workers look for problems with which they can exploit some special technique.”
High-energy physics now works on a similar industrial scale, with big machines at the centre. It doesn’t suffer the same lack of hypotheses as areas of biology, but arguably it can face the opposite problem: a consensus around a single idea, into which legions of workers burrow single-mindedly. Donald Glaser, the inventor of the bubble chamber, saw this happening in the immediate postwar period, once the Manhattan Project had provided the template:
“I knew that large accelerators were going to be built and they were going to make gobs of strange particles. But I didn’t want to join an army of people working at big machines.”
For Glaser the machines were taking over, and it was only by keeping his distance from them that he devised his Nobel-prizewinning technique.
The challenge for the scientist, then, particularly in the era of Big Science, is to keep the instrument in its place. The best scientific kit comes from thinking about how to solve a problem. But once instruments become part of the standard repertoire, or acquire a lumbering momentum of their own, they may cease to assist thinking and begin instead to constrain it. As historians of science Albert van Helden and Thomas Hankins have said, “Because instruments determine what can be done, they also determine to some extent what can be thought.”
1 comment:
Nice piece, but:
"Science has never worked by shooting first and asking questions later, and it never will."
What of the laser? When it was invented, was it not described as "a solution looking for a problem"? Now the laser is part of the definition of the 'metre', simply due to its high precision for measuring time and distance.
And isn't 'anthropogenic global warming', or whatever its re-branded name is these days, an example of hypotheses leading the physical data? Only here, it's the 'scientists' themselves who have become the mysterious source of 'power' knowledge, right down to the accusations of heresy against those who would cast any doubts upon the 'computer modelling' they choose to evangelise.