This was a damned difficult story to write for Nature news, and the published version is a fair bit different to this original text. I can’t say which works better – perhaps it’s just one of those stories for which it’s helpful to have more than one telling. Part of the difficulty is that, to be honest, the real interest is fundamental, not in anything this idea can do in an applied sense. Anyway, I’m going to append some comments from coauthor David Sivak of the Lawrence Berkeley National Laboratory, which help to explain the slightly counter-intuitive notion of proteins being predictive machines with memories.
__________________________________________________
Machines are efficient only if they collect information that helps them predict the future
The most efficient machines remember what’s happened to them, and use that memory to predict what the future holds. This conclusion of a new study by Susanne Still of the University of Hawaii at Manoa and her coworkers [1] should apply equally to ‘machines’ ranging from molecular enzymes to computers and even scientific models. It not only offers a new way to think about processes in molecular biology but might ultimately lead to improved computer model-building.
“[This idea] that predictive capacity can be quantitatively connected to thermodynamic efficiency is particularly striking”, says chemist Christopher Jarzynski of the University of Maryland.
The notion of constructing a model of the environment and using it for prediction might seem perfectly natural for a scientific model – a computer model of the weather, say. But it seems peculiar to think of a biomolecule such as a motor protein doing this too.
Yet that’s just what it does, the researchers say. A molecular motor does its job by undergoing changes in the conformation of the proteins that comprise it.
“Which conformation it is in now is correlated with what states the environment passed through previously”, says Still’s coworker Gavin Crooks of the Lawrence Berkeley National Laboratory in California. So the state of the molecule at any instant embodies a memory of its past.
But the environment of a biomolecule is full of random noise, and there’s no gain in the machine ‘remembering’ the fine details of that buffeting. “Some information just isn't useful for making predictions”, says Crooks. “Knowing that the last coin toss came up heads is useless information, since it tells you nothing about the next coin toss.”
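In information-theoretic terms (my gloss, not the paper’s wording), Crooks’s coin example says that the mutual information between successive independent tosses is zero – the past outcome constrains the next one not at all:

```latex
% Independent fair coin tosses: the outcome X_t carries zero
% information about the next outcome X_{t+1}.
I[X_{t+1} ; X_t] = 0
```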
If a machine does store such useless information, eventually it has to erase it, since its memory is finite – for a biomolecule, very much so. But according to the physics of computation – specifically Landauer’s principle – erasing information costs energy: it results in heat being dissipated, which makes the machine inefficient.
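For the record, Landauer’s bound takes a simple quantitative form:

```latex
% Landauer's principle: erasing one bit of memory at temperature T
% dissipates at least k_B T ln 2 of heat (k_B = Boltzmann's constant).
Q_{\mathrm{erase}} \;\geq\; k_B T \ln 2 \quad \text{per bit erased}
```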
On the other hand, information that has predictive value is valuable, since it enables the machine to ‘prepare’ – to adapt to future circumstances, and thus to work optimally. “My thinking is inspired by dance, and sports in general, where if I want to move more efficiently then I need to predict well”, says Still.
Alternatively, think of a vehicle fitted with a smart driver-assistance system that uses sensors to anticipate its imminent environment and react accordingly – to brake in an optimal manner, and so maximize fuel efficiency.
That sort of predictive function costs only a tiny amount of processing energy compared with the total energy consumption of a car. But for a biomolecule, storing information can be very costly, so there’s a finely balanced tradeoff between the energetic cost of information processing and the inefficiencies caused by poor anticipation.
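As I read the paper [1], this tradeoff has a precise quantitative form: the work a machine dissipates is bounded from below by the ‘nostalgic’ part of its memory – the information it holds about the environment that has no predictive value. In rough notation (mine, paraphrasing ref. 1):

```latex
% s_t = state of the machine, x_t = environmental signal at time t.
% Memory:            I_mem(t)  = I[s_t ; x_t]
% Predictive power:  I_pred(t) = I[s_t ; x_{t+1}]
% Average dissipated work is bounded below by the non-predictive memory:
\beta \, \langle W_{\mathrm{diss}} \rangle \;\geq\; I_{\mathrm{mem}}(t) - I_{\mathrm{pred}}(t)
```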
“If biochemical motors and pumps are efficient, they must be doing something clever”, says Still. “Something in fact tied to the cognitive ability we pride ourselves with: the capacity to construct concise representations of the world we have encountered, which allow us to say something about things yet to come.”
This balance, and the search for concision, are precisely what scientific models have to negotiate too. Suppose you are trying to devise a computer model of a complex system, such as how people vote. It might need to take into account the demographics of the population concerned, and the networks of friendship and contact by which people influence each other. Might it also need a representation of mass-media influences? Of individuals’ socioeconomic status? Of their neural circuitry?
In principle, there’s no end to the information the model might incorporate. But then you have an almost one-to-one mapping of the real world onto the model: it’s not really a model at all, but just a mass of data, much of which might end up being irrelevant to prediction.
So again the challenge is to achieve good predictive power without remembering everything. “This is the same as saying that a model should not be overly complicated – that is, Occam's Razor”, says Still. She hopes this new connection between prediction and memory might guide intuition in improving algorithms that minimize the complexity of a model for a specific desired predictive power, used for example to study phenomena such as climate change.
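One way to make that concrete – this is the predictive ‘information bottleneck’ formulation associated with Still’s earlier work, and my paraphrase rather than anything stated in this article – is to ask for the smallest summary m of past data that retains a specified amount of predictive power about future data:

```latex
% Predictive information bottleneck (paraphrase; lambda sets the
% exchange rate between model complexity and predictive power):
\min_{p(m \mid x_{\mathrm{past}})} \; I[m ; x_{\mathrm{past}}] \;-\; \lambda \, I[m ; x_{\mathrm{future}}]
```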
References
1. Still, S., Sivak, D. A., Bell, A. J. & Crooks, G. E. Phys. Rev. Lett. 109, 120604 (2012).
David Sivak’s comments:
On the level of a single biomolecule, the basic idea is that a given protein under given environmental conditions (temperature, pH, ionic concentrations, bound/unbound small molecules, conformation of protein binding partners, etc.) will have a particular equilibrium probability distribution over different conformations. Different protein sequences will have different equilibrium distributions under the same environmental conditions. For example, an evolved protein sequence is more likely to adopt a folded globular structure at ambient temperature than a random polypeptide is. Looking across the distribution of possible environmental conditions, different protein sequences will differ in the correlations between their conformational state and particular environmental variables – that is, in the information their conformational state stores about those variables.
When the environmental conditions change, the equilibrium distribution changes, but the actual distribution of the protein shifts to the new equilibrium only gradually. In particular, the dynamics of interconversion between different protein conformations dictate how long it takes for particular correlations with past environmental variables to die out – that is, how long memory of particular aspects of the environment persists. Thus the conformational preferences (as a function of environmental conditions) and the interconversion dynamics together determine the memory a particular protein sequence has of various aspects of its environmental history.
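To see the timescales at work, here is a minimal sketch – my own illustration with invented rates, not anything from the paper – of a two-state conformational system relaxing to a new equilibrium after its environment changes. The sum of the interconversion rates sets how quickly the memory of the old environment decays:

```python
import math

# Hypothetical interconversion rates between two conformations (per second).
k_fold, k_unfold = 2.0, 0.5
p_eq_new = k_fold / (k_fold + k_unfold)  # equilibrium P(folded) in the NEW environment
p = 0.1          # P(folded) inherited from the OLD environment: the 'memory'

dt, steps = 0.01, 500
for _ in range(steps):
    # Master equation: dp/dt = k_fold * (1 - p) - k_unfold * p
    p += dt * (k_fold * (1.0 - p) - k_unfold * p)

# The deviation from the new equilibrium decays as exp(-(k_fold + k_unfold) * t),
# the timescale on which correlations with the past environment die out.
t = steps * dt
p_analytic = p_eq_new + (0.1 - p_eq_new) * math.exp(-(k_fold + k_unfold) * t)
print(f"simulated P(folded) = {p:.4f}, analytic = {p_analytic:.4f}")
```

Faster interconversion (larger k_fold + k_unfold) means shorter memory; slow dynamics let correlations with past conditions linger.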
One complication is that this memory, this correlation with past environmental states, may be a subtle phenomenon, distributed over many detailed aspects of the protein conformation, rather than something relatively simple like the binding of a specific ion. So, we like to stress that the model is implicit. But it certainly is the case that an enzyme mutated at its active site could differ from the wild-type protein in its binding affinity for a metal ion, and could also have a different rate of ion dissociation. Since the presence or absence of this bound metal ion embodies a memory of past ion concentrations, the mutant and wild-type enzymes would differ in their memory.
For a molecular motor, there are lots of fluctuating quantities in the environment, but only some of these fluctuations will be predictive of things the motor needs for its function. An efficient motor should not, for example, retain memory of every water molecule that collides with it, even if it could, because such a memory would provide negligible information of use in predicting the future fluctuations of the quantities that are relevant to the motor’s functioning.
In vivo, the rotary F0F1-ATP synthase is driven by proton flow across the inner mitochondrial membrane. The motor could retain conformational correlations with many aspects of its past history, but this analysis says that the motor will behave efficiently if it remembers molecular events predictive of when the next proton will flow down its channel – the flow it must couple to its functional role of synthesizing ATP – while losing memory of molecular events irrelevant to that function, such as incidental collisions with water molecules.
But we are hesitant to commit to any particular example. We are all thinking about good concrete instantiations of these concepts for future extensions of this work. Right now, the danger of very specific examples like the F0F1 motor is that people who know much more about the particular system than we do might get bogged down in arguing the details, such as what exactly drives the motor, whether that driving involves conformational selection or induced fit, how concerted the mechanism is, etc., when the main point is that this framework applies regardless of the exact manner in which the system and environment are instantiated. Not to mention the fact that some subtle solvent rearrangements at the mouth of the channel may in fact be very predictive of future proton flow.