{"title": "Stability and Noise in Biochemical Switches", "book": "Advances in Neural Information Processing Systems", "page_first": 103, "page_last": 109, "abstract": null, "full_text": "Stability and noise in biochemical switches \n\nNEC Research Instit ute \n\n4 Independence Way \n\nWilliam Bialek \n\nPrinceton, New Jersey 08540 \nbialek@research. nj. nec. com \n\nAbstract \n\nMany processes in biology, from the regulation of gene expression in \nbacteria to memory in the brain, involve switches constructed from \nnetworks of biochemical reactions. Crucial molecules are present in \nsmall numbers, raising questions about noise and stability. Analysis \nof noise in simple reaction schemes indicates that switches stable for \nyears and switchable in milliseconds can be built from fewer than \none hundred molecules. Prospects for direct tests of this prediction, \nas well as implications, are discussed. \n\n1 \n\nIntroduction \n\nThe problem of building a reliable switch arises in several different biological con(cid:173)\ntexts. The classical example is the switching on and off of gene expression during \ndevelopment [1], or in simpler systems such as phage .A [2]. It is likely that the cell \ncycle should also be viewed as a sequence of switching events among discrete states, \nrather than as a continuously running clock [3]. The stable switching of a specific \nclass of kinase molecules between active and inactive states is believed to playa role \nin synaptic plasticity, and by implication in the maintenance of stored memories \n[4] . Although many details of mechanism remain to be discovered, these systems \nseem to have several common features. First, the stable states of the switches are \ndissipative, so that they reflect a balance among competing biochemical reactions. \nSecond, the total number of molecules involved in the construction of the switch is \nnot large. 
Finally, the switch, once flipped, must be stable for a time long compared to the switching time, perhaps, for development and for memory, even for a time comparable to the life of the organism. Intuitively we might expect that systems with small numbers of molecules would be subject to noise and instability [5], and while this is true we shall see that extremely stable biochemical switches can in fact be built from a few tens of molecules. This has interesting implications for how we think about several cellular processes, and should be testable directly.\n\nMany biological molecules can exist in multiple states, and biochemical switches use this molecular multistability so that the state of the switch can be 'read out' by sampling the states (or enzymatic activities) of individual molecules. Nonetheless, these biochemical switches are based on a network of reactions, with stable states that are collective properties of the network dynamics and not of any individual molecule. Most previous work on the properties of biochemical reaction networks has involved detailed simulation of particular kinetic schemes [6], for example in discussing the kinase switch that is involved in synaptic plasticity [7]. Even the problem of noise has been discussed heuristically in this context [8]. The goal in the present analysis is to separate the problem of noise and stability from other issues, and to see if it is possible to make some general statements about the limits to stability in switches built from a small number of molecules. This effort should be seen as being in the same spirit as recent work on bacterial chemotaxis, where the goal was to understand how certain features of the computations involved in signal processing can emerge robustly from the network of biochemical reactions, independent of kinetic details [9]. 
\n\n2 Stochastic kinetic equations\n\nImagine that we write down the kinetic equations for some set of biochemical reactions which describe the putative switch. Now let us assume that most of the reactions are fast, so that there is a single molecular species whose concentration varies more slowly than all the others. Then the dynamics of the switch essentially are one dimensional, and this simplification allows a complete discussion using standard analytical methods. In particular, in this limit there are general bounds on the stability of switches, and these bounds are independent of (incompletely known) details in the biochemical kinetics. It should be possible to make progress on multidimensional versions of the problem, but the point here is to show that there exists a limit in which stable switches can be built from small numbers of molecules.\n\nLet the number of molecules of the 'slow species' be n. All the different reactions can be broken into two classes: the synthesis of the slow species at a rate f(n) molecules per second, and its degradation at a rate g(n) molecules per second; the dependencies on n can be complicated because they include the effects of all other species in the system. Then, if we could neglect fluctuations, we would write the effective kinetic equation\n\ndn/dt = f(n) - g(n).   (1)\n\nIf the system is to function as a switch, then the stationarity condition f(n) = g(n) must have multiple solutions with appropriate local stability properties.\n\nThe fact that molecules are discrete units means that we need to give the chemical kinetic Eq. (1) another interpretation. It is the mean field approximation to a stochastic process in which there is a probability per unit time f(n) of making the transition n → n+1, and a probability per unit time g(n) of the opposite transition n → n-1. 
Thus if we consider the probability P(n, t) for there being n molecules at time t, this distribution obeys the evolution (or 'master') equation\n\n∂P(n, t)/∂t = f(n-1)P(n-1, t) + g(n+1)P(n+1, t) - [f(n) + g(n)]P(n, t),   (2)\n\nwith obvious corrections for n = 0, 1. We are interested in the effects of stochasticity for n not too small. Then 1 is small compared with typical values of n, and we can approximate P(n, t) as being a smooth function of n. We can expand Eq. (2) in derivatives of the distribution, and keep the leading terms:\n\n∂P(n, t)/∂t = (∂/∂n){ [g(n) - f(n)]P(n, t) + (1/2)(∂/∂n)[f(n) + g(n)]P(n, t) }.   (3)\n\nThis is analogous to the diffusion equation for a particle moving in a potential, but this analogy works only if we allow the effective temperature to vary with the position of the particle.\n\nAs with diffusion or Brownian motion, there is an alternative to the diffusion equation for P(n, t), and this is to write an equation of motion for n(t) which supplements Eq. (1) by the addition of a random or Langevin force ξ(t):\n\ndn/dt = f(n) - g(n) + ξ(t),   (4)\n⟨ξ(t)ξ(t')⟩ = [f(n) + g(n)]δ(t - t').   (5)\n\nFrom the Langevin equation we can also develop the distribution functional for the probability of trajectories n(t). It should be emphasized that all of these approaches are equivalent provided that we are careful to treat the spatial variations of the effective temperature [10].¹ In one dimension this complication does not impede solving the problem. For any particular kinetic scheme we can compute the effective potential and temperature, and kinetic schemes with multiple stable states correspond to potential functions with multiple minima.\n\n3 Noise induced switching rates\n\nWe want to know how the noise term destabilizes the distinct stable states of the switch. 
If the noise is small, then by analogy with thermal noise we expect that there will be some small jitter around the stable states, but also some rate of spontaneous jumping between the states, analogous to thermal activation over an energy barrier as in a chemical reaction. This jumping rate should be the product of an \"attempt frequency\" (of order the relaxation rate in the neighborhood of one stable state) and a \"Boltzmann factor\" that expresses the exponentially small probability of going over the barrier. For ordinary chemical reactions this Boltzmann factor is just exp(-F‡/kBT), where F‡ is the activation free energy. If we want to build a switch that can be stable for a time much longer than the switching time itself, then the Boltzmann factor has to provide this large ratio of time scales.\n\nThere are several ways to calculate the analog of the Boltzmann factor for the dynamics in Eq. (4). The first step is to make more explicit the analogy with Brownian motion and thermal activation. Recall that Brownian motion of an overdamped particle is described by the Langevin equation\n\nγ dx/dt = -V'(x) + η(t),   (6)\n\nwhere γ is the drag coefficient of the particle, V(x) is the potential, and the noise force has correlations ⟨η(t)η(t')⟩ = 2γT δ(t - t'), where T is the absolute temperature measured in energy units so that Boltzmann's constant is equal to one. Comparing with Eq. (4), we see that our problem is equivalent to a particle with γ = 1 in an effective potential Veff(n) such that V'eff(n) = g(n) - f(n), at an effective temperature Teff(n) = [f(n) + g(n)]/2. 
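This correspondence can be made concrete with a direct Euler-Maruyama integration of Eqs. (4, 5); the essential feature is the state-dependent noise amplitude sqrt(f + g). The rates f and g below are hypothetical illustrative choices (cooperative synthesis, linear degradation), and the Ito discretization is a modeling choice one must make once the effective temperature varies with n.

```python
import math
import random

# Hypothetical bistable rates for illustration (not from the text).
def f(n):
    return 1.0 + 30.0 * n**4 / (15.0**4 + n**4)

def g(n):
    return max(n, 0.0)

def langevin_run(n0, t_max, dt=1e-3, seed=0):
    # Euler-Maruyama integration of dn/dt = f - g + xi(t), with the
    # state-dependent noise strength <xi(t) xi(t')> = [f(n) + g(n)] delta(t-t')
    # of Eq. (5), interpreted in the Ito sense.
    rng = random.Random(seed)
    n = float(n0)
    for _ in range(int(t_max / dt)):
        drift = f(n) - g(n)
        sigma = math.sqrt(f(n) + g(n))
        n += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        n = max(n, 0.0)   # molecule numbers cannot go negative
    return n

final_n = langevin_run(n0=29.0, t_max=5.0)
```

Because the noise strength f + g differs between the two stable states, a constant-amplitude Langevin term would not reproduce the master equation; this is exactly the caveat discussed in footnote 1.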
\n\nIf the temperature were uniform then the equilibrium distribution of n would be Peq(n) ∝ exp[-Veff(n)/Teff]. With nonuniform temperature the result is (up to weakly varying prefactors)\n\nPeq(n) ∝ exp[-U(n)],   (7)\n\nU(n) = ∫_0^n dy V'eff(y)/Teff(y).   (8)\n\nOne way to identify the Boltzmann factor for spontaneous switching is then to compute the relative equilibrium occupancy of the stable states (n0 and n1) and the unstable \"transition state\" at n*. The result is that the effective activation energy for transitions from a stable state at n = n0 to the stable state at n = n1 > n0 is\n\nF‡(n0 → n1) = 2kBT ∫_{n0}^{n*} dn [g(n) - f(n)]/[g(n) + f(n)],   (9)\n\nwhere n* is the unstable point, and similarly for the reverse transition,\n\nF‡(n1 → n0) = 2kBT ∫_{n*}^{n1} dn [f(n) - g(n)]/[g(n) + f(n)].   (10)\n\n¹In a review written for a biological audience, McAdams and Arkin [11] state that Langevin methods are unsound and can yield invalid predictions precisely for the case of bistable reaction systems which interests us here; this is part of their argument for the necessity of stochastic simulation methods as opposed to analytic approaches. Their reference for the failure of Langevin methods [12], however, seems to consider only Langevin terms with constant spectral density, thus ignoring (in the present language) the spatial variations of effective temperature. For the present problem this would mean replacing the noise correlation function [f(n) + g(n)]δ(t - t') in Eq. (5) by Qδ(t - t'), where Q is a constant. This indeed is wrong, and is not equivalent to the master equation. On the other hand, if the arguments of Refs. [11, 12] were generally correct, they would imply that Langevin methods could not be used for the description of Brownian motion with a spatially varying temperature, and this would be quite a surprise.\n\nAn alternative approach is to note that the distribution of trajectories n(t) includes locally optimal paths that carry the system from each stable point up to the transition state; the effective activation free energy can then be written as an integral along these optimal paths. The use of optimal path ideas in chemical kinetics has a long history, going back at least to Onsager. A discussion in the spirit of the present one is Ref. [13]. For equations of the general form\n\ndn/dt = -V'eff(n) + ξ(t),   (11)\n\nwith ⟨ξ(t)ξ(t')⟩ = 2Teff δ(t - t'), the probability distribution for trajectories P[n(t)] can be written as [10]\n\nP[n(t)] ∝ exp(-S[n(t)]),   (12)\n\nS[n(t)] = ∫ dt [ṅ(t) + V'eff(n(t))]²/[4Teff(n(t))] - (1/2) ∫ dt V''eff(n(t)).   (13)\n\nIf the temperature Teff is small, then the trajectories that minimize the action should be determined primarily by minimizing the first term in Eq. (13), which is ∼ 1/Teff. Identifying the effective potential and temperature as above, the relevant term is\n\n(1/2) ∫ dt [ṅ - f(n) + g(n)]²/[f(n) + g(n)] = (1/2) ∫ dt ṅ²/[f(n) + g(n)] + (1/2) ∫ dt [f(n) - g(n)]²/[f(n) + g(n)] - ∫ dt ṅ [f(n) - g(n)]/[f(n) + g(n)].   (14)\n\nWe are searching for trajectories which take n(t) from a stable point n0 where f(n0) = g(n0) through the unstable point n* where f and g are again equal but the derivative of their difference (the curvature of the potential) has changed sign. For a discussion of the analogous quantum mechanical problem of tunneling in a double well, see Ref. [14]. First we note that along any trajectory from n0 to n* we can simplify the third term in Eq. (14):\n\n∫ dt ṅ [f(n) - g(n)]/[f(n) + g(n)] = ∫_{n0}^{n*} dn [f(n) - g(n)]/[f(n) + g(n)].   (15)\n\nThis term thus depends on the endpoints of the trajectory and not on the path, and therefore cannot contribute to the structure of the optimal path. In the analogy to mechanics, the first two terms are equivalent to the (Euclidean) action for a particle with position dependent mass in a potential; this means that along extremal trajectories there is a conserved energy\n\nE = (1/2) ṅ²/[f(n) + g(n)] - (1/2) [f(n) - g(n)]²/[f(n) + g(n)].   (16)\n\nAt the endpoints of the trajectory we have ṅ = 0 and f(n) = g(n), and so we are looking for zero energy trajectories, along which\n\nṅ(t) = ±[f(n(t)) - g(n(t))].   (17)\n\nSubstituting back into Eq. (14), and being careful about the signs, we find once again Eqs. (9, 10).\n\nBoth the 'transition state' and the optimal path method involve approximations, but if the noise is not too large the approximations are good and the results of the two methods agree. Yet another approach is to solve the master equation (2) directly, and again one gets the same answer for the switching rate when the noise is small, as expected since the different approaches are all equivalent if we make consistent approximations. It is much more work to find the prefactors of the rates, but we are concerned here with orders of magnitude, and hence the prefactors aren't so important.\n\n4 Interpretation\n\nThe crucial thing to notice in this calculation is that the integrands in Eqs. (9, 10) are bounded by one, so the activation energy (in units of the thermal energy kBT) is bounded by twice the change in the number of molecules. Translating back to the spontaneous switching rates, the result is that the noise driven switching time is longer than the relaxation time after switching by a factor that is bounded,\n\n(spontaneous switching time)/(relaxation time) < exp(Δn),   (18)\n\nwhere Δn is the change in the number of molecules required to go from one stable 'switched' state to the other. Imagine that we have a reaction scheme in which the difference between the two stable states corresponds to roughly 25 molecules. Then it is possible to have a Boltzmann factor of up to exp(25) ~ 10^10. Usually we think of this as a limit to stability: with 25 molecules we can have a Boltzmann factor of no more than ~10^10. But here I want to emphasize the positive statement that there exist kinetic schemes in which just 25 molecules would be sufficient to have this level of stability. This corresponds to years per millisecond: with twenty five molecules, a biochemical switch that can flip in milliseconds can be stable for years. Real chemical reaction schemes will not saturate this bound, but certainly such stability is possible with roughly 100 molecules. The genetic switch in λ phage operates with roughly 100 copies of the repressor molecules, and even in this simple system there is extreme stability: the genetic switch is flipped spontaneously only once in 10^5 generations of the host bacterium [2]. Kinetic schemes with greater cooperativity get closer to the bound, achieving greater stability for the same number of molecules.\n\nIn electronics, the construction of digital elements provides insulation against fluctuations on a microscopic scale and allows a separation between the logical and physical design of a large system. We see that, once a cell has access to several tens of molecules, it is possible to construct 'digital' switch elements with dynamics that are no longer significantly affected by microscopic fluctuations. Furthermore, weak interactions of these molecules with other cellular components cannot change the basic 'states' of the switch, although these interactions can couple state changes to other events. 
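The activation integral of Eq. (9), and the bound behind Eq. (18), can be evaluated numerically for a concrete scheme. The rates below are a hypothetical cooperative-synthesis/linear-degradation pair with stable points near n0 ≈ 1 and n1 ≈ 29 and an unstable point near n* ≈ 14; they are an illustration only, not a scheme from the text.

```python
# Hypothetical bistable rates (illustration only).
def f(n):
    return 1.0 + 30.0 * n**4 / (15.0**4 + n**4)

def g(n):
    return float(n)

def activation(a, b, steps=100_000):
    # F / kBT = 2 * integral_a^b dn [g(n) - f(n)] / [g(n) + f(n)]  (Eq. 9),
    # evaluated with the trapezoidal rule.
    h = (b - a) / steps
    s = 0.5 * ((g(a) - f(a)) / (g(a) + f(a)) + (g(b) - f(b)) / (g(b) + f(b)))
    for i in range(1, steps):
        n = a + i * h
        s += (g(n) - f(n)) / (g(n) + f(n))
    return 2.0 * h * s

n0, nstar = 1.0, 14.05   # stable and unstable points of dn/dt = f - g
barrier = activation(n0, nstar)
# The integrand is bounded by one, so barrier < 2 * (nstar - n0); the ratio
# of spontaneous switching time to relaxation time is then ~ exp(barrier).
```

For this particular scheme the integrand stays well below one over much of the range, so the barrier falls short of the bound, illustrating the remark that real schemes will not saturate it.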
\n\nThe importance of this 'digitization' on the scale of 10-100 molecules is illustrated by different models for pattern formation in development. In the classical model due to Turing, patterns are expressed by spatial variations in the concentration of different molecules, and patterns arise because uniform concentrations are rendered unstable through the combination of nonlinearities in the kinetics with the different diffusion constants of different substances. In this picture, the spatial structure of the pattern is linked directly to physical properties of the molecules. An alternative picture is that each spatial location is labelled by a set of discrete possible states, and patterns evolve out of the 'automaton' rules by which each location changes state in relation to the neighboring states. In this picture states and rules are more abstract, and the dynamics of pattern formation is really at a different level of description from the molecular dynamics of chemical reactions and diffusion. Reliable implementations of automaton rules apparently are accessible as soon as the relevant chemical reactions involve a few dozen molecules.\n\nBiochemical switches have been reconstituted in vitro, but I am not aware of any attempts to verify that stable switching is possible with small numbers of molecules. It would be most interesting to study model systems in which one could confine and monitor sufficiently few molecules that it becomes possible to observe spontaneous switching, that is, the breakdown of stability. Although genetic switches have certain advantages, even the simplest systems would require the full enzymatic apparatus for gene expression (but see Ref. [16] for recent progress on controllable in vitro expression systems).² Kinase switches are much simpler, since they can be constructed from just a few proteins and can be triggered by calcium; caged calcium allows for an optical pulse to serve as input. 
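The experimental estimates that follow rest on converting concentration to copy number, N = c · NA · V. A one-line check, taking 100 nM as an illustrative 'reasonable' protein concentration (the specific value is an assumption for this sketch):

```python
# Copy number N = c * N_A * V for a protein at concentration c in volume V.
N_A = 6.022e23   # Avogadro's number, molecules per mole
c = 100e-9       # assumed concentration: 100 nM, in mol/L
V = 1e-15        # one cubic micron expressed in liters
N = c * N_A * V  # roughly 60 molecules in a (1 micron)^3 volume
```

At 10 nM the same volume holds only about 6 molecules, so concentrations in this range bracket the 10-100 molecule regime discussed below.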
\n\nAt reasonable protein concentrations, 10-100 molecules are found in a volume of roughly 1 (μm)³. Thus it should be possible to fabricate an array of 'cells' with linear dimensions ranging from 100 nm to 10 μm, such that solutions of kinase and accessory proteins would switch stably in the larger cells but exhibit instability and spontaneous switching in the smaller cells. The state of the switch could be read out by including marker proteins that would serve as substrates of the kinase but have, for example, fluorescence lines that are shifted by phosphorylation, or by having fluorescent probes on the kinase itself; transitions of single enzyme molecules should be observable [15].\n\nA related idea would be to construct vesicles containing ligand-gated ion channels which can conduct calcium, and then have inside the vesicle enzymes for synthesis and degradation of the ligand which are calcium sensitive. The cGMP channels of rod photoreceptors are an example, and in rods the cyclase synthesizing cGMP is calcium sensitive, but the sign is wrong to make a switch [17]; presumably this could be solved by appropriate mixing and matching of protein components from different cells. In such a vesicle the different stable states would be distinguished by different levels of internal calcium (as with adaptation states in the rod), and these could be read out optically using calcium indicators; caged calcium would again provide an optical input to flip the switch. Amusingly, a close packed array of such vesicles with ~100 nm dimension would provide an optically addressable and writable memory with storage density comparable to current RAM, albeit with much slower switching.\n\n²Note also that reactions involving polymer synthesis (mRNA from DNA or protein from mRNA) are not 'elementary' reactions in the sense described by Eq. (2). Synthesis of a single mRNA molecule involves thousands of steps, each of which occurs (conditionally) at constant probability per unit time, and so the noise in the overall synthesis reaction is very different. If the synthesis enzymes are highly processive, so that the polymerization apparatus incorporates many monomers into the polymer before 'backing up' or falling off the template, then synthesis itself involves a delay but relatively little noise; the dominant source of noise becomes the assembly and disassembly of the polymerization complex. Thus there is some subtlety in trying to relate a simple model to the complex sequence of reactions involved in gene expression. On the other hand a detailed simulation is problematic, since there are so many different elementary steps with unknown rates. This combination of circumstances would make experiments on a minimal, in vitro genetic switch especially interesting.\n\nIn summary, it should be possible to build stable biochemical switches from a few tens of molecules, and it seems likely that nature makes use of these. To test our understanding of stability we have to construct systems which cross the threshold for observable instabilities, and this seems accessible experimentally in several systems.\n\nAcknowledgments\n\nThanks to M. Dykman, J. J. Hopfield, and A. J. Libchaber for helpful discussions.\n\nReferences\n\n1. J. M. W. Slack, From Egg to Embryo: Determinative Events in Early Development (Cambridge University Press, Cambridge, 1983); P. A. Lawrence, The Making of a Fly: The Genetics of Animal Design (Blackwell Science, Oxford, 1992).\n\n2. M. 
Ptashne, A Genetic Switch: Phage λ and Higher Organisms, 2nd Edition (Blackwell, Cambridge MA, 1992); A. D. Johnson, A. R. Poteete, G. Lauer, R. T. Sauer, G. K. Ackers, and M. Ptashne, Nature 294, 217-223 (1981).\n\n3. A. W. Murray, Nature 359, 599-604 (1992).\n\n4. S. G. Miller and M. B. Kennedy, Cell 44, 861-870 (1986); M. B. Kennedy, Ann. Rev. Biochem. 63, 571-600 (1994).\n\n5. E. Schrödinger, What is Life? (Cambridge University Press, Cambridge, 1944).\n\n6. H. H. McAdams and A. Arkin, Ann. Rev. Biophys. Biomol. Struct. 27, 199-224 (1998); U. S. Bhalla and R. Iyengar, Science 283, 381-387 (1999).\n\n7. J. E. Lisman, Proc. Nat. Acad. Sci. (USA) 82, 3055-3057 (1985).\n\n8. J. E. Lisman and M. A. Goldring, Proc. Nat. Acad. Sci. (USA) 85, 5320-5324 (1988).\n\n9. N. Barkai and S. Leibler, Nature 387, 913-917 (1997).\n\n10. J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Clarendon Press, Oxford, 1989).\n\n11. H. H. McAdams and A. Arkin, Trends Genet. 15, 65-69 (1999).\n\n12. F. Baras, M. Malek Mansour and J. E. Pearson, J. Chem. Phys. 105, 8257-8261 (1996).\n\n13. M. I. Dykman, E. Mori, J. Ross, and P. M. Hunt, J. Chem. Phys. 100, 5735-5750 (1994).\n\n14. S. Coleman, Aspects of Symmetry (Cambridge University Press, Cambridge, 1975).\n\n15. H. P. Lu, L. Xun, and X. S. Xie, Science 282, 1877-1882 (1998); T. Ha, A. Y. Ting, J. Liang, W. B. Caldwell, A. A. Deniz, D. S. Chemla, P. G. Schultz, and S. Weiss, Proc. Nat. Acad. Sci. (USA) 96, 893-898 (1999).\n\n16. G. V. Shivashankar, S. Liu and A. J. Libchaber, Appl. Phys. Lett. 76, 3638-3640 (2000).\n\n17. F. Rieke and D. A. Baylor, Revs. Mod. Phys. 70, 1027-1036 (1998).", "award": [], "sourceid": 1847, "authors": [{"given_name": "William", "family_name": "Bialek", "institution": null}]}