{"title": "Spiking and saturating dendrites differentially expand single neuron computation capacity", "book": "Advances in Neural Information Processing Systems", "page_first": 1070, "page_last": 1078, "abstract": "The integration of excitatory inputs in dendrites is non-linear: multiple excitatory inputs can produce a local depolarization departing from the arithmetic sum of each input's response taken separately. If this depolarization is bigger than the arithmetic sum, the dendrite is spiking; if the depolarization is smaller, the dendrite is saturating. Decomposing a dendritic tree into independent dendritic spiking units greatly extends its computational capacity, as the neuron then maps onto a two layer neural network, enabling it to compute linearly non-separable Boolean functions (lnBFs). How can these lnBFs be implemented by dendritic architectures in practise? And can saturating dendrites equally expand computational capacity? To adress these questions we use a binary neuron model and Boolean algebra. First, we confirm that spiking dendrites enable a neuron to compute lnBFs using an architecture based on the disjunctive normal form (DNF). Second, we prove that saturating dendrites as well as spiking dendrites also enable a neuron to compute lnBFs using an architecture based on the conjunctive normal form (CNF). Contrary to the DNF-based architecture, a CNF-based architecture leads to a dendritic unit tuning that does not imply the neuron tuning, as has been observed experimentally. Third, we show that one cannot use a DNF-based architecture with saturating dendrites. Consequently, we show that an important family of lnBFs implemented with a CNF-architecture can require an exponential number of saturating dendritic units, whereas the same family implemented with either a DNF-architecture or a CNF-architecture always require a linear number of spiking dendritic unit. 
This minimization could explain why a neuron spends energetic resources to make its dendrites spike.", "full_text": "Spiking and saturating dendrites differentially expand single neuron computation capacity\n\nRomain Caz\u00e9\nINSERM U960, Paris Diderot, Paris 7, ENS\n29 rue d\u2019Ulm, 75005 Paris\nromain.caze@ens.fr\n\nMark Humphries\nINSERM U960; University of Manchester\n29 rue d\u2019Ulm, 75005 Paris; UK\nmark.humphries@manchester.ac.uk\n\nBoris Gutkin\nINSERM U960, CNRS, ENS\n29 rue d\u2019Ulm, 75005 Paris\nboris.gutkin@ens.fr\n\nAbstract\n\nThe integration of excitatory inputs in dendrites is non-linear: multiple excitatory inputs can produce a local depolarization departing from the arithmetic sum of each input\u2019s response taken separately. If this depolarization is bigger than the arithmetic sum, the dendrite is spiking; if the depolarization is smaller, the dendrite is saturating. Decomposing a dendritic tree into independent dendritic spiking units greatly extends its computational capacity, as the neuron then maps onto a two layer neural network, enabling it to compute linearly non-separable Boolean functions (lnBFs). How can these lnBFs be implemented by dendritic architectures in practice? And can saturating dendrites equally expand computational capacity? To address these questions we use a binary neuron model and Boolean algebra. First, we confirm that spiking dendrites enable a neuron to compute lnBFs using an architecture based on the disjunctive normal form (DNF). Second, we prove that saturating dendrites as well as spiking dendrites enable a neuron to compute lnBFs using an architecture based on the conjunctive normal form (CNF). Contrary to a DNF-based architecture, in a CNF-based architecture dendritic unit tunings do not imply the neuron tuning, as has been observed experimentally. Third, we show that one cannot use a DNF-based architecture with saturating dendrites. 
Consequently, we show that an important family of lnBFs implemented with a CNF-architecture can require an exponential number of saturating dendritic units, whereas the same family implemented with either a DNF-architecture or a CNF-architecture always requires a linear number of spiking dendritic units. This minimization could explain why a neuron spends energetic resources to make its dendrites spike.\n\n1 Introduction\n\nRecent progress in voltage clamp techniques has enabled the recording of local membrane voltage in dendritic branches, and this has greatly changed our view of the potential for single neuron computation. Experiments have shown that when the local dendritic membrane potential reaches a given threshold a dendritic spike can be elicited [4, 13]. Based on this type of local dendritic non-linearity, it has been suggested that a CA1 hippocampal pyramidal neuron comprises multiple independent non-linear spiking units, summating at the soma, and is thus equivalent to a two layer artificial neural network [12]. This idea is attractive, because this type of feed-forward network can implement any Boolean function, in particular linearly non-separable Boolean functions (lnBFs), and thus radically extends the computational power of a single neuron. By contrast, a seminal neuron model, the McCulloch & Pitts unit [10], is restricted to linearly separable Boolean functions.\n\nHowever attractive this idea, it requires additional investigation. Indeed, spiking dendritic units may enable the computation of lnBFs using an architecture, suggested in [9], where the dendritic tuning implies the neuron tuning (see also Proposition 1). 
This relation between dendritic and neuron tuning has not been confirmed experimentally; on the contrary, it has been shown in vivo that dendritic tuning does not imply the neuron tuning [6]: calcium imaging in vivo has shown that the local calcium signal in dendrites can maximally increase for visual inputs that do not trigger somatic spiking. We resolve this first issue here by showing how one can implement lnBFs with spiking dendritic units whose tunings do not formally imply the somatic tuning.\n\nMoreover, the idea of a neuron implementing a two-layer network is based on spiking dendrites. Dendritic non-linearities have a variety of shapes, and many neuron types may not have the capacity to generate dendritic spikes. By contrast, all dendrites can saturate [1, 16, 2]. For instance, glutamate uncaging on cerebellar stellate cell dendrites and simultaneous somatic voltage recording of these interneurons show that multiple excitatory inputs on the same dendrite result in a somatic depolarization smaller than the arithmetic sum of the quantal depolarizations [1]. This type of non-linearity was predicted by Rall\u2019s modeling work [7], which explains saturation by an increase in membrane conductance and a decrease in driving force. It is unknown whether local dendritic saturation can also enhance the general computational capacity of a single neuron in the same way as local dendritic spiking \u2013 but, if so, this would make plausible the implementation of lnBFs in potentially any type of neuron. In the present study we show that saturating dendritic units also enable the computation of lnBFs (see Proposition 2).\n\nOne can wonder why some dendrites support metabolically-expensive spiking if dendritic saturation is sufficient to compute all Boolean functions. 
We tackle this issue in the second part of our study. We show that a family of positive lnBFs may require an exponentially growing number of saturating dendritic units when the number of input variables grows linearly, whereas the same family of Boolean functions requires a linearly growing number of spiking dendritic units. Consequently dendritic spikes may minimize the number of units necessary to implement all Boolean functions. Thus, as the number of independent units \u2013 spiking or saturating \u2013 in a dendrite remains an open question [5], though it is potentially small [14], it may turn out that certain Boolean functions are only implementable using spiking dendrites.\n\n2 Definitions\n\n2.1 The binary two stage neuron\n\nWe introduce here a neuron model analogous to [12]. Our model is a binary two stage neuron, where X is a binary input vector of length n and y is a binary variable modelling the neuron output. First, inputs sum locally within each dendritic unit j given a local weight vector Wj; then they pass through a local transfer function Fj accounting for the dendritic non-linear behavior. Second, the outputs of the d dendritic subunits sum at the soma and pass through the somatic transfer function F0. F0 is a spiking transfer function whereas the Fj are either spiking or saturating transfer functions; these functions are described in the next section and are displayed in Figure 1A. Formally, the output y is computed with the following equation:\n\ny = F0( \u2211_{j=1}^{d} Fj(Wj.X) )\n\n2.2 Sub-linear and supra-linear transfer functions\n\nA transfer function F takes as input a local weighted linear sum x and outputs F(x); this output depends on the type of transfer function, spiking or saturating, and on a single positive parameter \u0398, the threshold of the transfer function. The two types of transfer functions are defined as follows:\n\nDefinition 1. 
Spiking transfer function\n\nFspk(x) = 1 if x \u2265 \u0398, and 0 otherwise\n\nDefinition 2. Saturating transfer function\n\nFsat(x) = 1 if x \u2265 \u0398, and x/\u0398 otherwise\n\nTable 1: Two examples of positive Boolean functions of 4 variables\n\nx1:                0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1\nx2:                0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1\nx3:                0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1\nx4:                0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1\ng(x1, x2, x3, x4): 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 1\nh(x1, x2, x3, x4): 0 0 0 0 0 1 1 1 0 1 1 1 0 1 1 1\n\nThe difference between a spiking and a saturating transfer function is that Fspk(x) = 0 whereas Fsat(x) = x/\u0398 if x is below \u0398. To formally characterize this difference we define here sub-linearity and supra-linearity of a transfer function F on a given interval I. These definitions are similar to the well-known notions of concavity and convexity:\n\nDefinition 3. F is supra-linear on I if and only if F(x1 + x2) > F(x1) + F(x2) for at least one (x1, x2) \u2208 I2. F is sub-linear on I if and only if F(x1 + x2) < F(x1) + F(x2) for at least one (x1, x2) \u2208 I2. F is strictly sub-linear (resp. supra-linear) on I if it is sub-linear (resp. supra-linear) but not supra-linear (resp. sub-linear) on I.\n\nNote that these definitions also work when using n-tuples instead of couples on the interval (useful in Lemma 3).\n\nNote that whenever \u0398 > 0, Fspk is both supra- and sub-linear on I = [0, +\u221e[ whereas Fsat is not supra-linear on I, because Fsat(x1 + x2) \u2264 Fsat(x1) + Fsat(x2) for all (x1, x2) \u2208 I2 by definition of Fsat. Moreover, Fsat is sub-linear on I because Fsat(a + b) = 1 and Fsat(a) + Fsat(b) = 2 for at least one (a, b) \u2208 I2 such that a \u2265 \u0398 and b \u2265 \u0398. 
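These linearity properties can be checked numerically. The sketch below is our illustration, not part of the original paper; \u0398 = 2 is an arbitrary choice, since any \u0398 > 0 gives the same conclusions.

```python
# Numerical check of Definitions 1-3 (our sketch; Theta = 2 is an arbitrary choice).
THETA = 2.0

def f_spk(x):
    # Spiking transfer function (Definition 1): all-or-none threshold.
    return 1.0 if x >= THETA else 0.0

def f_sat(x):
    # Saturating transfer function (Definition 2): linear below threshold, capped at 1.
    return 1.0 if x >= THETA else x / THETA

# Supra-linearity of F_spk: two sub-threshold inputs can sum to a spike.
assert f_spk(1) + f_spk(1) == 0 and f_spk(1 + 1) == 1

# Sub-linearity of both: two supra-threshold inputs yield a single saturated response.
assert f_spk(2 + 2) == 1 and f_spk(2) + f_spk(2) == 2
assert f_sat(2 + 2) == 1 and f_sat(2) + f_sat(2) == 2

# F_sat is never supra-linear: F_sat(a + b) <= F_sat(a) + F_sat(b) on a sample grid.
grid = [0.25 * k for k in range(20)]
assert all(f_sat(a + b) <= f_sat(a) + f_sat(b) for a in grid for b in grid)
```

The grid check only samples the interval, but it reflects the general argument: F_sat is concave, piecewise linear and zero at the origin, hence subadditive everywhere on [0, +\u221e[.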
All in all, Fsat is strictly sub-linear on I. Similarly to Fsat, Fspk is sub-linear on I because Fspk(a + b) = 1 and Fspk(a) + Fspk(b) = 2 for at least one (a, b) \u2208 I2 such that a \u2265 \u0398 and b \u2265 \u0398. Moreover, Fspk is supra-linear because Fspk(c + d) = 1 and Fspk(c) + Fspk(d) = 0 for at least one (c, d) such that c < \u0398 and d < \u0398 but c + d \u2265 \u0398. All in all, Fspk is both sub-linear and supra-linear on I, whereas Fsat is strictly sub-linear on the same interval.\n\n2.3 Boolean Algebra\n\nIn order to study the range of possible input-output mappings implementable by a two stage neuron we use Boolean functions, which can efficiently and formally describe all binary input-output mappings. Let us recall the definition of this extensively studied mathematical object [3, 17]:\n\nDefinition 4. A Boolean function of n variables is a function on {0, 1}n into {0, 1}, where n is a positive integer.\n\nIn Table 1 the truth table for two Boolean functions g and h is presented. These Boolean functions are fully and uniquely defined by their truth table. Both g and h are positive lnBFs (see chapter 9 of [3] for an extensive study of linear separability); because of its importance we recall the definition of positive Boolean functions:\n\nDefinition 5. Let f be a Boolean function on {0, 1}n. f is positive if and only if f(X) \u2265 f(Z) \u2200(X, Z) \u2208 {0, 1}n such that X \u2265 Z (meaning that \u2200i : xi \u2265 zi).\n\nWe also recall the notion of implication, as it is important to observe that a dendritic input-output function (or tuning) may or may not imply the neuron\u2019s input-output function:\n\nDefinition 6. 
Let f and g be two Boolean functions.\n\nf implies g \u21d0\u21d2 ( f(X) = 1 =\u21d2 g(X) = 1 ) \u2200X \u2208 {0, 1}n\n\nAs will become clear, we can treat each dendritic unit as computing its own Boolean function on its inputs: for a unit\u2019s output to imply the whole neuron\u2019s output then means that if a unit outputs a 1, then the neuron outputs a 1.\n\nIn order to describe positive Boolean functions, it is useful to decompose them into positive terms and positive clauses:\n\nDefinition 7. Let X(j) be a tuple of k < n positive integers referencing the different variables present in a term or a clause. A positive term j is a conjunction of variables written as Tj(X) = \u22c0_{i \u2208 X(j)} xi. A positive clause j is a disjunction of variables written as Cj(X) = \u22c1_{i \u2208 X(j)} xi. A term (resp. clause) is prime if it is not implied by (resp. does not imply) any other term (resp. clause) in a disjunction (resp. conjunction) of multiple terms (resp. clauses).\n\nThese terms and clauses can then define the Disjunctive or Conjunctive Normal Form (DNF or CNF) expression of a Boolean function f, particularly:\n\nDefinition 8. A complete positive DNF is a disjunction of prime positive terms T:\n\nDNF(f) := \u22c1_{Tj \u2208 T} ( \u22c0_{i \u2208 X(j)} xi )\n\nDefinition 9. A complete positive CNF is a conjunction of prime positive clauses C:\n\nCNF(f) := \u22c0_{Cj \u2208 C} ( \u22c1_{i \u2208 X(j)} xi )\n\nIt has been shown that all positive Boolean functions can be expressed as a positive complete DNF ([3] Theorem 1.24); similarly, all positive Boolean functions can be expressed as a positive complete CNF. These complete positive DNFs or CNFs are the shortest possible DNF or CNF descriptions of positive Boolean functions. To clarify all these definitions let us introduce a series of examples built around g and h.\n\nExample 1. 
Let us take X(1) = (1, 2) and X(2) = (3, 4). These tuples define two positive terms T1(X) = x1 \u2227 x2, where T1(X) = 1 only when x1 = 1 and x2 = 1 and T1(X) = 0 otherwise; similarly T2(X) = x3 \u2227 x4, where T2(X) = 1 only when x3 = 1 and x4 = 1. These tuples can also define two positive clauses C1(X) = x1 \u2228 x2, where C1(X) = 1 as soon as x1 = 1 or x2 = 1, and similarly C2(X) = x3 \u2228 x4, where C2(X) = 1 as soon as x3 = 1 or x4 = 1. In the disjunction of terms T1 \u2228 T2 the terms are prime because T1(X) = 1 is not implied by T2(X) = 1 for all X (and vice-versa). Similarly, in the conjunction of clauses C1 \u2227 C2 the clauses are prime because C1(X) = 1 does not imply that C2(X) = 1 for all X (and vice-versa). T1 \u2228 T2 is the complete positive DNF expression of g; alternatively, C1 \u2227 C2 is the complete positive CNF expression of h. The truth tables of g and h are displayed in Table 1.\n\n3 Results\n\nWe first prove here that a two stage neuron with a sufficient number of only spiking or only saturating dendritic units can implement all positive Boolean functions, particularly lnBFs like g and h, whereas a classic McCulloch & Pitts unit is restricted to linearly separable Boolean functions. Moreover, we present two construction architectures for building a two stage neuron implementing a positive Boolean function based on its complete DNF or CNF expression. Finally, we show that the DNF-based architecture is only possible with spiking dendritic units and not with saturating dendritic units.\n\nFigure 1: Modeling dendritic spikes, dendritic saturations, and their impact on computation capacity. (A) Two types of transfer functions for a unit j, with height normalized to 1 and a variable threshold \u0398j. 
The input is the local weighted sum Wj.X and the output is yj. (A1) A spiking transfer function models somatic spikes and dendritic spikes. (A2) A saturating transfer function models dendritic saturations. (B) From left to right: a unit implementing the term T(X) = x1 \u2227 x2, and two units implementing the clause C(X) = x3 \u2228 x4; circles show synaptic weights and squares show the threshold and the type of transfer function (spk: spiking, sat: saturating). (C) Two architectures to implement all positive Boolean functions in a two stage neuron; the d dendritic units correspond to all the terms of a DNF (left) or to all the clauses of a CNF (right), and the somatic unit respectively implements an OR or an AND logic operation.\n\n3.1 Computation of positive Boolean functions using non-linear dendritic units\n\nLemma 1. A two stage neuron with non-negative synaptic weights and increasing transfer functions necessarily implements positive Boolean functions.\n\nProof. Let f be the Boolean function representing the input-output mapping of a two stage neuron, and let X and Z be two binary vectors such that X \u2265 Z. We have, \u2200j \u2208 {1, 2, . . . , d}, non-negative local weights wi,j \u2265 0; thus for a given dendritic unit j and each i we have:\n\nwi,jxi \u2265 wi,jzi.\n\nWe can sum these inequalities over all i, and the Fj are increasing transfer functions, thus:\n\nFj(Wj.X) \u2265 Fj(Wj.Z).\n\nWe can sum the d inequalities corresponding to every dendritic unit, and F0 is an increasing transfer function, thus:\n\nf(X) \u2265 f(Z).\n\nLemma 2. A term (resp. a clause) can be implemented by a unit with a supra-linear (resp. sub-linear) transfer function.\n\nProof. We need to provide the parameter sets of a transfer function implementing a term (resp. a clause) with the constraint that the transfer function is supra-linear (resp. sub-linear). 
Indeed, a supra-linear transfer function (like the spiking transfer function) with the parameter set wi = 1 if i \u2208 X(j) and wi = 0 otherwise, and \u0398 = card(X(j)), implements the term Tj. A sub-linear transfer function (like the saturating transfer function) with the parameter set wi = 1 if i \u2208 X(j) and wi = 0 otherwise, and \u0398 = 1, implements the clause Cj. These implementations are illustrated by examples in Figure 1B.\n\nLemma 3. A term (resp. a clause) cannot be implemented by a unit with a strictly sub-linear (resp. supra-linear) transfer function.\n\nProof. We prove this lemma for a term; the proof is similar for a clause. Let Tj be the term defined by X(j), with card(X(j)) \u2265 2. First, for all input vectors X such that xi = 1 for a single i \u2208 X(j) and xk = 0 for all k \u2260 i, Tj(X) = 0, implying that F(W.X) = F(wixi) = 0. One can sum all these elements to obtain the following equality: \u2211_{i \u2208 X(j)} F(wixi) = 0. Second, for the input vector X such that xi = 1 for all i \u2208 X(j), Tj(X) = 1, implying that F( \u2211_{i \u2208 X(j)} wixi ) = 1. Putting the two pieces together we obtain:\n\nF( \u2211_{i \u2208 X(j)} wixi ) > \u2211_{i \u2208 X(j)} F(wixi)\n\nThis inequality shows that F must be supra-linear on the tuple of points (wixi | i \u2208 X(j)) defining a term; therefore, by Definition 3, F cannot be both strictly sub-linear and implement a term.\n\nUsing these Lemmas we show the possible and impossible implementation architectures of positive Boolean functions in two-layer neuron models using either spiking or saturating dendritic units.\n\nProposition 1. A two stage neuron with non-negative synaptic weights and a sufficient number of dendritic units with spiking transfer functions can implement only and all positive Boolean functions based on their positive complete DNF.\n\nProof. 
A two stage neuron can only compute positive Boolean functions (Lemma 1). All positive Boolean functions can be expressed as a positive complete DNF; because a spiking dendritic unit has a supra-linear transfer function, it can implement all possible terms (Lemma 2). Therefore a two stage neuron model without inhibition can implement only and all positive Boolean functions with as many dendritic units as there are terms in the functions\u2019 positive complete DNF. This architecture is represented in Figure 1C (left).\n\nInformally, this simply means that a dendrite is a pattern detector: if a pattern is present in the input then the dendritic unit elicits a dendritic spike. This architecture has been repeatedly invoked by theoreticians [8] and experimentalists ([9] in supplementary material) to suggest that dendritic spikes increase a neuron\u2019s computational capacity. With this architecture, however, the dendritic transfer function, if it is viewed as a Boolean function, formally implies the neuron\u2019s input-output mapping. This has not been confirmed experimentally yet.\n\nProposition 2. A two stage neuron with non-negative synaptic weights and a sufficient number of dendritic units with spiking or saturating transfer functions can implement only and all positive Boolean functions based on their positive complete CNF.\n\nProof. A two stage neuron can only compute positive Boolean functions (Lemma 1). All positive Boolean functions can be expressed as a positive complete CNF; because a spiking or a saturating dendritic unit has a sub-linear transfer function, both can implement all possible clauses (Lemma 2). Therefore a two stage neuron model without inhibition can implement only and all positive Boolean functions with as many dendritic units as there are clauses in the functions\u2019 positive complete CNF. 
This architecture is represented in Figure 1C (right).\n\nTo our knowledge, this implementation architecture has not yet been proposed in the neuroscience literature. It shows that saturations can increase the computational power of a neuron as much as dendritic spikes. It also shows that another implementation architecture is possible using spiking dendritic units. Using this architecture, the dendritic units\u2019 transfer functions do not imply the somatic output. This independence of dendritic and somatic responses to inputs has been observed in Layer 2/3 neurons [6].\n\nProposition 3. A two stage neuron with non-negative synaptic weights and only dendritic units with saturating transfer functions cannot implement a positive Boolean function based on its complete DNF.\n\nProof. The transfer function of a saturating dendritic unit is strictly sub-linear, therefore this unit cannot implement a term (Lemma 3).\n\nThis result suggests that spiking dendritic units are more flexible than saturating dendritic units; they allow the computation of Boolean functions through either DNF-based or CNF-based architectures (illustrated in Figure 2), whereas saturating units are restricted to CNF-based architectures.\n\n3.2 Implementation of a family of positive lnBFs using either spiking or saturating dendrites\n\nFigure 2: Implementation of two linearly non-separable Boolean functions using CNF-based or DNF-based architectures. Four parameter sets of two-stage neuron models: circles show synaptic weights and squares show the threshold and the unit type (spk: spiking, sat: saturating). These parameter sets implement (A1/A2) g or (B1/B2) h, the two lnBFs depicted in Table 1, using: (A1/B1) a DNF-based architecture and spiking dendritic units only; (A2/B2) a CNF-based architecture and saturating dendritic units only.\n\nThe Boolean functions g and h form a family of Boolean functions we call feature binding problems in reference to [8]. 
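The two architectures can be checked concretely. The sketch below is our illustration (not from the paper); the weights and thresholds follow Lemma 2, and the somatic thresholds implement the OR of the terms (DNF) and the AND of the clauses (CNF):

```python
from itertools import product

def f_spk(x, theta):
    # Spiking transfer function: all-or-none threshold.
    return 1.0 if x >= theta else 0.0

def f_sat(x, theta):
    # Saturating transfer function: linear below threshold, capped at 1.
    return 1.0 if x >= theta else x / theta

def two_stage(x, units, theta0):
    """Binary two stage neuron: y = F0(sum_j Fj(Wj.X)) with a spiking soma."""
    soma_input = sum(f(sum(w * xi for w, xi in zip(wj, x)), theta)
                     for wj, theta, f in units)
    return int(f_spk(soma_input, theta0))

# g = (x1 AND x2) OR (x3 AND x4): DNF-based, one spiking unit per term;
# a somatic threshold of 1 implements the OR of the terms.
g_units = [((1, 1, 0, 0), 2, f_spk), ((0, 0, 1, 1), 2, f_spk)]

# h = (x1 OR x2) AND (x3 OR x4): CNF-based, one saturating unit per clause;
# a somatic threshold of 2 (the number of clauses) implements the AND.
h_units = [((1, 1, 0, 0), 1, f_sat), ((0, 0, 1, 1), 1, f_sat)]

# Verify both neurons against the truth tables of g and h (Table 1).
for x in product((0, 1), repeat=4):
    x1, x2, x3, x4 = x
    assert two_stage(x, g_units, 1) == ((x1 and x2) or (x3 and x4))
    assert two_stage(x, h_units, 2) == ((x1 or x2) and (x3 or x4))
```

With binary inputs and unit weights, each saturating clause unit outputs exactly 0 or 1, so the soma can count satisfied clauses; this is what makes the CNF-based architecture work without dendritic spikes.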
In this section we show how this family can be implemented using either a DNF-based or a CNF-based architecture. For some Boolean functions, the DNF and CNF grow at different rates as a function of the number of variables [3, 11]. This is the case when g and h are defined for n input variables.\n\nExample 2. Let\u2019s define g by the complete positive DNF expression \u03c6:\n\n\u03c6(g(x1, z1, . . . , xn, zn)) := x1z1 \u2228 x2z2 \u2228 \u00b7\u00b7\u00b7 \u2228 xnzn\n\nThe same function g has a unique complete positive CNF expression; let\u2019s call it \u03c8. The clauses of \u03c8 are exactly those elementary disjunctions of n variables that involve one variable out of each of the pairs {x1, z1}, {x2, z2}, . . . , {xn, zn}. Thus \u03c8 has 2^n clauses.\n\nExample 3. Let\u2019s define h by the complete positive CNF expression \u03c8:\n\n\u03c8(h(x1, z1, . . . , xn, zn)) := (x1 \u2228 z1)(x2 \u2228 z2) . . . (xn \u2228 zn)\n\nThe same function h has a unique complete positive DNF expression; let\u2019s call it \u03c6. The terms of \u03c6 are exactly those elementary conjunctions of n variables that involve one variable out of each of the pairs {x1, z1}, {x2, z2}, . . . , {xn, zn}. Thus \u03c6 has 2^n terms.\n\nTable 2 shows the number of necessary units for g and h depending on the chosen architecture. From Propositions 1 and 2, it is immediately clear that spiking dendritic units always give access to the minimal possible two-stage neuron implementation.\n\nTable 2: Number of necessary units\n\nBoolean function   # of terms in DNF   # of clauses in CNF\ng                  n                   2^n\nh                  2^n                 n\n\n
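The exponential growth of the CNF of g can be reproduced by brute force. The sketch below (ours, not from the paper) builds the clauses of g_n = x1z1 \u2228 \u00b7\u00b7\u00b7 \u2228 xnzn by picking one variable from each pair, checks their conjunction against g on every input, and counts 2^n clauses:

```python
from itertools import product

def check_cnf_blowup(n):
    """g has n terms in its DNF but 2**n clauses in its complete CNF."""
    # Variables are ordered (x1, z1, ..., xn, zn); each DNF term is a pair (x_i, z_i).
    terms = [(2 * i, 2 * i + 1) for i in range(n)]
    # One clause per way of choosing x_i or z_i from every pair (distributivity
    # of the disjunction of terms over conjunction).
    clauses = list(product(*terms))
    assert len(clauses) == 2 ** n
    # The conjunction of these clauses equals g on every input vector.
    for v in product((0, 1), repeat=2 * n):
        g = any(v[i] and v[j] for i, j in terms)
        cnf = all(any(v[i] for i in clause) for clause in clauses)
        assert g == cnf
    return len(clauses)

assert [check_cnf_blowup(n) for n in (1, 2, 3)] == [2, 4, 8]
```

By the symmetric argument (swapping the roles of \u2227 and \u2228), the same enumeration gives the 2^n terms of the DNF of h.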
A neuron with spiking dendritic units can thus implement g with n units using a DNF-based architecture and h with n units using a CNF-based architecture; but saturating units, restricted to CNF-based architectures, require 2^n units to implement g.\n\n4 Discussion\n\nThe main result of our study is that dendritic saturations can play a computational role that is as important as dendritic spikes: saturating dendritic units enable a neuron to compute lnBFs (as shown in Proposition 2). The same Proposition shows that a neuron can compute lnBFs decomposed according to the CNF using spiking dendritic units; with this architecture, dendritic tuning does not imply the somatic tuning to inputs. Moreover, we demonstrated that an important family of lnBFs formed by g and h can be implemented in a two stage neuron using either spiking or saturating dendritic units. We also showed that lnBFs cannot be implemented in a two stage neuron using a DNF-based architecture with only saturating dendritic units (Proposition 3).\n\nThese results nicely separate the implications of saturating and spiking dendritic units in single neuron computation. On the one hand, spiking dendritic units are a more flexible basis for computation, as they can be employed in two different implementation architectures (Propositions 1 and 2) where the dendritic tunings \u2013 the dendritic unit transfer functions \u2013 may or may not imply the tuning of the whole neuron. The latter may explain why dendrites can have a tuning different from the whole neuron, as has been observed in Layer 2/3 pyramidal cells of the visual cortex [6]. 
On the other hand, saturating dendritic units can enhance single neuron computation through implementing all positive Boolean functions (Proposition 2), while reducing the energetic costs associated with the active ion channels required for dendritic spikes [4, 13].\n\nFor an infinite number of dendritic units, saturating and spiking units lead to the same increase in computation capacity; for a finite number of dendritic units our results suggest that spiking dendritic units could have advantages over saturating dendritic units. In the second part of our study we showed that a family of lnBFs can be described by an expression containing an exponential or a linear number of elements. Namely, the lnBFs defined by g or h can be implemented with a linear number of spiking dendritic units, whereas for g a neuronal implementation using only saturations requires an exponential number of saturating dendritic units. Consequently, spiking dendritic units may allow the minimization of the number of dendritic units necessary to implement this family of Boolean functions.\n\nThe Boolean functions g and h formalize feature binding problems [8], which are important and challenging computations (see [15] for review). Some single neuron solutions to feature binding problems have been proposed in [8], but restricted to DNF-based architectures; our results thus generalize and extend this study by proposing alternative CNF-based solutions. Moreover, we show that this alternative architecture enables the solution of an important family of binding problems with a linear number of spiking dendritic units. Thus we have proposed more efficient solutions to a family of challenging computations.\n\nBecause of their elegance and simplicity stemming from Boolean algebra, we believe our results are applicable to more complex situations. 
They can be extended to continuous transfer functions, which are more biologically plausible; in this case the notions of sub-linearity and supra-linearity are replaced by concavity and convexity. Moreover, all the parameters used here for proofs and examples are integer-valued, but the same proofs and examples are easily extendable to continuous steady-state rate models where parameters are real-valued. In conclusion, our results have a solid formal basis; moreover, they both explain recent experimental findings and suggest a new way to implement Boolean functions using saturating as well as spiking dendritic units.\n\nReferences\n\n[1] T. Abrahamsson, L. Cathala, K. Matsui, R. Shigemoto, and D.A. DiGregorio. Thin Dendrites of Cerebellar Interneurons Confer Sublinear Synaptic Integration and a Gradient of Short-Term Plasticity. Neuron, 73(6):1159\u20131172, March 2012.\n\n[2] S. Cash and R. Yuste. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron, 22(2):383\u2013394, February 1999.\n\n[3] Y. Crama and P.L. Hammer. Boolean Functions: Theory, Algorithms, and Applications (Encyclopedia of Mathematics and its Applications). Cambridge University Press, 2011.\n\n[4] S. Gasparini, M. Migliore, and J.C. Magee. On the initiation and propagation of dendritic spikes in CA1 pyramidal neurons. The Journal of Neuroscience, 24(49):11046\u201311056, December 2004.\n\n[5] M. Hausser and B.W. Mel. Dendrites: bug or feature? Current Opinion in Neurobiology, 13(3):372\u2013383, June 2003.\n\n[6] H. Jia, N.L. Rochefort, X. Chen, and A. Konnerth. Dendritic organization of sensory input to cortical neurons in vivo. Nature, 464(7293):1307\u20131312, 2010.\n\n[7] C. Koch. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, New York, 1999.\n\n[8] R. Legenstein and W. Maass. Branch-Specific Plasticity Enables Self-Organization of Nonlinear Computation in Single Neurons. 
Journal of Neuroscience, 31(30):10787\u201310802, July 2011.\n\n[9] A. Losonczy, J.K. Makara, and J.C. Magee. Compartmentalized dendritic plasticity and input feature storage in neurons. Nature, 452(7186):436\u2013441, March 2008.\n\n[10] W.S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biology, 52(1-2):99\u2013115; discussion 73\u201397, January 1943.\n\n[11] P.B. Miltersen, J. Radhakrishnan, and I. Wegener. On converting CNF to DNF. Theoretical Computer Science, 347:325\u2013335, November 2005.\n\n[12] P. Poirazi, T. Brannon, and B.W. Mel. Pyramidal neuron as two-layer neural network. Neuron, 37(6):989\u2013999, March 2003.\n\n[13] A. Polsky, B.W. Mel, and J. Schiller. Computational subunits in thin dendrites of pyramidal cells. Nature Neuroscience, 7(6):621\u2013627, June 2004.\n\n[14] M.W.H. Remme, M. Lengyel, and B.S. Gutkin. Democracy-independence trade-off in oscillating dendrites and its implications for grid cells. Neuron, 66(3):429\u2013437, May 2010.\n\n[15] A.L. Roskies. The Binding Problem. Neuron, 24:7\u20139, 1999.\n\n[16] K. Vervaeke, A. Lorincz, Z. Nusser, and R.A. Silver. Gap Junctions Compensate for Sublinear Dendritic Integration in an Inhibitory Network. Science, 335(6076):1624\u20131628, March 2012.\n\n[17] I. Wegener. The Complexity of Boolean Functions. Wiley-Teubner, 1987.\n", "award": [], "sourceid": 519, "authors": [{"given_name": "Romain", "family_name": "Caz\u00e9", "institution": null}, {"given_name": "Mark", "family_name": "Humphries", "institution": null}, {"given_name": "Boris", "family_name": "Gutkin", "institution": null}]}