{"title": "Reconstructing MEG Sources with Unknown Correlations", "book": "Advances in Neural Information Processing Systems", "page_first": 693, "page_last": 700, "abstract": "", "full_text": "Reconstructing MEG Sources\nwith Unknown Correlations\n\nManeesh Sahani\n\nW. M. Keck Foundation Center\nfor Integrative Neuroscience,\n\nSrikantan S. Nagarajan\n\nBiomagnetic Imaging Laboratory,\n\nDepartment of Radiology,\n\nUC, San Francisco, CA 94143-0732\n\nUC, San Francisco, CA 94143-0628\n\nmaneesh@phy.ucsf.edu\n\nsri@radiology.ucsf.edu\n\nAbstract\n\nExisting source location and recovery algorithms used in magnetoen-\ncephalographic imaging generally assume that the source activity at dif-\nferent brain locations is independent or that the correlation structure is\nknown. However, electrophysiological recordings of local \ufb01eld poten-\ntials show strong correlations in aggregate activity over signi\ufb01cant dis-\ntances. Indeed, it seems very likely that stimulus-evoked activity would\nfollow strongly correlated time-courses in different brain areas. Here,\nwe present, and validate through simulations, a new approach to source\nreconstruction in which the correlation between sources is modelled and\nestimated explicitly by variational Bayesian methods, facilitating accu-\nrate recovery of source locations and the time-courses of their activation.\n\n1 Introduction\n\nThe brain\u2019s neuronal activity generates weak magnetic \ufb01elds (10 fT \u2013 1 pT). Magne-\ntoencephalography (MEG) is a non-invasive technique for detecting and characterising\nthese magnetic \ufb01elds. 
MEG sensors use super-conducting quantum interference devices\n(SQUIDs) to measure the changes in the brain\u2019s magnetic \ufb01eld on a millisecond time-scale.\nWhen combined with electromagnetic source localisation, magnetic source imaging (MSI)\nbecomes a functional brain imaging method that allows us to characterise macroscopic\ndynamic neural information processing.\n\nIn the past decade, the development of MSI source reconstruction algorithms has pro-\ngressed signi\ufb01cantly [1]. Currently, there are two general approaches to estimating MEG\nsources: parametric methods and imaging methods [2]. With parametric methods, a few\ncurrent dipoles of unknown location and moment are assumed to represent the sources\nof activity in the brain.\nIn this case, solving the inverse problem requires a non-linear\noptimisation to estimate the position and magnitude of an unknown number of dipoles.\nWith imaging methods, a grid of voxels is used to represent the entire brain volume. The\ninverse problem is then to recover whole brain activation images, represented by the time-\ndependent moment and magnitude of an elementary dipole source located at each voxel.\nThis formulation leads to a linear forward model. However, the ill-posed nature of the\nproblem leads to non-unique solutions which must be distinguished by prior information,\nusually in the form of assumptions regarding the correlation between the sources.\n\n\fIn this paper, we formulate a general spatiotemporal imaging model for MEG data. Our for-\nmulation makes no assumptions about the correlation of the sources; instead, we estimate\nthe extent of the correlation by an evidence optimisation procedure within a variational\nBayesian framework [3].\n\n1.1 MEG imaging\n\nMany standard MEG devices measure the radial gradient of the magnetic \ufb01eld at a number,\ndb, of sensor locations (typically arranged on a segment of a sphere). 
Measurements made\nat a single time can be formed into a db-dimensional vector b; an experiment yields a series\nof N such samples, giving a db \u00d7 N data matrix B.\nThis measured \ufb01eld-gradient is affected by a number of different processes. The compo-\nnent we seek to isolate is stimulus- or event-related, and is presumably contributed to by\nsigni\ufb01cant activity at a relatively small number of locations in the brain. This signal is\ncorrupted by thermal noise at the sensors, and by widespread spontaneous, unrelated brain\nactivity. For our purposes, these are both sources of noise, whose distributions are approx-\nimately normal [2] (in the case of the unrelated brain activity, the normality results from\nthe fact that any one sensor sees the sum of effects from a large number of locations). The\ncovariance matrix of this noise, \u03a8, can be measured approximately by accumulating sensor\nreadings in a quiescent state; simulations suggest that the techniques presented here are\nreasonably tolerant to mis-estimation of the noise level. Measurements are also affected\nby other forms of interference associated with experimental electronics or bio-magnetic\nactivity external to the brain. We will not here treat such interference explicitly, instead\nassuming that major sources have been removed by preprocessing the measured data, e.g.,\nby using blind source separation methods [4].\n\nTo represent the signi\ufb01cant brain sources, we divide the volume of the brain (or a subsec-\ntion of that volume that contains the sources) into a number of voxels and then calculate the\nlead-\ufb01eld matrix L that linearly relates the strength of a current dipole in each orientation\nat each voxel, to the sensor measurements. For simplicity, we assume a spherical volume\nconductor model, which permits analytical calculation of L independent of the tissue con-\nductivity [2], and which is reasonably accurate for most brain regions [1]. 
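The noise covariance Ψ discussed above can be estimated as a sample covariance of quiescent-state recordings. A minimal numpy sketch of this step (the sizes and variable names here are our own illustrations, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
db = 74          # sensors, e.g. two arrays of 37 gradiometer coils
N_quiet = 5000   # samples accumulated in a quiescent state

# Simulated quiescent-state recordings (zero-mean sensor noise), db x N_quiet.
B_quiet = rng.standard_normal((db, N_quiet))

# Sample estimate of the noise covariance Psi = <nn'>.
Psi = B_quiet @ B_quiet.T / N_quiet
```

With many more samples than sensors, this estimate is full rank and can be inverted safely in the later computations.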
(Non-uniform volume conduction properties of the brain and surrounding tissues can be explicitly accounted for by elaborating the lead-field matrix calculation, but they do not otherwise affect the analysis presented below.) In the simple model, only the two tangential components of the current dipole, orthogonal to the radial direction, contribute to b, and so the source vector s has dimension d_s equal to twice the number of voxels d_v. The source matrix S associated with the N field measurements has dimensions d_s × N. Thus the probabilistic forward model for MEG measurements is given by

$$b \sim \mathcal{N}(Ls, \Psi). \qquad (1)$$

Without considerable prior knowledge of the pattern of brain activation, the number of possible degrees of freedom in the source vector, d_s, will be far greater than the number of measurements, d_b, and so there is no unique maximum-likelihood estimate of s. Instead, attempts at source recovery depend, either implicitly or explicitly, on the application of prior knowledge about the source distribution. Most existing methods constrain the source locations and/or activities in various ways: based on anatomical or fMRI data; by maximum-entropy, minimum L1-norm, weighted minimum L2-norm or maximum-smoothness priors; or to achieve optimal resolution [1]. Most of these constraints can be formulated as priors for maximum a posteriori estimation of the sources (although the original statements do not always make such priors explicit). In addition, some studies have also included temporal constraints on sources, such as smoothness or phase-locking between sources [5].

Consider, for example, linear estimates of s given by ŝ = F′b. 
The optimal estimate (in a least-squares sense) is given by the Wiener filter

$$F = \langle bb' \rangle^{-1} \langle bs' \rangle = \langle bb' \rangle^{-1} \langle (Ls + n)s' \rangle = \langle bb' \rangle^{-1} L \langle ss' \rangle, \qquad (2)$$

(where n ∼ N(0, Ψ) is a noise vector uncorrelated with s), and therefore requires knowledge of the source correlation matrix ⟨ss′⟩.

One approach to source reconstruction, the minimum-variance adaptive beamformer (or "beamformer" for short), can be viewed as an approximation to the Wiener filter in which the correlation matrix of sensor measurements ⟨bb′⟩ is estimated by the observed correlation BB′/N, and the sources at each location are taken to be uncorrelated [6]. If the orientation of each source dipole is known or estimated independently (so that s contains only one magnitude at each location), then the source correlation matrix ⟨ss′⟩ reduces to a diagonal matrix of gain factors. For the beamformer, these factors are chosen to give a unit "loop gain" for each source, i.e., such that diag[F′L] = 1. It can be shown that the beamformer yields accurate results only when the number of active sources is small [7]. Thus, this approach makes two assumptions about the sources: an explicit one of decorrelation and an implicit one of sparse activation. Other techniques tend to make similar assumptions. A related algorithm using Multiple Signal Classification (MUSIC) also assumes sparsity and linear independence in the time-series of the sources [1]. 
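The Wiener filter of (2) can be formed directly from the model covariances. A minimal numpy sketch, with a known source correlation and toy dimensions of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
db, ds, N = 6, 10, 2000

L = rng.standard_normal((db, ds))   # toy lead-field matrix
Css = np.eye(ds)                    # source correlation <ss'>, here assumed known
Psi = 0.1 * np.eye(db)              # sensor-noise covariance

# Model covariance of the measurements: <bb'> = L <ss'> L' + Psi.
Cbb = L @ Css @ L.T + Psi

# Wiener filter (2): F = <bb'>^{-1} L <ss'>; sources are estimated as s_hat = F' b.
F = np.linalg.solve(Cbb, L @ Css)

# Apply the filter to data drawn from the forward model b = Ls + n.
S = rng.standard_normal((ds, N))
B = L @ S + np.linalg.cholesky(Psi) @ rng.standard_normal((db, N))
S_hat = F.T @ B
```

Because d_s > d_b, the estimate is a shrunken projection of the true sources; its mean squared error is nonetheless lower than that of any other linear estimator under these covariances.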
Minimum-norm methods can also be viewed as making specific assumptions about the source correlation matrix [8].

In sharp contrast to the assumed independence or known correlation of brain activity in these algorithms, electrophysiological studies have shown pronounced and variable correlations in local potentials measured in different (sometimes widely separated) regions of the brain, and indeed have argued that these correlations reflect relevant aspects of brain processing [9, 10]. This simple observation has profound consequences for most current MEG imaging algorithms. Not only are they unable to access this source of temporal information about brain function (despite the temporal fidelity of the technique in other respects), but they may also provide inaccurate source localisations or reconstructions by dint of their incorrect assumptions regarding source correlation.

In this paper, we present a novel approach to source reconstruction. Our technique shares with many of the methods described above the assumption of sparsity in source activation. However, it dispenses entirely with the assumption of source independence. Instead, we estimate the source correlation matrix from the data by hyperparameter optimisation.

2 Model

To parameterise the source correlation matrix in a manner tractable for learning, we assume that the source activities s are formed by a linear combination, with weight matrix W, of d_z independent unit-variance normal pre-sources z,

$$s = Wz; \qquad z \sim \mathcal{N}(0, I), \qquad (3)$$

so that learning the correlation matrix ⟨ss′⟩ = WW′ becomes equivalent to estimation of the weights W.¹ The sources are not really expected to have the Gaussian amplitude distribution that this construction implies. Instead, the assumption forms a convenient fiction, making it easy to estimate the source correlation matrix. 
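The construction of (3) is easy to verify numerically: with z drawn i.i.d. from N(0, I), the empirical source correlation approaches WW′. The dimensions below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(2)
ds, dz, N = 8, 3, 200_000

W = rng.standard_normal((ds, dz))   # weight matrix: a rank-dz correlation model
Z = rng.standard_normal((dz, N))    # independent unit-variance pre-sources
S = W @ Z                           # correlated sources

C_emp = S @ S.T / N                 # empirical <ss'>
C_model = W @ W.T                   # model correlation matrix, rank dz
```

For large N the two matrices agree entry by entry, confirming that estimating W is equivalent to estimating the (rank-limited) source correlation.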
We show in simulations below that estimation in this framework can indeed yield accurate estimates of the correlation matrix even for non-normally distributed source activity. Once the correlation matrix has been established, estimation using the Wiener filter of (2) provides the best linear estimate of source activity (and would be the exact maximum a posteriori estimate if the sources really were normally distributed).

¹This formulation is similar to that used in weighted minimum-norm methods, although there the weights W are fixed, implying a pre-determined source correlation matrix.

The model of (3) parameterises the source correlation in a general way, subject to a maximum rank of d_z. This rank constraint does not by itself favour sparsity in the source distribution, and d_z could easily be chosen to be equal to d_s. Instead, the sparsity emerges from a hyperparameter optimisation similar to the automatic relevance determination (ARD) of MacKay and Neal [11] (see also [12, 13]). Equation (3) defines a prior on s with parameters W. We now add a hyperprior on W under which the expected power of both tangential components at the vth voxel is determined by a hyperparameter α_v. For notational convenience we collect the α_v into a vector α and introduce a d_s × d_v indicator matrix J, with J_{iv} = 1 if the ith source is located in the vth voxel and 0 otherwise. Thus, each column of J contains exactly two unit entries, one for each tangential component of the corresponding voxel dipole. Finally, we introduce a d_s × d_s diagonal matrix A with A_{ii} = (Jα)_i. Then

$$W_{ij} \sim \mathcal{N}(0, A_{ii}^{-1}). \qquad (4)$$

Thus each α_v sets a prior distribution on the lengths of the two rows of the weight matrix corresponding to source components at the vth voxel. 
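The bookkeeping of the hyperprior (4) is straightforward. In this sketch (toy sizes of our own, not from the paper), one voxel is given a very large α_v, so both of its rows of W are driven towards zero:

```python
import numpy as np

rng = np.random.default_rng(3)
dv, dz = 4, 3
ds = 2 * dv                              # two tangential components per voxel

alpha = np.array([1.0, 1.0, 1e12, 1.0])  # a huge alpha_v switches a voxel off

# Indicator matrix J (ds x dv): J[i, v] = 1 if component i lives in voxel v,
# so each column has exactly two unit entries.
J = np.zeros((ds, dv))
for v in range(dv):
    J[2 * v, v] = J[2 * v + 1, v] = 1.0

A_diag = J @ alpha                       # A_ii = (J alpha)_i
# W_ij ~ N(0, A_ii^{-1}): both rows sharing a voxel share a prior scale.
W = rng.standard_normal((ds, dz)) / np.sqrt(A_diag)[:, None]
```

The two rows belonging to the third voxel come out with negligible magnitude, which is exactly the mechanism by which diverging hyperparameters prune sources.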
As in the original ARD models, optimisation of the marginal likelihood or evidence, P(B | α, L, Ψ), with respect to the α_v results in a number of the hyperparameters diverging to infinity. This imposes a zero-centred delta-function prior on the corresponding rows of W, in turn forcing the corresponding source power to vanish. It is this optimisation, then, which introduces the sparsity.

Before passing to the optimisation scheme, we summarise the model introduced above by the log joint probability it assigns to observations, pre-sources and weights (here, and below, we drop the explicit conditioning on the fixed parameters L and Ψ):

$$\log P(B, Z, W \mid \alpha) = -\tfrac{1}{2}\left(N \log|2\pi\Psi| + \mathrm{Tr}\!\left[(B - LWZ)'\Psi^{-1}(B - LWZ)\right]\right) - \tfrac{1}{2}\left(N d_z \log(2\pi) + \mathrm{Tr}[Z'Z]\right) - \tfrac{1}{2}\left(d_z \log|2\pi A^{-1}| + \mathrm{Tr}[W'AW]\right) \qquad (5)$$

3 Learning

Direct optimisation of the log marginal likelihood $\log \int dZ\, dW\, P(B, Z, W \mid \alpha)$ proves to be intractable. Instead, we adopt the "variational Bayes" (VB) framework of [3, 12]. VB is a form of the Expectation-Maximisation (EM) algorithm for maximum-likelihood estimation. Given unknown distributions Q_z(Z) and Q_w(W), Jensen's inequality provides a bound on the log-likelihood

$$\log P(B \mid \alpha) = \log \int dZ\, dW\, Q_z(Z) Q_w(W)\, \frac{P(B, Z, W \mid \alpha)}{Q_z(Z) Q_w(W)} \geq \left\langle \log P(B, Z, W \mid \alpha) \right\rangle_{Q_z(Z) Q_w(W)} + H(Q_z) + H(Q_w)$$

(where H(·) represents the Shannon entropy). This bound can then be optimised by alternate maximisations with respect to Q_z, Q_w and the hyperparameters α. If, in place of the factored distribution Q_z(Z)Q_w(W), we had used a joint Q(Z, W), this procedure would be guaranteed to find a local maximum in the marginal likelihood (by analogy to EM). 
As it is, the optimisation is only approximate, but has been found to yield good maxima in a factor analysis model very similar to the one we consider here [12]. In our experiments, a slight variant of the standard VB procedure, described below, improved further on the accuracy of the solutions found.

Given estimates Q_z^n, Q_w^n and α^n at the nth step, the (n+1)th iteration is given by:

$$Q_z^{n+1}(Z) \propto \exp \langle \log P(B, Z, W \mid \alpha^n) \rangle_{Q_w^n} = \mathcal{N}\!\left(\Sigma_z^{n+1} \langle W \rangle_{Q_w^n}' L' \Psi^{-1} B,\; \Sigma_z^{n+1}\right) \quad \text{with } \Sigma_z^{n+1} = \left\langle W' L' \Psi^{-1} L W + I \right\rangle_{Q_w^n}^{-1};$$

$$Q_w^{n+1}(W) \propto \exp \langle \log P(B, Z, W \mid \alpha^n) \rangle_{Q_z^{n+1}} = \mathcal{N}\!\left(\Sigma_w^{n+1} \mathrm{vec}\!\left(L' \Psi^{-1} B \langle Z' \rangle_{Q_z^{n+1}}\right),\; \Sigma_w^{n+1}\right) \quad \text{with } \Sigma_w^{n+1} = \left(\langle ZZ' \rangle_{Q_z^{n+1}} \otimes L' \Psi^{-1} L + I \otimes A^n\right)^{-1};$$

$$\text{and} \quad \alpha_v^{n+1} = d_z \left(J' \mathrm{diag}\!\left[\langle W \rangle_{Q_w^{n+1}} \langle W \rangle_{Q_w^{n+1}}'\right]\right)_v^{-1} \left((J'1)_v - \alpha_v^n \left(J' \mathrm{diag}\!\left[\Sigma_w^{n+1}\right]\right)_v\right),$$

where the normal distribution on Z implies a normal distribution on each column z; the distribution on W is normal on vec(W)²; 1 is a vector of ones; and the diag[·] operator returns the main diagonal of its argument as a vector.

Our experience is that better results can be obtained if the posterior expectation of ZZ′ in the Q_w update is replaced by its value under the prior on Z, NI. This variant appears to constrain the factored posterior to remain closer to the true joint distribution. It has the additional benefit of simplifying both the notational and computational complexities of the updates (for the latter, it reduces the complexity of the inversion needed to calculate Σ_w from (d_s d_z)³ to d_s²). 
We can then rewrite the updates into a more compact form by using this assumption, and by evaluating the expectations, to obtain

$$\Sigma_z^{n+1} = \left(W^{n\prime} L' \Psi^{-1} L W^n + \mathrm{Tr}\!\left[L' \Psi^{-1} L\, \Sigma_w^n\right] I + I\right)^{-1} \qquad (6a)$$

$$\Sigma_w^{n+1} = \left(N L' \Psi^{-1} L + A^n\right)^{-1} = (A^n)^{-1} - (A^n)^{-1} L' \left(N^{-1}\Psi + L (A^n)^{-1} L'\right)^{-1} L (A^n)^{-1} \qquad (6b)$$

$$W^{n+1} = \Sigma_w^{n+1} L' \Psi^{-1} B B' \Psi^{-1} L\, W^n\, \Sigma_z^{n+1} \qquad (6c)$$

$$\alpha_v^{n+1} = d_z \left(J' \mathrm{diag}\!\left[W^{n+1} W^{n+1\prime}\right]\right)_v^{-1} \left((J'1)_v - \alpha_v^n \left(J' \mathrm{diag}\!\left[\Sigma_w^{n+1}\right]\right)_v\right) \qquad (6d)$$

where $W^n = \langle W \rangle_{Q_w^n}$. The use of the matrix inversion lemma in (6b) exploits the diagonality of A to reduce the computational complexity of the algorithm with respect to d_s.

The formulae of (6) are easily implemented and recover an estimate of W, and thus the source correlation matrix, by iteration. The source activities can then be estimated by use of the Wiener filter (2). The updates of (6) also demonstrate an important point concerning the validity of our Gaussian model. Note that the measured data enter into the estimation procedure only through their correlation BB′. In other words, the hyperparameter optimisation stage of our algorithm is only being used to model the data correlation, not their amplitudes. As a result, the effects of incorrectly assuming a Gaussian source amplitude distribution can be expected to remain relatively benign.

4 Simulations

Simulation studies provide an important tool for evaluating source recovery algorithms, in that they provide "sensor" data sets for which the correct answer (i.e. the true locations and time-courses of the sources) is known. We report here the results of simulations carried out using parameters similar to those that might be encountered in realistic recordings.

4.1 Methods

We simulated 100 1-s-long epochs of evoked response data. 
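The compact updates (6a)-(6d) amount to a few lines of numpy. The sketch below is our own reading of those equations, not the authors' code; the initialisation, iteration count and numerical guards are our additions:

```python
import numpy as np

def estimate_correlation(B, L, Psi, J, dz, n_iter=100, seed=0):
    """Iterate the compact updates (6a)-(6d) to estimate the weight matrix W
    (and hence the source correlation <ss'> = WW') and the ARD hyperparameters
    alpha. A sketch under stated assumptions, not a validated implementation."""
    rng = np.random.default_rng(seed)
    db, N = B.shape
    ds = L.shape[1]
    Pinv = np.linalg.inv(Psi)
    LtPL = L.T @ Pinv @ L                    # L' Psi^{-1} L
    LtPB = L.T @ Pinv @ B                    # L' Psi^{-1} B
    alpha = np.ones(J.shape[1])
    W = 1e-2 * rng.standard_normal((ds, dz))
    for _ in range(n_iter):
        A = J @ alpha
        # (6b): Sigma_w = (N L'Psi^{-1}L + A)^{-1}
        Sw = np.linalg.inv(N * LtPL + np.diag(A))
        # (6a): Sigma_z = (W'L'Psi^{-1}LW + Tr[L'Psi^{-1}L Sigma_w] I + I)^{-1}
        Sz = np.linalg.inv(W.T @ LtPL @ W
                           + (np.trace(LtPL @ Sw) + 1.0) * np.eye(dz))
        # (6c): W = Sigma_w L'Psi^{-1}BB'Psi^{-1}L W Sigma_z
        W = Sw @ (LtPB @ (LtPB.T @ W)) @ Sz
        # (6d): alpha_v = dz ((J'1)_v - alpha_v (J'diag[Sw])_v) / (J'diag[WW'])_v
        num = dz * (J.T @ np.ones(ds) - alpha * (J.T @ np.diag(Sw)))
        den = J.T @ (W * W).sum(axis=1)
        alpha = np.clip(num / np.maximum(den, 1e-300), 1e-2, 1e16)
    return W, alpha
```

Note that the data enter only through L′Ψ⁻¹B (and hence BB′), mirroring the observation in the text; voxels whose α_v grows without bound are pruned, and the surviving WW′ feeds the Wiener filter (2).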
The sensor configuration was taken from a real experiment: two sensor arrays, with 37 gradiometer coils each, were located on either side of the head (see figure 1).

²For a discussion of the vec operator and the Kronecker product ⊗, see, e.g., [14].

Candidate source dipoles were located on a grid with 1 cm spacing within a hemispherical brain volume with a radius of 8 cm, to give a total of 956 possible source locations. Significant (above background) evoked activity was simulated at 5 of these locations (see figure 1a), with random dipole orientations. The evoked waveforms were similar in form to the evoked responses seen in many areas of the brain (see figure 2a), and were strongly correlated between the five sites (figure 3a). The two most lateral sites (one on each side) expressed bilateral primary sensory activation, and had identical time-courses with the shortest latency. Another lateral site, on the left side, had activity with the same waveform, but delayed by 50 ms. Two medial sites had slower and more delayed activation profiles. The dipole orientation at each site was chosen randomly in the plane parallel to the sensor tangent. Note that the amplitude distribution of these sources is strongly non-Gaussian; we will see, however, that they can be recovered successfully by the present technique despite its assumption of normality.

The simulated sensor recordings were corrupted by noise from two sources, both with Gaussian distributions. Background activity in the brain was simulated with equal power at every point on the grid of candidate sources, with a root-mean-square (RMS) amplitude 1.5 decades below that of the 5 significant sources. Although this background activity was uncorrelated between brain locations, it resulted in correlated disturbances at the magnetic sensors. 
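The point just made, that background sources uncorrelated across the brain still produce correlated disturbances at the sensors, is easy to demonstrate with a toy lead field (all sizes below are hypothetical, much smaller than the simulation's):

```python
import numpy as np

rng = np.random.default_rng(4)
db, ds, N = 10, 50, 1000

L = rng.standard_normal((db, ds)) / np.sqrt(ds)  # toy lead field

S_bg = rng.standard_normal((ds, N))  # background: uncorrelated across locations
B_bg = L @ S_bg                      # the same sources reach every sensor

C = np.corrcoef(B_bg)                # sensor-space correlation coefficients
off_diag = np.abs(C[~np.eye(db, dtype=bool)])
```

Because different sensors weight the same underlying sources through overlapping lead-field rows, the off-diagonal sensor correlations are clearly non-zero even though ⟨s_bg s_bg′⟩ is diagonal.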
Thermal noise in the sensors was uncorrelated, and had a similar magnitude (at the sensors) to that of the background noise.

The novel Bayesian estimation technique was applied to the raw simulated sensor trace rather than to epoch-averaged data. While in this simulation the evoked activity was identical in each trial, determining the correlation matrix from unaveraged data should, in the more general case, make single-trial reconstructions more accurate. Once reconstructed, the source time-courses were averaged, and are shown in figure 2. The number of pre-sources d_z, a free parameter in the algorithm, was set to 10. Sources associated with inverse-variance hyperparameters α_v above a threshold (here 10^15) were taken to be inactive.

For comparison, we also reconstructed sources using the vector minimum-variance adaptive beamformer approach [15]. Note that this technique, along with many other existing reconstruction methods, assumes that sources at different locations are uncorrelated, and so it should not be expected to perform well under the conditions of our simulation.

4.2 Results

Figure 1 shows the source locations and powers reconstructed by the novel Bayesian approach developed here (b) and by the beamformer (c). The Bayesian approach identified the correct number of sources, at the correct locations and with approximately correct relative powers. By contrast, the beamformer approach, which assumes uncorrelated sources, entirely failed to locate the sources of activity.

Figure 2b shows the average evoked-response reconstruction at each of the identified source locations (with the simulated waveforms shown in panel a). The general time-course of the activities has clearly been well characterised. The time-courses estimated by the vector beamformer are shown in figure 2c. 
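Putting the pieces together: once W and α have been fitted, voxels are pruned by the α threshold and the remaining activity is reconstructed with the Wiener filter of (2). A schematic sketch (the "fitted" quantities here are random stand-ins, not outputs of the real algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)
db, dv, dz, N = 6, 5, 3, 100
ds = 2 * dv

# Stand-ins for fitted quantities (hypothetical values for illustration).
alpha = np.array([2.0, 1e16, 0.5, 1e16, 3.0])   # per-voxel hyperparameters
W = rng.standard_normal((ds, dz))               # fitted weight matrix
L = rng.standard_normal((db, ds))               # lead field
Psi = 0.1 * np.eye(db)                          # noise covariance
B = rng.standard_normal((db, N))                # sensor data

# Voxels whose alpha_v exceeds the threshold (10^15 in the text) are inactive.
active = np.where(alpha < 1e15)[0]

# Wiener reconstruction (2) using the estimated correlation <ss'> = WW'.
Css = W @ W.T
F = np.linalg.solve(L @ Css @ L.T + Psi, L @ Css)
S_hat = F.T @ B
```

In the real algorithm the pruning is automatic, since rows of W belonging to high-α voxels have already been driven to zero, so premultiplication by WW′ in the filter suppresses them.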
As beamformer localisation proved to be unreliable, the time-courses shown are the reconstructions at the positions of the correct (simulated) sources. Nonetheless, the strong correlations in the sources have corrupted the reconstructions. Note that the only difference between the time-courses shown in figure 2b and c is premultiplication by the estimated source correlation matrix in b.

Finally, figure 3 shows the correlation coefficient matrices for the dipole amplitude time-courses of the active sources shown in figure 2. We see that the Bayesian approach finds a reasonable approximation to the correct correlation structure. Again, however, the beamformer is unable to accurately characterise the correlation matrix.

Figure 1: Reconstructed source power. Each dot represents a single voxel; the size and shade of the superimposed circles indicate the relative power of the corresponding source. Each column contains two orthogonal projections of the same source distribution: (a) simulated sources, (b) reconstruction by evidence optimisation, (c) beamformer reconstruction (powers have been compressed to make smaller sources more visible).

Figure 2: Source waveforms at active locations. Sources are numbered from left to right in the brain. The two traces for each location show the dipole components in two orthogonal directions. (a) simulated waveforms; (b) waveforms reconstructed by our novel algorithm; (c) waveforms reconstructed by beamforming (at the simulated locations). (All panels share common axes: time within epoch, 0-1000 ms, against source number 1-5.)

Figure 3: Source correlation coefficient matrices. Correlations were computed between epoch-averaged dipole amplitude time-courses at each location. 
The size of each square indicates the magnitude of the corresponding coefficient (the maximum value being 1), with white squares positive and black squares negative. (a) simulated sources; (b) sources reconstructed by our novel algorithm; (c) sources reconstructed by beamforming.

5 Conclusions

We have demonstrated a novel evidence-optimisation approach to the location and reconstruction of dipole sources contributing to MEG measurements. Unlike existing methods, this new technique does not assume a correlation structure for the sources, instead estimating it from the data. As such, this approach holds great promise for high-fidelity imaging of correlated magnetic activity in the brain.

Acknowledgements

We thank Dr. Sekihara for useful discussions. This work is funded by grants from the Whitaker Foundation and from NIH (1R01004855-01A1).

References

[1] S. Baillet, J. C. Mosher, and R. M. Leahy. IEEE Signal Processing Magazine, 18(6):14-30, 2001.
[2] M. Hämäläinen, R. Hari, R. Ilmoniemi, J. Knuutila, and O. V. Lounasmaa. Rev. Mod. Phys., 65:413-97, 1993.
[3] H. Attias. In S. A. Solla, T. K. Leen, and K.-R. Müller, eds., Adv. Neural Info. Processing Sys., vol. 12. MIT Press, 2000.
[4] A. C. Tang, B. A. Pearlmutter, N. A. Malaszenko, D. B. Phung, and B. C. Reeb. Neural Comput., 14(8):1827-58, 2002.
[5] O. David, L. Garnero, D. Cosmelli, and F. J. Varela. IEEE Trans. Biomed. Eng., 49(9):975-87, 2002.
[6] K. Sekihara and B. Scholz. IEEE Trans. Biomed. Eng., 43(3):281-91, 1996.
[7] K. Sekihara, S. S. Nagarajan, D. Poeppel, and A. Marantz. IEEE Trans. Biomed. Eng., 49(12):1234-46, 2002.
[8] C. Phillips, M. D. Rugg, and K. J. Friston. Neuroimage, 16(3):678-95, 2002.
[9] E. Rodriguez, N. George, J. P. Lachaux, J. Martinerie, B. Renault, and F. J. Varela. Nature, 397(6718):430-3, 1999.
[10] C. Bernasconi, A. von Stein, and C. 
Chiang. Neuroreport, 11(4):689-92, 2000.
[11] D. J. C. MacKay. In ASHRAE Transactions, V.100, Pt.2, pp. 1053-1062. ASHRAE, 1994.
[12] Z. Ghahramani and M. Beal. In S. A. Solla, T. K. Leen, and K.-R. Müller, eds., Adv. Neural Info. Processing Sys., vol. 12. MIT Press, 2000.
[13] M. Sahani and J. F. Linden. In S. Becker, S. Thrun, and K. Obermayer, eds., Adv. Neural Info. Processing Sys., vol. 15. MIT Press, 2003.
[14] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. CUP, 1991.
[15] K. Sekihara, S. S. Nagarajan, D. Poeppel, A. Marantz, and Y. Miyashita. IEEE Trans. Biomed. Eng., 48(7):760-71, 2001.
", "award": [], "sourceid": 2460, "authors": [{"given_name": "Maneesh", "family_name": "Sahani", "institution": null}, {"given_name": "Srikantan", "family_name": "Nagarajan", "institution": null}]}