{"title": "Classifying Single Trial EEG: Towards Brain Computer Interfacing", "book": "Advances in Neural Information Processing Systems", "page_first": 157, "page_last": 164, "abstract": null, "full_text": "Classifying Single Trial EEG:

Towards Brain Computer Interfacing

Benjamin Blankertz1*, Gabriel Curio2 and Klaus-Robert Müller1,3

1Fraunhofer-FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
2Neurophysics Group, Dept. of Neurology, Klinikum Benjamin Franklin,
Freie Universität Berlin, Hindenburgdamm 30, 12203 Berlin, Germany
3University of Potsdam, Am Neuen Palais 10, 14469 Potsdam, Germany

Abstract

Driven by the progress in the field of single-trial analysis of EEG, there is a growing interest in brain computer interfaces (BCIs), i.e., systems that enable human subjects to control a computer only by means of their brain signals. In a pseudo-online simulation our BCI detects upcoming finger movements in a natural keyboard typing condition and predicts their laterality. This can be done on average 100–230 ms before the respective key is actually pressed, i.e., long before the onset of EMG.
Our approach is appealing for its short response time and high classification accuracy (>96%) in a binary decision where no human training is involved. We compare discriminative classifiers like Support Vector Machines (SVMs) and different variants of Fisher Discriminant that possess favorable regularization properties for dealing with high noise cases (inter-trial variability).

1 Introduction

The online analysis of single-trial electroencephalogram (EEG) measurements is a challenge for signal processing and machine learning. Once the high inter-trial variability (see Figure 1) of this complex multivariate signal can be reliably processed, the next logical step is to make use of the brain activities for real-time control of, e.g., a computer. In this work we study a pseudo-online evaluation of single-trial EEGs from voluntary self-paced finger movements and exploit the laterality of the left/right hand signal as one bit of information for later control. Features of our BCI approach are (a) no pre-selection for artifact trials, (b) state-of-the-art learning machines with inbuilt feature selection mechanisms (i.e., sparse Fisher Discriminant Analysis and SVMs) that lead to >96% classification accuracies, (c) non-trained users and (d) short response times.
Although our setup was not tuned for speed, the a posteriori determined information transmission rate is 23 bits/min, which makes our approach competitive with existing ones (e.g., [1, 2, 3, 4, 5, 6, 7]) that will be discussed in section 2.

*To whom correspondence should be addressed.

Aims and physiological concept of BCI devices. Two key issues to start with when conceiving a BCI are (1) the definition of a behavioral context in which a subject's brain signals will be monitored and eventually used as a surrogate for a bodily, e.g., manual, input of computer commands, and (2) the choice of brain signals which are optimally capable of conveying the subject's intention to the computer.
Concerning the behavioral context, typewriting on a computer keyboard is a highly overlearned motor competence. Accordingly, a natural first choice is a BCI situation which induces the subject to arrive at a particular decision that is coupled to a predefined (learned) motor output. This approach is well known as a two alternative forced choice reaction task (2AFC), where one out of two stimuli (visual, auditory or somatosensory) has to be detected, categorised and responded to by issuing one out of two alternative motor commands, e.g., pushing a button with either the left or the right hand. A task variant without explicit sensory input is the voluntary, endogenous generation of a 'go' command involving the deliberate choice between the two possible motor outputs at a self-paced rate. Here, we chose this latter approach so as to approximate the natural computer input situation of self-paced typewriting.
Concerning the selection of brain signals related to such endogenous motor commands, we focussed here on one variant of slow brain potentials which are specifically related to the preparation and execution of a motor command, rather than reflecting merely unspecific modulations of vigilance or attention.
Using multi-channel EEG mapping it has been repeatedly demonstrated that several highly localised brain areas contribute to cerebral motor command processes. Specifically, a negative 'Bereitschaftspotential' (BP) precedes the voluntary initiation of the movement. A differential scalp potential distribution can be reliably demonstrated in a majority of experimental subjects, with a larger BP at the lateral scalp positions (C3, C4) over the left or right hemispherical primary motor cortex, respectively, consistently correlating with the performing (right or left) hand [8, 9].
Because one potential BCI application is with paralysed patients, one might consider mimicking the 'no-motor-output' of these individuals by having healthy experimental subjects intend a movement but withhold its execution (motor imagery). While it is true that brain potentials comparable to the BP are associated with an imagination of hand movements, which indeed is consistent with the assumption that the primary motor cortex is active during motor imagery, actual motor performance significantly increased these potentials [10].
We therefore chose to instruct the experimental subjects to actually perform the typewriting finger movements, rather than merely imagine their performance, for two reasons: first, this will increase the BP signal strength, optimising the signal-to-noise ratio in BCI-related single trial analyses; and second, we propose that it is important for the subject's task efficiency not to be engaged in an unnatural condition where, in addition to the preparation of a motor command, a second task, i.e., to 'veto' the very same movement, has to be executed.
In the following section we will briefly review part of the impressive earlier research towards BCI devices (e.g., [1, 2, 3, 4, 5, 6, 7]) before experimental set-up and classification results are discussed in sections 3 and 4 respectively. Finally a brief conclusion is given.

2 A brief outline of BCI research

Birbaumer et al. investigate slow cortical potentials (SCP) and how they can be self-regulated in a feedback scenario. In their thought translation device [2] patients learn to produce cortical negativity or positivity at a central scalp location at will, which is fed back to the user. After some training patients are able to transmit binary decisions at a 4 sec periodicity with accuracy levels up to 85% and therewith control a language support program or an internet browser.
Pfurtscheller et al. built a BCI system based on event-related (de-)synchronisation (ERD/ERS, typically of the µ and central β rhythm) for online classification of movement imaginations or preparations into 2–4 classes (e.g., left/right index finger, feet, tongue). Typical preprocessing techniques are adaptive autoregressive parameters, common spatial patterns (after band pass filtering) and band power in subject specific frequency bands. Classification is done by Fisher discriminant analysis, multi-layer neural networks or LVQ variants.
In classification of exogenous movement preparations, rates of 98%, 96% and 75% (for three subjects respectively) are obtained before movement onset¹ in a 3 class task with trials of 8 sec [3]. Only selected, artifact-free trials (less than 40%) were used. A tetraplegic patient controls his hand orthosis using the Graz BCI system.
Wolpaw et al. study EEG-based cursor control [4], translating the power in subject specific frequency bands, or autoregressive parameters, from two spatially filtered scalp locations over sensorimotor cortex into vertical cursor movement. Users initially gain control by various kinds of motor imagery (the setting favours 'movement' vs. 'no movement' in contrast to 'left' vs. 'right'), which they report to use less and less as feedback training continues. In cursor control trials of at least 4 sec duration trained subjects reach accuracies of over 90%. Some subjects also acquired considerable control in a 2-d setup.

3 Acquisition and preprocessing of brain signals

Experimental setup. The subject sat in a normal chair, relaxed arms resting on the table, fingers in the standard typing position at the computer keyboard. The task was to press the corresponding keys with the index and little fingers in a self-chosen order and timing ('self-paced key typing'). The experiment consisted of 3 sessions of 6 minutes each, preceded and followed by a 60 second relaxing phase. All sessions were conducted on the same day with breaks of some minutes in between.
Typing of a total of 516 keystrokes was done at an average speed of 1 key every 2.1 seconds.
Brain activity was measured with 27 Ag/AgCl electrodes at positions of the extended international 10-20 system, 21 mounted over motor and somatosensory cortex, 5 frontal and one occipital, referenced to nasion (sampled at 1000 Hz, band-pass filtered 0.05–200 Hz). Besides EEG we recorded an electromyogram (EMG) of the musculus flexor digitorum bilaterally (10–200 Hz) and a horizontal and vertical electrooculogram (EOG). The timing of the keystrokes was stored in an event channel along with the EEG signal. All data were recorded with a NeuroScan device and converted to Matlab format for further analysis. The signals were downsampled to 100 Hz by picking every 10th sample. In a moderate rejection we sorted out only 3 out of 516 trials due to heavy measurement artifacts, while keeping trials that are contaminated by less serious artifacts or eye blinks. Note that a 0.6% rejection rate is very low in contrast to most other BCI offline studies.
The issue of preprocessing. Preprocessing the data can have a substantial effect on classification in terms of accuracy, effort and suitability of different algorithms. The question to what degree data should be preprocessed prior to classification is a trade-off between the danger of losing information or overfitting and not having enough training samples for the classifier to generalize from high dimensional, noisy data.
We have investigated two options: unprocessed data and preprocessing that was designed to focus on the BP related to finger movement:

(none) take 200 ms of raw data of all relevant channels;

(<5 Hz) filter the signal low pass at 5 Hz, subsample it at 20 Hz and take 150 ms of all relevant channels (see Figure 1);

Speaking of classification at a certain time point, we strictly mean classification based on EEG signals up to that very time point. The following procedure of calculating the features of a single trial according to (<5 Hz) is easily applicable in an online scenario: take the last 128 sample points of each channel (into the past relative to the given time point), apply a windowed (w(n) := 1 − cos(nπ/128)) FFT, keep only the coefficients corresponding to the pass band (bins 2–7, as bin 1 just contains DC information) and transform back. Downsampling to 20 Hz is done by calculating the mean of consecutive 5-tuples of data points. We investigated the alternatives of taking all 27 channels, or only the 21 located over motor and sensorimotor cortex.

¹Precisely: before mean EMG onset time; for some trials this is before, for others after, EMG onset.

[Figure 1: Averaged data and two single trials of right finger movements in channel C3. 3 values (marked by circles) of smoothed signals are taken as features in each channel.]

[Figure 2: Sparse Fisher Discriminant Analysis selected 68 features (shaded) from 405 input dimensions (27 channels × 15 samples [150 ms]) of raw EEG data.]
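The (<5 Hz) feature computation described above can be sketched in NumPy as follows. This is an illustrative reconstruction, not the authors' code: the function name is hypothetical, and since 128 samples are not divisible by 5 the sketch averages only the last 125 smoothed samples, a detail the text leaves open.

```python
import numpy as np

def bp_features(x, n_feat=3):
    """Sketch of the (<5 Hz) feature computation for one EEG channel.

    x: the last 128 samples (at 100 Hz) of one channel, up to the
    classification time point.  Returns the last n_feat values of the
    smoothed, 20 Hz-subsampled signal (cf. the circles in Figure 1).
    """
    n = np.arange(128)
    w = 1 - np.cos(n * np.pi / 128)                # window from the text
    coeffs = np.fft.fft(w * np.asarray(x, float))
    # keep pass-band bins 2-7 (0-based indices 1-6) plus their mirrored
    # negative-frequency counterparts; bin 1 (index 0) is DC and is dropped
    mask = np.zeros(128, dtype=bool)
    mask[1:7] = True
    mask[-6:] = True
    smoothed = np.real(np.fft.ifft(np.where(mask, coeffs, 0)))
    # downsample to 20 Hz: mean over consecutive 5-tuples; 128 is not
    # divisible by 5, so this sketch averages the last 125 samples only
    sub = smoothed[-125:].reshape(-1, 5).mean(axis=1)
    return sub[-n_feat:]                           # last 150 ms -> 3 features
```

With bins 2–7 retained, the pass band covers roughly 0.8–4.7 Hz at a 100 Hz sampling rate, consistent with the 5 Hz low-pass target.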
The 6 frontal and occipital channels are expected not to give strong contributions to the classification task. Hence a comparison shows whether a classifier is disturbed by low information channels or whether it even manages to extract information from them.
Figure 1 depicts two single trial EEG signals at scalp location C3 for right finger movements. These two single trials are very well-shaped and were selected for resembling the grand average over all 241 right finger movements, which is drawn as a thick line. Usually the BP of a single trial is much more obscured by non task-related brain activity and noise. The goal of preprocessing is to reveal task-related components to a degree that they can be detected by a classifier. Figure 1 also shows the feature vectors according to preprocessing (<5 Hz) calculated from the depicted raw single trial signals.

4 From response-aligned to online classification

We investigate some linear classification methods. Given a linear classifier (w, b) in separating hyperplane formulation (w⊤x + b = 0), the estimated label in {1, −1} of an input vector x ∈ ℝ^N is ŷ = sign(w⊤x + b). If no a priori knowledge on the probability distribution of the data is available, a typical objective is to minimize a combination of an empirical risk function and some regularization term that restrains the algorithm from overfitting to the training set {(x_k, y_k) | k = 1, …, K}. Taking a soft margin loss function [11] yields the empirical risk function ∑_{k=1}^{K} max(0, 1 − y_k(w⊤x_k + b)).
In most approaches of this type there is a hyper-parameter that determines the trade-off between risk and regularization, which has to be chosen by model selection on the training set².
Fisher Discriminant (FD) is a well known classification method, in which a projection vector is determined to maximize the distance between the projected means of the two classes while minimizing the variance of the projected data within each class [13]. In the binary decision case FD is equivalent to a least squares regression to (properly scaled) class labels.
Regularized Fisher Discriminant (RFD) can be obtained via a mathematical programming approach [14]:

    min_{w,b,ξ}   1/2 ‖w‖₂² + C/K ‖ξ‖₂²
    subject to    y_k(w⊤x_k + b) = 1 − ξ_k   for k = 1, …, K

²As this would be very time consuming in k-fold crossvalidation, we proceed similarly to [12].

Table 3: Test set error (± std) for classification at 120 ms before keystroke; 'mc' refers to the 21 channels over (sensori)motor cortex, 'all' refers to all 27 channels.

    filter  ch's |  FD         RFD        SFD        SVM        k-NN
    <5 Hz   mc   |  3.7±2.6    3.3±2.2    3.3±2.2    3.2±2.5    21.6±4.9
    <5 Hz   all  |  3.3±2.5    3.1±2.5    3.4±2.7    3.6±2.5    23.1±5.8
    none    mc   |  18.1±4.8   7.0±4.1    6.4±3.4    8.5±4.3    29.6±5.9
    none    all  |  29.3±6.1   7.5±3.8    7.0±3.9    9.8±4.4    32.2±6.8

Here ‖·‖₂ denotes the ℓ2-norm (‖w‖₂² = w⊤w) and C is a model parameter. The constraint y_k(w⊤x_k + b) = 1 − ξ_k ensures that the class means are projected to the corresponding class labels, i.e., 1 and −1. Minimizing the length of w maximizes the margin between the projected class means relative to the intra class variance.
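Exploiting the stated equivalence between binary FD and least squares regression onto properly scaled class labels, RFD can be sketched as a ridge regression. This is an illustrative substitute for the paper's cplex-based program, not its implementation; the function names and the λ = K/(2C) scaling (derived from the 1/2‖w‖² + C/K‖ξ‖² objective) are our assumptions.

```python
import numpy as np

def rfd_train(X, y, C=1.0):
    """Regularized Fisher Discriminant as ridge regression onto labels
    y in {+1, -1}.  Sketch based on the FD / least-squares equivalence;
    lambda = K/(2C) mirrors the 1/2||w||^2 + C/K ||xi||^2 trade-off.
    X: (K, N) feature matrix;  y: (K,) labels.  Returns (w, b)."""
    K, N = X.shape
    Xb = np.hstack([X, np.ones((K, 1))])       # absorb the bias b
    reg = (K / (2.0 * C)) * np.eye(N + 1)
    reg[-1, -1] = 0.0                          # leave the bias unpenalized
    wb = np.linalg.solve(Xb.T @ Xb + reg, Xb.T @ y)
    return wb[:-1], wb[-1]

def rfd_predict(X, w, b):
    """Estimated labels via the separating hyperplane: sign(w^T x + b)."""
    return np.sign(X @ w + b)
```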
This formalization above gives the opportunity to consider some interesting variants, e.g.,
Sparse Fisher Discriminant (SFD) uses the ℓ1-norm (‖w‖₁ = ∑_n |w_n|) in the regularizer, i.e., the goal function is 1/N ‖w‖₁ + C/K ‖ξ‖₂². This choice favours solutions with sparse vectors w, so that this method also yields some feature selection (in input space). When applied to our raw EEG signals SFD selects 68 out of 405 input dimensions that allow for a left vs. right classification with good generalization. The choice coincides in general with what we would expect from neurophysiology, i.e., high loadings for electrodes close to left and right hemisphere motor cortices which increase prior to the keystroke, cf. Figure 2. But here the selection is automatically adapted to subject, electrode placement, etc.
Our implementation of RFD and SFD uses the cplex optimizer [15].
Support Vector Machines (SVMs) are well known for their use with kernels [16, 17]. Here we only consider linear SVMs:

    min_{w,b,ξ}   1/2 ‖w‖₂² + C/K ‖ξ‖₁
    subject to    y_k(w⊤x_k + b) ≥ 1 − ξ_k  and  ξ_k ≥ 0   for k = 1, …, K

This choice of regularization keeps a bound on the Vapnik-Chervonenkis dimension small. In an equivalent formulation the objective is to maximize the margin between the two classes (while minimizing the soft margin loss function)³.
For comparison we also employed a standard classifier of a different type:
k-Nearest-Neighbor (k-NN) maps an input vector to the class to which the majority of the k nearest training samples belong. Those neighbors are determined by the Euclidean distance of the corresponding feature vectors. The value of k chosen by model selection was around 15 for processed and around 25 for unprocessed data.
Classification of response-aligned windows. In the first phase we make full use of the information that we have regarding the timing of the keystrokes.
For each single trial we calculate a feature vector as described above with respect to a fixed timing relative to the key trigger ('response-aligned'). Table 3 reports the mean error on test sets in a 10×10-fold crossvalidation for classifying into 'left' and 'right' at 120 ms prior to keypress. Figure 4 shows the time course of the classification error. For comparison, the result of EMG-based classification is also displayed. It is more or less at chance level up to 120 ms before the keystroke. After that the error rate decreases rapidly. Based upon this observation we chose t = −120 ms for investigating EEG-based classification. From Table 3 we see that FD works well with the preprocessed data, but as dimensionality increases the performance breaks down. k-NN is not successful at all. The reason for the failure is that the variance in the discriminating directions is much smaller than the variance in other directions, so the Euclidean metric is not an appropriate similarity measure for this purpose. All regularized discriminative classifiers attain comparable results. For preprocessed data a very low

³We used the implementation LIBSVM by Chang and Lin.
\n\n.\n\n\n\f60\n\n50\n\n40\n\n30\n\n20\n\n10\n\n]\n\n%\n\n[\n \nr\no\nr\nr\ne\n\n \n\nn\no\n\ni\nt\n\na\nc\ni\nf\ni\ns\ns\na\nc\n\nl\n\n0\n\n\u22121000\n\n\u2212120 ms\ufb01\n\n \n\n12\n\n10\n\n8\n\n6\n\n4\n\n2\n\n\u2212600\n\n\u2212400\n\n\u2212200\n\n0 [ms]\n\n0\n\u2212200\n\n\u2212100\n\n0 [ms]\n\nEMG\nEEG\n\n\u2212800\n\nFigure 4: Comparison of EEG (<5 Hz, mc, SFD) and EMG based classi\ufb01cation with respect to the\nendpoint of the classi\ufb01cation interval. The right panel gives a details view: -230 to 50 ms.\n\nerror rate between 3% and 4% can be reached without a signi\ufb01cant difference between the\ncompeting methods. For the classi\ufb01cation of raw data the error is roughly twice as high.\nThe concept of seeking sparse solution vectors allows SFD to cope very well with the high\ndimensional raw data. Even though the error is twice as high compared to the the minimum\nerror, this result is very interesting, because it does not rely on preprocessing. So the SFD\napproach may be highly useful for online situations, when no precursory experiments are\navailable for tuning the preprocessing.\n\nThe comparison of EEG- and EMG-based classi\ufb01cation in Figure 4 demonstrates the rapid\nresponse capability of our system: 230 ms before the actual keystroke the classi\ufb01cation rate\nexceeds 90%. To assess this result it has to be recalled that movements were performed\nspontaneously. At (cid:0)120 ms, when the EMG derived classi\ufb01er is still close to chance, EEG\nbased classi\ufb01cation becomes already very stable with less than 3.5% errors. 
Interpreting the last result in the sense of a 2AFC gives an information transfer rate of 60/2.1 · B ≈ 22.9 bits/min, where B = log₂N + p log₂p + (1 − p) log₂((1 − p)/(N − 1)) is the number of bits per selection from N = 2 choices with success probability p = 1 − 0.035 (under some uniformity assumptions).
Classification in sliding windows. The second phase is an important step towards online classification of endogenous brain signals. We have to refrain from using event timing information (e.g., of keystrokes) in the test set. Accordingly, classification has to be performed in sliding windows and the classifier does not know in what time relation the given signals stand to the event; perhaps there is no event at all. Technically, classification could be done as before, as the trained classifiers can be applied to the feature vectors calculated from some arbitrary window. But in practice this is very likely to lead to unreliable results, since those classifiers are highly specialized to signals that have a certain time relation to the response. The behavior of the classifier elsewhere is uncertain. The typical way to make classification more robust to time shifted signals is jittered training. In our case we used 4 windows for each trial, ending at −240, −160, −80 and 0 ms relative to the response (i.e., we get four feature vectors from each trial).
Movement detection and pseudo-online classification. Detecting upcoming events is a crucial point in online analysis of brain signals in an unforced condition. To accomplish this, we employ a second classifier that distinguishes movement events from 'rest'. Figures 5 and 6 display the continuous classifier output w⊤x + b (henceforth called graded) for left/right and movement/rest distinction, respectively.
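The transfer-rate formula B = log₂N + p log₂p + (1 − p) log₂((1 − p)/(N − 1)) used above can be checked numerically. With p = 0.965 and one decision per 2.1 s this evaluates to roughly 22 bits/min; the exact quoted figure depends on how p and the trial rate are rounded.

```python
from math import log2

def bits_per_selection(N, p):
    """B = log2(N) + p*log2(p) + (1-p)*log2((1-p)/(N-1)): bits conveyed
    per selection from N choices with success probability p, under the
    uniformity assumptions mentioned in the text."""
    return log2(N) + p * log2(p) + (1 - p) * log2((1 - p) / (N - 1))

# one decision every 2.1 s, 2AFC (N = 2) with 3.5% error:
rate = 60.0 / 2.1 * bits_per_selection(2, 1 - 0.035)   # ~22 bits/min
```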
For Figure 5, a classifier was trained as described above and subsequently applied to windows sliding over unseen test samples, yielding 'traces' of graded classifier outputs. After doing this for several train/test set splits, the borders of the shaded tubes are calculated as the 5 and 95 percentile values of those traces, thin lines are at the 10 and 90 percentiles, and the thick line indicates the median.

[Figure 5: Graded classifier output for left/right distinctions.]

[Figure 6: Graded classifier output for movement detection in endogenous brain signals.]

At t = −100 ms the median for right events in Figure 5 is approximately 0.9, i.e., applying the classifier to right events from the test set yielded in 50% of the cases an output greater than 0.9 (and in 50% an output less than 0.9). The corresponding 10-percentile line is at 0.25, which means that the output for 90% of the right events was greater than 0.25. The second classifier (Figure 6) was trained for class 'movement' on all trials with jitters as described above and for class 'rest' on multiple windows between the keystrokes. The preprocessing and classification procedure was the same as for left vs. right.

The classifier in Figure 5 shows a pronounced separation during the movement (preparation and execution) period. In other regions there is an overlap or even crossover of the classifier outputs of the different classes.
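The tube construction just described reduces to percentile statistics over the collected traces; a minimal NumPy sketch follows. The (n_trials, n_times) array layout and the function name are our assumptions, as the paper does not specify its data structures.

```python
import numpy as np

def graded_tube(traces):
    """Percentile summary of graded classifier outputs, as used for the
    tubes in Figures 5 and 6.  traces: (n_trials, n_times) array of
    w^T x + b values from windows sliding over test trials.  Returns the
    median, the 5/95 percentile tube borders and the 10/90 thin lines."""
    pcts = np.percentile(traces, [5, 10, 50, 90, 95], axis=0)
    return {"p05": pcts[0], "p10": pcts[1], "median": pcts[2],
            "p90": pcts[3], "p95": pcts[4]}
```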
From Figure 5 we observe that the left/right classifier alone does not distinguish reliably between 'movement' and 'no movement' by the magnitude of its output, which explains the need for a movement detector. The elevation for the left class is a little less pronounced (e.g., the median is −1 at t = 0 ms compared to 1.2 for right events), which is probably due to the fact that the subject is right-handed. The movement detector in Figure 6 brings out the movement phase while giving (mainly) negative output to the post movement period. This differentiation is not as decisive as desirable, hence further research has to be pursued to improve on it. Nevertheless a pseudo-online BCI run on the recorded data using a combination of the two classifiers gave the very satisfying result of an error rate around 10%. Taking this as a 3 class choice (left, right, none) this corresponds to an information transmission rate of 29 bits/min.

5 Concluding discussion

We gave an outline of our BCI system in the experimental context of voluntary self-paced movements. Our approach has the potential for high bit rates, since (1) it works at a high trial frequency, and (2) classification errors are very low. So far we have used untrained individuals, i.e., improvement can come from appropriate training schemes to shape the brain signals. The two-stage process of first a meta classification of whether a movement is about to take place and then a decision between left/right finger movement is very natural and an important new feature of the proposed system. Furthermore, we reject only 0.6% of the trials due to artifacts, so our approach seems ideally suited for the true, highly noisy feedback BCI scenario.
Finally, the use of state-of-the-art learning machines enables us not only to achieve high decision accuracies, but also, as a by-product of the classification, the few most prominent features that are found match nicely with physiological intuition: the most salient information can be gained between 230–100 ms before the movement, with a focus on the C3/C4 area, i.e., over the motor cortices, cf. Figure 2.
There are clear perspectives for improvement in this BCI approach: our future research activities will therefore focus on (a) projection techniques like ICA, (b) time-series approaches to capture the (non-linear) dynamics of the multivariate EEG signals, and (c) construction of specially adapted kernel functions (SVM or kernel FD) in the spirit of, e.g., [17] to ultimately obtain a BCI feedback system with an even higher bit rate and accuracy.
u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0006\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u0007\n\u000b\n\u000b\n\u000b\n\u000b\n\u000b\n\u000b\n\f\n\f\n\f\n\f\n\f\n\f\n\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\n\u000e\
Acknowledgements. We thank S. Harmeling, M. Kawanabe, J. Kohlmorgen, J. Laub, S. Mika, G. Rätsch, R. Vigário and A. Ziehe for helpful discussions.

References

[1] J. J. Vidal, "Toward direct brain-computer communication", Annu. Rev. Biophys., 2: 157–180, 1973.

[2] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed", Nature, 398: 297–298, 1999.

[3] B. O. Peters, G. Pfurtscheller, and H. Flyvbjerg, "Automatic Differentiation of Multichannel EEG Signals", IEEE Trans. Biomed. Eng., 48(1): 111–116, 2001.

[4] J. R. Wolpaw, D. J. McFarland, and T. M. Vaughan, "Brain-Computer Interface Research at the Wadsworth Center", IEEE Trans. Rehab. Eng., 8(2): 222–226, 2000.

[5] W. D. Penny, S. J. Roberts, E. A. Curran, and M. J. Stokes, "EEG-based communication: a pattern recognition approach", IEEE Trans. Rehab. Eng., 8(2): 214–215, 2000.

[6] J. D. Bayliss and D. H. Ballard, "Recognizing Evoked Potentials in a Virtual Environment", in: S. A. Solla, T. K. Leen, and K.-R. Müller, eds., Advances in Neural Information Processing Systems, vol. 12, 3–9, MIT Press, 2000.

[7] S. Makeig, S. Enghoff, T.-P. Jung, and T. J. Sejnowski, "A Natural Basis for Efficient Brain-Actuated Control", IEEE Trans. Rehab. Eng., 8(2): 208–211, 2000.

[8] W. Lang, O. Zilch, C. Koska, G. Lindinger, and L. Deecke, "Negative cortical DC shifts preceding and accompanying simple and complex sequential movements", Exp. Brain Res., 74(1): 99–104, 1989.

[9] R. Q. Cui, D. Huter, W. Lang, and L. Deecke, "Neuroimage of voluntary movement: topography of the Bereitschaftspotential, a 64-channel DC current source density study", Neuroimage, 9(1): 124–134, 1999.

[10] R. Beisteiner, P. Hollinger, G. Lindinger, W. Lang, and A. Berthoz, "Mental representations of movements. Brain potentials associated with imagination of hand movements", Electroencephalogr. Clin. Neurophysiol., 96(2): 183–193, 1995.

[11] K. P. Bennett and O. L. Mangasarian, "Robust Linear Programming Discrimination of two Linearly Inseparable Sets", Optimization Methods and Software, 1: 23–34, 1992.

[12] G. Rätsch, T. Onoda, and K.-R. Müller, "Soft Margins for AdaBoost", Machine Learning, 42(3): 287–320, 2001.

[13] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley & Sons, 2nd edn., 2001.

[14] S. Mika, G. Rätsch, and K.-R. Müller, "A mathematical programming approach to the Kernel Fisher algorithm", in: T. K. Leen, T. G. Dietterich, and V. Tresp, eds., Advances in Neural Information Processing Systems 13, 591–597, MIT Press, 2001.

[15] "ILOG Solver, ILOG CPLEX 6.5 Reference Manual", 1999.

[16] V. Vapnik, The Nature of Statistical Learning Theory, Springer Verlag, New York, 1995.

[17] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf, "An Introduction to Kernel-Based Learning Algorithms", IEEE Transactions on Neural Networks, 12(2): 181–201, 2001.