{"title": "Unsupervised Color Constancy", "book": "Advances in Neural Information Processing Systems", "page_first": 1327, "page_last": 1334, "abstract": null, "full_text": "Unsupervised Color Constancy\n\nKinh Tieu\n\nArti\ufb01cial Intelligence Laboratory\n\nMassachusetts Institute of Technology\n\nCambridge, MA 02139\ntieu@ai.mit.edu\n\nErik G. Miller\n\nComputer Science Division\n\nUC Berkeley\n\nBerkeley, CA 94720\n\negmil@cs.berkeley.edu\n\nAbstract\n\nIn [1] we introduced a linear statistical model of joint color changes in\nimages due to variation in lighting and certain non-geometric camera pa-\nrameters. We did this by measuring the mappings of colors in one image\nof a scene to colors in another image of the same scene under different\nlighting conditions. Here we increase the \ufb02exibility of this color \ufb02ow\nmodel by allowing \ufb02ow coef\ufb01cients to vary according to a low order\npolynomial over the image. This allows us to better \ufb01t smoothly vary-\ning lighting conditions as well as curved surfaces without endowing our\nmodel with too much capacity. We show results on image matching and\nshadow removal and detection.\n\n1 Introduction\n\nThe number of possible images of an object or scene, even when taken from a single view-\npoint with a \ufb01xed camera, is very large. Light sources, shadows, camera aperture, expo-\nsure time, transducer non-linearities, and camera processing (such as auto-gain-control and\ncolor balancing) can all affect the \ufb01nal image of a scene. These effects have a signi\ufb01cant\nimpact on the images obtained with cameras and hence on image processing algorithms,\noften hampering or eliminating our ability to produce reliable recognition algorithms.\n\nAddressing the variability of images due to these photic parameters has been an important\nproblem in machine vision. 
We distinguish photic parameters from geometric parameters, such as camera orientation or blurring, that affect which parts of the scene a particular pixel represents. We also note that photic parameters are more general than "lighting parameters" and include anything which affects the final RGB values in an image given that the geometric parameters and the objects in the scene have been fixed.\n\nWe present a statistical linear model of color change space that is learned by observing how the colors in static images change jointly under common, naturally occurring lighting changes. Such a model can be used for a number of tasks, including synthesis of images of new objects under different lighting conditions, image matching, and shadow detection. Results for each of these tasks will be reported.\n\nSeveral aspects of our model merit discussion. First, it is obtained from video data in a completely unsupervised fashion. The model uses no prior knowledge of lighting conditions, surface reflectances, or other parameters during data collection and modeling. It also has no built-in knowledge of the physics of image acquisition or "typical" image color changes, such as brightness changes. Second, it is a single global model and does not need to be re-estimated for new objects or scenes. While it may not apply to all scenes equally well, it is a model of frequently occurring joint color changes, which is meant to apply to all scenes. Third, while our model is linear in color change space, each joint color change that we model (a 3-D vector field) is completely arbitrary, and is not itself restricted to being linear. This gives us great modeling power, while capacity is controlled through the number of basis fields allowed.\n\nAfter discussing previous work in Section 2, we introduce the color flow model and how it is obtained from observations in Section 3. 
In Section 4, we show how the model and a single observed image can be used to generate a large family of related images. We also give an efficient procedure for finding the best fit of the model to the difference between two images. In Section 5 we give preliminary results for image matching (object recognition) and shadow detection.\n\n2 Previous work\n\nThe color constancy literature contains a large body of work on estimating surface reflectances and various photic parameters from images. A common approach is to use linear models of reflectance and illuminant spectra [2]. Gray world algorithms [3] assume the average reflectance of all the surfaces in a scene is gray. White world algorithms [4] assume the brightest pixel corresponds to a scene point with maximal reflectance. Brainard and Freeman attacked this problem probabilistically [5] by defining prior distributions on particular illuminants and surfaces. They used a new, maximum local mass estimator to choose a single best estimate of the illuminant and surface.\n\nAnother technique is to estimate the relative illuminant, or mapping of colors under an unknown illuminant to a canonical one. Color gamut mapping [6] uses the convex hull of all achievable RGB values to represent an illuminant. The intersection of the mappings for each pixel in an image is used to choose a "best" mapping. [7] trained a back-propagation multi-layer neural network to estimate the parameters of a linear color mapping. The approach in [8] works in the log color spectra space, where the effect of a relative illuminant is a set of constant shifts in the scalar coefficients of linear models for the image colors and illuminant. 
The shifts are computed as differences between the modes of the distribution of coefficients of randomly selected pixels of some set of representative colors.\n\n[9] bypasses the need to predict specific scene properties by proving that the set of images of a gray Lambertian convex object under all lighting conditions forms a convex cone.1 We wanted a model which, based upon a single image (instead of the three required by [9]), could make useful predictions about other images of the same scene. This work is in the same spirit, although we use a statistical method rather than a geometric one.\n\n3 Color flows\n\nIn the following, let C = {(r, g, b)^T ∈ R^3 : 0 ≤ r ≤ 255, 0 ≤ g ≤ 255, 0 ≤ b ≤ 255} be the set of all possible observable image color 3-vectors. Let the vector-valued color of an image pixel p be denoted by c(p) ∈ C.\n\nSuppose we are given two P-pixel RGB color images I1 and I2 of the same scene taken under two different photic parameters θ1 and θ2 (the images are registered). Each pair of corresponding image pixels p_1^k and p_2^k, 1 ≤ k ≤ P, in the two images represents a single color mapping c(p_1^k) ↦ c(p_2^k) that is conveniently represented by the vector difference:\n\nd(p_1^k, p_2^k) = c(p_2^k) − c(p_1^k).   (1)\n\nBy computing P vector differences (one for each pair of pixels) and placing each at the point c(p_1^k) in color space C, we have a partially observed color flow:\n\nΦ′(c(p_1^k)) = d(p_1^k, p_2^k),  1 ≤ k ≤ P,   (2)\n\ndefined at points in C for which there are colors in image I1.\n\n1 This result depends upon the important assumption that the camera, including the transducers, the aperture, and the lens, introduces no non-linearities into the system. The authors' results on color images also do not address the issue of metamers, and assume that light is composed of only the wavelengths red, green, and blue.\n\nFigure 1: Matching non-linear color changes. b is the result of squaring the value of a (in HSV) and re-normalizing it to 255. c-f are attempts to match b with a using four different algorithms. Our algorithm (f) was the only one to capture the non-linearity.\n\nTo obtain a full color flow (i.e. a vector field Φ defined at all points in C) from a partially observed color flow Φ′, we must address two issues. First, there will be many points in C at which no vector difference is defined. Second, there may be multiple pixels of a particular color in image I1 that are mapped to different colors in image I2. We use a radial basis function estimator which defines the flow at a color point (r, g, b)^T as the weighted proximity-based average of nearby observed "flow vectors". We found empirically that σ² = 16 (with colors on a 0–255 scale) worked well. Note that color flows are defined so that a color point with only a single nearby neighbor will inherit a flow vector that is nearly parallel to its neighbor. The idea is that if a particular color, under a photic parameter change θ1 ↦ θ2, is observed to get a little bit darker and a little bit bluer, for example, then its neighbors in color space are also defined to exhibit this behavior.\n\n3.1 Structure in the space of color flows\n\nConsider a flat Lambertian surface that may have different reflectances as a function of the wavelength. 
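The flow construction above (Equations 1 and 2 plus the radial basis estimator with σ² = 16) can be sketched in code. This is our own illustrative sketch, not the authors' implementation: the grid resolution and the Gaussian form of the proximity weighting are assumptions.

```python
import numpy as np

def color_flow(img1, img2, grid=16, sigma2=16.0):
    """Estimate a dense color flow Phi from a registered image pair.

    img1, img2: (P, 3) float arrays of RGB values in [0, 255].
    Returns (centers, flow): grid**3 sample points in color space and the
    flow vector at each, computed as a proximity-weighted average of the
    observed per-pixel color differences (Eqs. 1-2 plus the RBF estimator).
    """
    diffs = img2 - img1                        # d(p) = c2(p) - c1(p), Eq. 1
    axis = np.linspace(0.0, 255.0, grid)
    centers = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"),
                       axis=-1).reshape(-1, 3)
    # Squared distance from every grid point to every observed color.
    d2 = ((centers[:, None, :] - img1[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma2))           # Gaussian proximity weights
    w /= w.sum(axis=1, keepdims=True) + 1e-12  # normalize per grid point
    return centers, w @ diffs                  # weighted average of diffs
```

Because the weights are normalized, an isolated observation propagates a nearly parallel flow vector to its color-space neighbors, matching the behavior described above.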
While in principle it is possible for a change in lighting to map any color from such a surface to any other color independently of all other colors,2 we know from experience that many such joint maps are unlikely. This suggests that while the marginal distribution of mappings for a particular color is broadly distributed, the space of possible joint color maps (i.e., color flows) is much more compact.3\n\nIn learning a statistical model of color flows, many common color flows can be anticipated, such as ones that make colors a little darker, lighter, or more red. These types of flows can be well modeled with a simple global 3×3 matrix A that maps a color c1 in image I1 to a color c2 in image I2 via\n\nc2 = A c1.   (3)\n\nHowever, there are many effects which linear maps cannot model. Perhaps the most significant is the combination of a large brightness change coupled with a non-linear gain-control adjustment or brightness re-normalization by the camera. Such photic changes will tend to leave the bright and dim parts of the image alone, while spreading the central colors of color space toward the margins.\n\n2 By carefully choosing properties such as the surface reflectance of a point as a function of wavelength and lighting, any mapping Φ can, in principle, be observed even on a flat Lambertian surface. However, the metamerism which would cause such effects is uncommon in practice [10, 11].\n\n3 We will address below the significant issue of non-flat surfaces and shadows, which can cause highly "incoherent" maps.\n\nFigure 2: Evidence of non-linear color changes. The first two images are of the top and side of a box covered with multi-colored paper. The quotient image is shown next. The rightmost image is an ideal quotient image, corresponding to a linear lighting model.\n\nFigure 3: Effects of the first three eigenflows. See text.\n\nFor a linear imaging process, the ratio of the brightnesses of two images, or quotient image [12], should vary smoothly except at surface normal boundaries. However, as shown in Figure 2, the quotient image is a function not only of surface normal, but also of albedo: direct evidence of a non-linear imaging process. Another pair of images exhibiting a non-linear color flow is shown in Figures 1a and b. Notice that the brighter areas of the original image get brighter and the darker portions get darker.\n\n3.2 Color eigenflows\n\nWe wanted to capture the structure in color flow space by observing real-world data in an unsupervised fashion. A one square meter color palette was printed on standard non-glossy plotter paper using every color that could be produced by a Hewlett Packard DesignJet 650C. The poster was mounted on a wall in our office so that it was in the direct line of overhead lights and computer monitors but not the single office window. An inexpensive video camera (the PC-75WR, Supercircuits, Inc.) with auto-gain-control was aimed at the poster so that the poster occupied about 95% of the field of view.\n\nImages of the poster were captured using the video camera under a wide variety of lighting conditions, including various intervals during sunrise, sunset, at midday, and with various combinations of office lights and outdoor lighting (controlled by adjusting blinds). People used the office during the acquisition process as well, thus affecting the ambient lighting conditions. 
It is important to note that a variety of non-linear normalization mechanisms built into the camera were operating during this process.\n\nWe chose image pairs I^j = (I^j_1, I^j_2), 1 ≤ j ≤ 800, by randomly and independently selecting individual images from the set of raw images. Each image pair was then used to estimate a full color flow Φ(I^j). We used 4096 distinct RGB colors (equally spaced in RGB space), so Φ(I^j) was represented by a vector of 3 × 4096 = 12288 components.\n\nWe modeled the space of color flows using principal components analysis (PCA) because: 1) the flows are well represented (in an L2 sense) by a small number of principal components, and 2) finding the optimal description of a difference image in terms of color flows was computationally efficient using this representation (see Section 4). We call the principal components of the color flow data "color eigenflows", or just eigenflows,4 for short. We emphasize that these principal components of color flows have nothing to do with the distribution of colors in images, but only model the distribution of changes in color. This is a key and potentially confusing point. Our work is very different from approaches that compute principal components in the intensity or color space itself [14, 15]. Perhaps the most important difference is that our model is a global model for all images, while the above methods are models only for a particular set of images, such as faces.\n\n4 PCA has been applied to motion vector fields [13], and these have also been termed "eigenflows".\n\nFigure 4: Image matching. Top row: original images. Bottom row: best approximation to original images using eigenflows and the source image a. Reconstruction errors (RMS, per pixel component) for the four methods (color flow, linear, diagonal, gray world) are shown in b.\n\n4 Using color flows to synthesize novel images\n\nHow do we generate a new image from a source image and a color flow Φ? For each pixel p in the new image, its color c′(p) can be computed as\n\nc′(p) = c(p) + αΦ(ĉ(p)),   (4)\n\nwhere c(p) is the color in the source image and α is a scalar multiplier that represents the "quantity of flow". ĉ(p) is interpreted to be the color vector closest to c(p) (in color space) at which Φ has been computed. RGB values are clipped to 0–255.\n\nFigure 3 shows the effect of the first three eigenflows on an image of a face. The original image is in the middle of each row while the other images show the application of each eigenflow with α values between ±4 standard deviations. The first eigenflow (top row) represents a generic brightness change that could probably be represented well with a linear model. Notice, however, the third row. Moving right from the middle image, the contrast grows. The shadowed side of the face grows darker while the lighted part of the face grows lighter. This effect cannot be achieved with a simple matrix multiplication as given in Equation 3. It is precisely these types of non-linear flows we wish to model.\n\nWe stress that the eigenflows were only computed once (on the color palette data), and that they were applied to the face image without any knowledge of the parameters under which the face image was taken.\n\n4.1 Flowing one image to another\n\nSuppose we have two images and we pose the question of whether they are images of the same object or scene. 
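Before turning to that question, the eigenflow computation (Section 3.2) and the synthesis rule of Equation 4 can be sketched as follows. This is our own sketch, not the authors' code; the SVD route to PCA and the array shapes are assumptions.

```python
import numpy as np

def eigenflows(flows, n_components=30):
    """PCA on a stack of flattened color flows (one vector per image pair).

    flows: (J, D) array, each row a full flow (D = 3 * 4096 in the paper).
    Returns the mean flow and the top principal directions ("eigenflows"),
    obtained from an SVD of the centered data matrix.
    """
    mean = flows.mean(axis=0)
    _, _, vt = np.linalg.svd(flows - mean, full_matrices=False)
    return mean, vt[:n_components]

def apply_flow(image, flow_field, centers, alpha=1.0):
    """Equation 4: c'(p) = c(p) + alpha * Phi(c_hat(p)), clipped to 0-255.

    flow_field: (K, 3) flow vectors defined at the K color points `centers`;
    c_hat(p) is the sampled color nearest to c(p).
    """
    pix = image.reshape(-1, 3)
    nearest = ((pix[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    out = pix + alpha * flow_field[nearest]
    return np.clip(out, 0.0, 255.0).reshape(image.shape)
```

Scaling α between ±4 standard deviations of a component's coefficient reproduces the kind of sweep shown in Figure 3.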
We suggest that if we can "flow" one image to another then the images are likely to be of the same scene.\n\nLet us treat an image I as a function that takes a color flow and returns a difference image D by placing at each (x, y) pixel in D the color change vector Φ(c(p_{x,y})). The difference image basis for I and a set of eigenflows Ψi, 1 ≤ i ≤ E, is Di = I(Ψi). The set of images S that can be formed using a source image and a set of eigenflows is S = {S : S = I + Σ_{i=1}^{E} γi Di}, where the γi's are scalars, and here I is just an image, and not a function. In our experiments, we used E = 30 of the top eigenvectors.\n\nWe can only flow image I1 to another image I2 if it is possible to represent the difference image as a linear combination of the Di's, i.e. if I2 ∈ S. We find the optimal (in the least-squares sense) γi's by solving the system\n\nD = Σ_{i=1}^{E} γi Di,   (5)\n\nusing the pseudo-inverse, where D = I2 − I1. The error residual represents a match score for I1 and I2. We point out again that this analysis ignores clipping effects. While clipping can only reduce the error between a synthetic image and a target image, it may change which solution is optimal in some cases.\n\nFigure 5: Modeling lighting changes with color flows. a. Image with strong shadow. b. Same image under more uniform lighting conditions. c. Flow from a to b using eigenflows. d. Flow from a to b using linear. Evaluating the capacity of the color flow model: e. Mirror image of b. f. Failure to flow b to e implies that the model is not overparameterized.\n\n5 Experiments\n\n5.1 Image matching\n\nOne use of the color change model is for image matching. 
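The fit of Equation 5 can be sketched by stacking the difference-image basis as columns of a matrix and solving a linear least-squares problem. This is a sketch under our own assumptions; the paper only states that the pseudo-inverse is used, and the nearest-color lookup for evaluating each eigenflow at a pixel is our simplification.

```python
import numpy as np

def fit_flow_coefficients(I1, I2, eigenflow_fields, centers):
    """Best least-squares gammas for D = sum_i gamma_i D_i (Eq. 5).

    eigenflow_fields: list of (K, 3) arrays, eigenflow Psi_i sampled at the
    K color points `centers`. Each basis difference image D_i places Psi_i's
    change vector at every pixel of I1; we solve for gamma over all of them.
    Returns (gamma, residual); the residual's norm is the match score.
    """
    pix = I1.reshape(-1, 3)
    nearest = ((pix[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    # One column of A per eigenflow: its difference image D_i, flattened.
    A = np.stack([f[nearest].ravel() for f in eigenflow_fields], axis=1)
    d = (I2 - I1).reshape(-1)
    gamma, *_ = np.linalg.lstsq(A, d, rcond=None)
    return gamma, d - A @ gamma
```

As the text notes, this ignores clipping: the synthesized image I1 + Σ γi Di may leave [0, 255] even when the least-squares fit is exact.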
An ideal system would flow matching images with zero error, and have large errors for non-matching images.\n\nWe first examined our ability to flow a source image to a matching target image under different photic parameters. We compared our system to 3 other commonly used methods: linear, diagonal, and gray world. The linear method finds the matrix A in Equation 3 that minimizes the L2 error between the synthetic and target images; diagonal does the same with a diagonal A; gray world linearly matches the mean R, G, B values of the synthetic and target images. While our goal was to reduce the numerical difference between two images using flows, it is instructive to examine one example that was particularly visually compelling, shown in Figure 1.\n\nIn a second experiment (Figure 4), we matched images of a face taken under various camera parameters but with constant lighting. Color flows outperforms the other methods in all but one task, on which it was second.\n\n5.2 Local flows\n\nIn another test, the source and target images were taken under very different lighting conditions. Furthermore, shadowing effects and lighting direction changed between the two images. None of the methods could handle these effects when applied globally. Thus we repeatedly applied each method on small patches of the image. Our method again performed the best, with an RMS error of 13.8 per pixel component, compared with errors of 17.3, 20.1, and 20.6 for the other methods. Figure 5 shows obvious visual artifacts with the linear method, while our method seems to have produced a much better synthetic image, especially in the shadow region at the edge of the poster.\n\nFigure 6: Backgrounding with color flows. a A background image. b A new object and shadow have appeared. 
c For each of the two regions (from background subtraction), a "flow" was done between the original image and the new image based on the pixels in each region. d The color flow of the original image using the eigenflow coefficients recovered from the shadow region. The color flow using the coefficients from the non-shadow region is unable to give a reasonable reconstruction of the new image.\n\nSynthesis on patches of images greatly increases the capacity of the model. We performed one experiment to measure the over-fitting of our method versus the others by trying to flow an original image to its reflection (Figure 5). The RMS error per pixel component was 33.2 for our method versus 41.5, 47.3, and 48.7 for the other methods. Note that while our method had lower error (which is undesirable, since these are non-matching images), there was still a significant spread between matching images and non-matching images. We believe we can improve differentiation between matching and non-matching image pairs by assigning a cost to the change in γi across each image patch. For non-matching images, we would expect the γi's to vary rapidly to accommodate the changing image. For matching images, sharp changes would only be necessary at shadow boundaries or changes in the surface orientation relative to directional light sources.\n\n5.3 Shadows\n\nShadows confuse tracking algorithms [16], backgrounding schemes, and object recognition algorithms. For example, shadows can have a dramatic effect on the magnitude of difference images, despite the fact that no "new objects" have entered a scene. Shadows can also move across an image and appear as moving objects. 
Many of these problems could be eliminated if we could recognize that a particular region of an image is equivalent to a previously seen version of the scene, but under a different lighting.\n\nFigure 6 shows how color flows may be used to distinguish between a new object and a shadow by flowing both regions. A constant color flow across an entire region may not model the image change well. However, we can extend our basic model to allow linearly or quadratically (or other low order polynomially) varying fields of eigenflow coefficients. That is, we can find the best least squares fit of the difference image allowing our γ estimates to vary linearly or quadratically over the image. We implemented this technique by computing flows γ_{x,y} between corresponding image patches (indexed by x and y), and then minimizing the following form:\n\narg min_M Σ_{x,y} (γ_{x,y} − M c_{x,y})^T Σ_{x,y}^{-1} (γ_{x,y} − M c_{x,y}).   (6)\n\nHere, each c_{x,y} is a vector polynomial of the form [x y 1]^T for the linear case and [x² xy y² x y 1]^T for the quadratic case. M is an E×3 matrix in the linear case and an E×6 matrix in the quadratic case. The Σ_{x,y}'s are the error covariances in the estimates of the γ_{x,y}'s for each patch.\n\nAllowing the γ's to vary over the image greatly increases the capacity of a matcher, but by limiting this variation to linear or quadratic variation, the capacity is still not able to qualitatively match "non-matching" images. 
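The minimization in Equation 6 can be sketched as follows. This is our own simplified sketch: it assumes scalar per-patch covariances Σ_{x,y} = σ²_{x,y} I (so the problem reduces to a weighted linear least squares), which is a special case of the general form in the paper.

```python
import numpy as np

def fit_coefficient_field(gammas, coords, weights=None, order=1):
    """Fit M minimizing Eq. 6 under scalar per-patch covariances.

    gammas: (N, E) flow coefficients, one row per image patch.
    coords: (N, 2) patch positions (x, y); weights: optional (N,) inverse
    variances. order=1 gives the linear basis [x y 1], order=2 the
    quadratic basis [x^2 xy y^2 x y 1].
    Returns M of shape (E, 3) or (E, 6) so that gamma(x, y) ~ M c_{x,y}.
    """
    x, y = coords[:, 0].astype(float), coords[:, 1].astype(float)
    if order == 1:
        C = np.stack([x, y, np.ones_like(x)], axis=1)
    else:
        C = np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)
    w = np.ones(len(C)) if weights is None else np.asarray(weights, float)
    sw = np.sqrt(w)[:, None]
    # Weighted least squares: scale both sides, solve all E rows at once.
    M, *_ = np.linalg.lstsq(sw * C, sw * gammas, rcond=None)
    return M.T
```

Setting order=2 switches c_{x,y} to the six-term quadratic basis used in the third version of the experiment below.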
Note that this smooth variation in eigenflow coefficients can model either a nearby light source or a smoothly curving surface, since either of these conditions will result in a smoothly varying lighting change.\n\nTable 1: Error residuals for shadow and non-shadow regions after color flows.\n\nregion      | constant | linear | quadratic\nshadow      |     36.5 |   12.5 |      12.0\nnon-shadow  |    110.6 |   64.8 |      59.8\n\nWe consider three versions of the experiment: 1) a single vector of flow coefficients, 2) linearly varying γ's, 3) quadratically varying γ's. In each case, the residual error for the shadow region is much lower than for the non-shadow region (Table 1).\n\n5.4 Conclusions\n\nExcept for the synthesis experiments, most of the experiments in this paper are preliminary and only a proof of concept. Much larger experiments need to be performed to establish the utility of the color change model for particular applications. However, since the color change model represents a compact description of lighting changes, including non-linearities, we are optimistic about these applications.\n\nReferences\n\n[1] E. Miller and K. Tieu. Color eigenflows: Statistical modeling of joint color changes. In IEEE ICCV, volume 1, pages 607–614, 2001.\n\n[2] D. H. Marimont and B. A. Wandell. Linear models of surface and illuminant spectra. J. Opt. Soc. Amer., 11, 1992.\n\n[3] G. Buchsbaum. A spatial processor model for object color perception. J. Franklin Inst., 310, 1980.\n\n[4] J. J. McCann, J. A. Hall, and E. H. Land. Color mondrian experiments: The study of average spectral distributions. J. Opt. Soc. Amer., A(67), 1977.\n\n[5] D. H. Brainard and W. T. Freeman. Bayesian color constancy. J. Opt. Soc. Amer., 14(7):1393–1411, 1997.\n\n[6] D. A. Forsyth. A novel algorithm for color constancy. IJCV, 5(1), 1990.\n\n[7] V. C. Cardei, B. V. Funt, and K. Barnard. Modeling color constancy with neural networks. In Proc. Int. Conf. 
Vis., Recog., and Action: Neural Models of Mind and Machine, 1997.\n\n[8] R. Lenz and P. Meer. Illumination independent color image representation using log-eigenspectra. Technical Report LiTH-ISY-R-1947, Linköping University, April 1997.\n\n[9] P. N. Belhumeur and D. Kriegman. What is the set of images of an object under all possible illumination conditions? IJCV, 28(3):1–16, 1998.\n\n[10] W. S. Stiles, G. Wyszecki, and N. Ohta. Counting metameric object-color stimuli using frequency limited spectral reflectance functions. J. Opt. Soc. Amer., 67(6), 1977.\n\n[11] L. T. Maloney. Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Amer., A1, 1986.\n\n[12] A. Shashua and R. Riklin-Raviv. The quotient image: Class-based re-rendering and recognition with varying illuminations. IEEE PAMI, 3(2):129–130, 2001.\n\n[13] J. J. Lien. Automatic Recognition of Facial Expressions Using Hidden Markov Models and Estimation of Expression Intensity. PhD thesis, Carnegie Mellon University, 1998.\n\n[14] M. Turk and A. Pentland. Eigenfaces for recognition. J. Cog. Neuro., 3(1):71–86, 1991.\n\n[15] M. Soriano, E. Marszalec, and M. Pietikainen. Color correction of face images under different illuminants by rgb eigenfaces. In Proc. 2nd Int. Conf. on Audio- and Video-Based Biometric Person Authentication, pages 148–153, 1999.\n\n[16] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers. Wallflower: Principles and practice of background maintenance. In IEEE CVPR, pages 255–261, 1999.\n"}, "award": [], "sourceid": 2323, "authors": [{"given_name": "Kinh", "family_name": "Tieu", "institution": null}, {"given_name": "Erik", "family_name": "Miller", "institution": null}]}