Learning a Color Algorithm from Examples

Part of Neural Information Processing Systems 0 (NIPS 1987)



Tomaso A. Poggio, Anya Hurlbert


A lightness algorithm that separates surface reflectance from illumination in a Mondrian world is synthesized automatically from a set of examples, pairs of input (image irradiance) and desired output (surface reflectance). The algorithm, which resembles a new lightness algorithm recently proposed by Land, is approximately equivalent to filtering the image through a center-surround receptive field in individual chromatic channels. The synthesizing technique, optimal linear estimation, requires only one assumption, that the operator that transforms input into output is linear. This assumption is true for a certain class of early vision algorithms that may therefore be synthesized in a similar way from examples. Other methods of synthesizing algorithms from examples, or "learning", such as backpropagation, do not yield a significantly different or better lightness algorithm in the Mondrian world. The linear estimation and backpropagation techniques both produce simultaneous brightness contrast effects.

The problems that a visual system must solve in decoding two-dimensional images into three-dimensional scenes (inverse optics problems) are difficult: the information supplied by an image is not sufficient by itself to specify a unique scene. To reduce the number of possible interpretations of images, visual systems, whether artificial or biological, must make use of natural constraints, assumptions about the physical properties of surfaces and lights. Computational vision scientists have derived effective solutions for some inverse optics problems (such as computing depth from binocular disparity) by determining the appropriate natural constraints and embedding them in algorithms. How might a visual system discover and exploit natural constraints on its own? We address a simpler question: Given only a set of examples of input images and desired output solutions, can a visual system synthesize, or "learn", the algorithm that converts input to output? We find that an algorithm for computing color in a restricted world can be constructed from examples using standard techniques of optimal linear estimation.

The computation of color is a prime example of the difficult problems of inverse optics. We do not merely discriminate between different wavelengths of light; we assign

© American Institute of Physics 1988


roughly constant colors to objects even though the light signals they send to our eyes change as the illumination varies across space and chromatic spectrum. The computational goal underlying color constancy seems to be to extract the invariant surface spectral reflectance properties from the image irradiance, in which reflectance and illumination are mixed 1.

Lightness algorithms 2-8, pioneered by Land, assume that the color of an object can be specified by its lightness, or relative surface reflectance, in each of three independent chromatic channels, and that lightness is computed in the same way in each channel. Computing color is thereby reduced to extracting surface reflectance from the image irradiance in a single chromatic channel.

The image irradiance, s', is proportional to the product of the illumination intensity e' and the surface reflectance r' in that channel:

s'(x, y) = r'(x, y) e'(x, y).     (1)

This form of the image intensity equation is true for a Lambertian reflectance model, in which the irradiance s' has no specular components, and for appropriately chosen color channels 9. Taking the logarithm of both sides converts it to a sum:

s(x, y) = r(x, y) + e(x, y),     (2)

where s = log(s'), r = log(r') and e = log(e').

Given s(x, y) alone, the problem of solving Eq. 2 for r(x, y) is underconstrained. Lightness algorithms constrain the problem by restricting their domain to a world of Mondrians, two-dimensional surfaces covered with patches of random colors 2, and by exploiting two constraints in that world: (i) r'(x, y) is uniform within patches but has sharp discontinuities at edges between patches, and (ii) e'(x, y) varies smoothly across the Mondrian. Under these constraints, lightness algorithms can recover a good approximation to r(x, y) and so can recover lightness triplets that label roughly constant colors 10.
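The two Mondrian constraints can be made concrete with a small synthetic example. The sketch below (an illustration, not the authors' code; all names and parameter values are our own choices) builds a one-dimensional log-domain scan line in which the reflectance r is piecewise constant with sharp edges and the illumination e varies smoothly, so that the observed signal obeys Eq. 2, s = r + e:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scanline(n=128, n_patches=6):
    """Generate one Mondrian scan line (s, r) satisfying the two constraints."""
    # (i) piecewise-constant log-reflectance: random patch values,
    #     with sharp discontinuities at random edge locations
    edges = np.sort(rng.choice(np.arange(1, n), n_patches - 1, replace=False))
    lengths = np.diff(np.concatenate(([0], edges, [n])))
    r = np.repeat(rng.uniform(-1.0, 0.0, n_patches), lengths)
    # (ii) smoothly varying log-illumination (here, a gentle linear gradient)
    e = np.linspace(-0.5, 0.5, n)
    return r + e, r  # observed signal s = r + e, and the desired output r

s, r = make_scanline()
```

Pairs (s, r) of this kind are exactly the input/desired-output examples from which a lightness operator could be estimated.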

We ask whether it is possible to synthesize from examples an algorithm that extracts reflectance from image irradiance, and whether the synthesized algorithm will resemble existing lightness algorithms derived from an explicit analysis of the constraints. We make one assumption: that the operator that transforms irradiance into reflectance is linear. Under that assumption, motivated by considerations discussed later, we use optimal linear estimation techniques to synthesize an operator from examples. The examples are pairs of images: an input image of a Mondrian under illumination that varies smoothly across space, and its desired output image that displays the reflectance of the Mondrian without the illumination. The technique finds the linear estimator that best maps input into desired output, in the least-squares sense.
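A minimal sketch of the synthesis step, under the paper's one assumption that the input-to-output map is linear: stack the example input vectors as columns of a matrix S and the desired outputs as columns of R, and the least-squares-optimal linear operator is L = R S⁺, where S⁺ is the Moore-Penrose pseudoinverse. The setup below uses synthetic data with a known linear operator to check the recovery; all variable names and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 32, 200                         # scan-line length, number of example pairs
L_true = rng.standard_normal((n, n))   # hidden linear "lightness" operator
S = rng.standard_normal((n, m))        # input examples (log irradiance), as columns
R = L_true @ S                         # desired outputs (log reflectance)

# Optimal linear estimator in the least-squares sense: L = R S^+
L_est = R @ np.linalg.pinv(S)
```

With more examples than vector components (m > n) and generic inputs, the estimator recovers the underlying operator; with noisy or deficient data it returns the least-squares-best linear map.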

For computational convenience we use one-dimensional "training vectors" that represent vertical scan lines across the Mondrian images (Fig. 1). We generate many