{"title": "Local Phase Coherence and the Perception of Blur", "book": "Advances in Neural Information Processing Systems", "page_first": 1435, "page_last": 1442, "abstract": "", "full_text": "Local Phase Coherence\n\nand the Perception of Blur\n\nZhou Wang and Eero P. Simoncelli\n\nHoward Hughes Medical Institute\n\nCenter for Neural Science and Courant Institute of Mathematical Sciences\n\nNew York University, New York, NY 10003\n\nzhouwang@ieee.org, eero.simoncelli@nyu.edu\n\nHumans are able to detect blurring of visual images, but the mechanism\nby which they do so is not clear. A traditional view is that a blurred\nimage looks \u201cunnatural\u201d because of the reduction in energy (either glob-\nally or locally) at high frequencies. In this paper, we propose that the\ndisruption of local phase can provide an alternative explanation for blur\nperception. We show that precisely localized features such as step edges\nresult in strong local phase coherence structures across scale and space in\nthe complex wavelet transform domain, and blurring causes loss of such\nphase coherence. We propose a technique for coarse-to-\ufb01ne phase pre-\ndiction of wavelet coef\ufb01cients, and observe that (1) such predictions are\nhighly effective in natural images, (2) phase coherence increases with the\nstrength of image features, and (3) blurring disrupts the phase coherence\nrelationship in images. We thus lay the groundwork for a new theory of\nperceptual blur estimation, as well as a variety of algorithms for restora-\ntion and manipulation of photographic images.\n\n1\n\nIntroduction\n\nBlur is one of the most common forms of image distortion. It can arise from a variety\nof sources, such as atmospheric scatter, lens defocus, optical aberrations of the lens, and\nspatial and temporal sensor integration. Human observers are bothered by blur, and our\nvisual systems are quite good at reporting whether an image appears blurred (or sharpened)\n[1, 2]. 
However, the mechanism by which this is accomplished is not well understood.\n\nClearly, detection of blur requires some model of what constitutes an unblurred image. In recent years, there has been a surge of interest in the modelling of natural images, both for the purpose of improving the performance of image processing and computer vision systems, and for furthering our understanding of biological visual systems. Early statistical models were almost exclusively based on a description of global Fourier power spectra; specifically, image spectra are found to follow a power law [3\u20135]. This model leads to an obvious method of detecting and compensating for blur: blurring usually reduces the energy of high-frequency components, so the power spectrum of a blurry image should fall faster than that of a typical natural image. The standard formulation of the \u201cdeblurring\u201d problem, due to Wiener [6], aims to restore those high-frequency components to their original amplitude. But this proposal is problematic, since individual images show significant variability in their Fourier amplitudes, both in their shape and in the rate at which they fall [1]. In particular, simply reducing the number of sharp features (e.g., edges) in an image can lead to a steeper falloff in the global amplitude spectrum, even though the image will still appear sharp [7]. Nevertheless, the visual system seems able to compensate for this when estimating blur [1, 2, 7].\n\nOver the past two decades, researchers from many communities have converged on the view that images are better represented using bases of multi-scale bandpass oriented filters. These representations, loosely referred to as \u201cwavelets\u201d, are effective at decoupling the high-order statistical features of natural images. 
In addition, they provide the most basic model for neurons in the primary visual cortex of mammals, which are presumably adapted to efficiently represent the visually relevant features of images. Many recent statistical image models in the wavelet domain are based on the amplitudes of the coefficients, and on the relationship between the amplitudes of coefficients in local neighborhoods or across different scales [e.g. 8]. In both human and computer vision, the amplitudes of complex wavelets have been widely used as a mechanism for localizing and representing features [e.g. 9\u201313]. It has also been shown that the relative wavelet amplitude as a function of scale can be used to explain a number of subjective experiments on the perception of blur [7].\n\nIn this paper, we propose the disruption of local phase as an alternative and effective measure for the detection of blur. This seems counterintuitive, because when an image is blurred through convolution with a symmetric linear filter, the phase information in the (global) Fourier transform domain does not change at all. But we show that this is not true for local phase information.\n\nIn previous work, Fourier phase has been found to carry important information about image structures and features [14], and higher-order Fourier statistics have been used to examine the phase structure of natural images [15]. It has been pointed out that at isolated even- and odd-symmetric features such as lines and step edges, the arrival phases of all Fourier harmonics are identical [11, 16]. Phase congruency [11, 17] provides a quantitative measure for the agreement of such a phase alignment pattern. It has also been shown that maximum phase congruency feature detection is equivalent to the maximum local energy model [18]. 
Local phase has been used in a number of machine vision and image processing applications, such as estimation of image motion [19] and disparity [20], description of image textures [21], and recognition of persons using iris patterns [22]. However, the behavior of local phase at different scales in the vicinity of image features, and the means by which blur affects this behavior, have not been deeply investigated.\n\n2 Local Phase Coherence of Isolated Features\n\nWavelet transforms provide a convenient framework for localized representation of signals simultaneously in space and frequency. The wavelets are dilated/contracted and translated versions of a \u201cmother wavelet\u201d w(x). In this paper, we consider symmetric (linear phase) wavelets whose mother wavelets may be written as a modulation of a low-pass filter:\n\nw(x) = g(x) e^{j ω_c x} ,   (1)\n\nwhere ω_c is the center frequency of the modulated band-pass filter, and g(x) is a slowly varying and symmetric function. The family of wavelets derived from the mother wavelet are then\n\nw_{s,p}(x) = (1/√s) w((x - p)/s) = (1/√s) g((x - p)/s) e^{j ω_c (x - p)/s} ,   (2)\n\nwhere s ∈ R+ is the scale factor, and p ∈ R is the translation factor. Considering the fact that g(-x) = g(x), the wavelet transform of a given real signal f(x) can be written as\n\nF(s, p) = ∫ f(x) w*_{s,p}(x) dx = [ f(x) * (1/√s) g(x/s) e^{j ω_c x/s} ]_{x=p} ,   (3)\n\nwhere the integral is over the whole real line. Now assume that the signal f(x) being analyzed is localized near the position x0, and rewrite it using a function f0(x) that satisfies f(x) = f0(x - x0). Using the convolution theorem and the shifting and scaling properties of the Fourier transform, we can write\n\nF(s, p) = (1/2π) ∫ F(ω) √s G(sω - ω_c) e^{jωp} dω\n        = (1/2π) ∫ F0(ω) √s G(sω - ω_c) e^{jω(p - x0)} dω\n        = (1/(2π√s)) ∫ F0(ω/s) G(ω - ω_c) e^{jω(p - x0)/s} dω ,   (4)\n\nwhere F(ω), F0(ω) and G(ω) are the Fourier transforms of f(x), f0(x) and g(x), respectively.\n\nWe now examine how the phase of F(s, p) evolves across space p and scale s. From Eq. (4), we see that the phase of F(s, p) depends strongly on the nature of F0(ω). If F0(ω) is scale-invariant, meaning that\n\nF0(ω/s) = K(s) F0(ω) ,   (5)\n\nwhere K(s) is a real function of s alone, independent of ω, then from Eq. (4) and Eq. (5) we obtain\n\nF(s, p) = (K(s)/(2π√s)) ∫ F0(ω) G(ω - ω_c) e^{jω(p - x0)/s} dω = (K(s)/√s) F(1, x0 + (p - x0)/s) .   (6)\n\nSince both K(s) and s are real, we can write the phase as:\n\nΦ(F(s, p)) = Φ(F(1, x0 + (p - x0)/s)) .   (7)\n\nThis equation suggests a strong phase coherence relationship across scale and space. An illustration is shown in Fig. 1(a), where it can be seen that equal-phase contours in the (s, p) plane form straight lines defined by\n\nx0 + (p - x0)/s = C ,   (8)\n\nwhere C can be any real constant. Further, all these straight lines converge exactly at the location of the feature x0. More generally, the phase at any given scale may be computed from the phase at any other scale by simply rescaling the position axis.\n\nThis phase coherence relationship relies on the scale-invariance property of Eq. (5) of the signal. Analytically, the only type of continuous-spectrum signal that satisfies Eq. (5) follows a power law:\n\nF0(ω) = K0 ω^P .   (9)\n\nIn the spatial domain, the functions f0(x) that satisfy this scale-invariance condition include the step function f0(x) = K(u(x) - 1/2) (where K is a constant and F0(ω) = K/(jω)) and its derivatives, such as the delta function f0(x) = Kδ(x) (where F0(ω) = K). 
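The derivation above can be verified numerically. The following sketch is illustrative rather than the paper's implementation: the Gaussian envelope g and all parameter values (ω_c = 1, σ = 6, signal length 257) are assumptions. It computes F(s, p) of Eq. (3) for a centered step edge by direct convolution and checks the phase relation of Eq. (7) between scales s = 2 and s = 1.

```python
import numpy as np

# Assumed illustrative parameters (not from the paper): a Gaussian envelope g
# with width SIGMA, modulation frequency OMEGA_C, and an odd signal length N
# so that np.convolve(..., mode="same") keeps position p aligned with index.
OMEGA_C, SIGMA, N = 1.0, 6.0, 257

def wavelet_response(f, s):
    """F(s, p) of Eq. (3): convolve f with (1/sqrt(s)) g(x/s) exp(j*OMEGA_C*x/s)."""
    x = np.arange(N) - N // 2
    kernel = (np.exp(-(x / s) ** 2 / (2 * SIGMA ** 2))
              * np.exp(1j * OMEGA_C * x / s) / np.sqrt(s))
    return np.convolve(f, kernel, mode="same")

x0 = N // 2                               # feature location
f = 0.5 * np.sign(np.arange(N) - x0)      # step edge centered exactly at x0

F1 = wavelet_response(f, 1)
F2 = wavelet_response(f, 2)

# Eq. (7): the phase of F(2, x0 + 2k) should match the phase of F(1, x0 + k)
errs = [abs(np.angle(F2[x0 + 2 * k] * np.conj(F1[x0 + k]))) for k in (-4, 2, 4)]
print(max(errs))  # small, up to discretization error
```

Repeating the same check after low-pass filtering the step makes the phase mismatch grow, which is the basis of the blur measure developed in the next section.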
Notice that both of these functions f0(x) are precisely localized in space.\n\nFigure 1(b) shows that this precisely convergent phase behavior is disrupted by blurring. Specifically, if we convolve a sharp feature (e.g., a step edge) with a low-pass filter, the resulting signal will no longer satisfy the scale-invariance property of Eq. (5) or the phase coherence relationship of Eq. (7). Thus, a measure of phase coherence can be used to detect blur. Note that the phase congruency relationship [11, 17], which expresses the alignment of phase at the location of a feature, corresponds to the center (vertical) contour of Fig. 1, which remains intact after blurring. Thus, phase congruency measures [11, 17] provide no information about blur.\n\n[Figure 1: schematic signals (top) and their equal-phase contours in the wavelet (s, p) plane (bottom), with s the scale axis and p the position axis; in (a) the contours converge at the feature location x0, while in (b) they do not.]\n\nFig. 1: Local phase coherence of precisely localized (scale-invariant) features, and the disruption of this coherence in the presence of blur. (a) precisely localized features; (b) blurred features.\n\n3 Phase Prediction in Natural Images\n\nIn this section, we show that if the local image features are precisely localized (such as the delta and step functions), then in the discrete wavelet transform domain the phase of nearby fine-scale coefficients can be well predicted from their coarser-scale parent coefficients. We then examine these phase predictions in both sharp and blurred natural images.\n\n3.1 Coarse-to-fine Phase Prediction\n\nFrom Eq. 
(3), it is straightforward to prove that for f0(x) = Kδ(x),\n\nΦ(F(1, p)) = -ω_c (p - x0) + n1 π ,   (10)\n\nwhere n1 is an integer whose value depends on the value range of ω_c (p - x0) and the sign of K g(p - x0). Using the phase coherence relation of Eq. (7), we have\n\nΦ(F(s, p)) = -ω_c (p - x0)/s + n1 π .   (11)\n\nIt can also be shown that for a step function f0(x) = K[u(x) - 1/2], when g(x) is slowly varying and p is located near the feature location x0,\n\nΦ(F(s, p)) ≈ π/2 - ω_c (p - x0)/s + n2 π ,   (12)\n\nwhere n2 is again an integer.\n\nThe discrete wavelet transform corresponds to a discrete sampling of the continuous wavelet transform F(s, p). A typical sampling grid is illustrated in Fig. 2(a), where between every two adjacent scales the scale factor s doubles and the spatial sampling rate is halved. Now consider three consecutive scales and group the neighboring coefficients {a; b1, b2; c1, c2, c3, c4} as shown in Fig. 2(a).\n\n[Figure 2: discrete sampling grids in the (s, p) plane at scales s = 4, 2, 1; (a) the 1-D coefficient group {a; b1, b2; c1, ..., c4}; (b) the 2-D coefficient group {a; b11, b12, b21, b22; c11, ..., c44}.]\n\nFig. 2: Discrete wavelet transform sampling grid in the continuous wavelet transform domain. (a) 1-D sampling; (b) 2-D sampling.\n\nIt can then be shown that the phases of the finest-scale coefficients {c1, c2, c3, c4} can be well predicted from the coarser-scale coefficients {a, b1, b2}, provided the local phase satisfies the phase coherence relationship. Specifically, the estimated phases Φ̂ for {c1, c2, c3, c4} can be expressed as\n\nΦ̂([c1, c2, c3, c4]^T) = Φ( (a*)^2 · [b1^3, b1^2 b2, b1 b2^2, b2^3]^T ) .   (13)\n\nWe can develop a similar technique for the two-dimensional case. As shown in Fig. 2(b), the phase prediction from the coarser-scale coefficients {a, b11, b12, b21, b22} to the group of finest-scale coefficients {cij} is as follows:\n\nΦ̂({cij}) = Φ( (a*)^2 · [ b11^3        b11^2 b12     b11 b12^2     b12^3\n                         b11^2 b21    b11^2 b22     b11 b12 b22   b12^2 b22\n                         b11 b21^2    b11 b21 b22   b11 b22^2     b12 b22^2\n                         b21^3        b21^2 b22     b21 b22^2     b22^3 ] ) .   (14)\n\n3.2 Image Statistics\n\nWe decompose the images using the \u201csteerable pyramid\u201d [23], a multi-scale wavelet decomposition whose basis functions are spatially localized, oriented, and roughly one octave in bandwidth. A 3-scale, 8-orientation pyramid is calculated for each image, resulting in 26 subbands (24 oriented, plus highpass and lowpass residuals). Using Eq. (14), the phase of each coefficient in the 8 oriented finest-scale subbands is predicted from the phases of its coarser-scale parent and grandparent coefficients, as illustrated in Fig. 2(b). We applied this phase prediction method to a dataset of 1000 high-resolution sharp images as well as their blurred versions, and then examined the errors between the predicted and true phases at the fine scale.\n\nThe summary histograms are shown in Fig. 3. 
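The 1-D prediction of Eq. (13) can be sanity-checked with a few lines of code. This sketch is illustrative, not the authors' code; ω_c, x0, the magnitudes, and the dyadic positions (a at scale 4, position 0; b1, b2 at scale 2, positions 0 and 2; c1..c4 at scale 1, positions 0..3, following Fig. 2(a)) are assumed values. It builds coefficients whose phases follow Eq. (11) exactly and confirms that the prediction recovers the finest-scale phases.

```python
import numpy as np

# Assumed illustrative values (not from the paper): omega_c, x0, magnitudes,
# and the dyadic sampling positions of the coefficient group of Fig. 2(a).
omega_c, x0 = 3 * np.pi / 4, 0.3

def phase(s, p):
    # Eq. (11) with n1 = 0: phase of an ideal localized feature at x0
    return -omega_c * (p - x0) / s

a  = 1.7 * np.exp(1j * phase(4, 0))   # magnitudes are arbitrary;
b1 = 0.9 * np.exp(1j * phase(2, 0))   # only the phases enter the prediction
b2 = 1.2 * np.exp(1j * phase(2, 2))

# Eq. (13): predicted phases of the four finest-scale coefficients c1..c4
pred = np.angle(np.conj(a) ** 2 * np.array([b1**3, b1**2 * b2, b1 * b2**2, b2**3]))
true = np.array([phase(1, p) for p in range(4)])

err = np.abs(np.angle(np.exp(1j * (pred - true))))   # wrapped phase difference
print(err.max())  # effectively zero (float round-off)
```

Because the conjugation and the exponents in (a*)^2 b1^m b2^n combine the coarse phases linearly, the prediction is exact whenever the phases obey Eq. (11), regardless of coefficient magnitudes.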
In order to demonstrate how blurring affects the phase prediction accuracy, in all of these conditional histograms the magnitude axis corresponds to the coefficient magnitudes of the original image, so that the same column in the three histograms corresponds to the same set of coefficients in spatial location.\n\n[Figure 3: (a)-(c) example images; (d)-(f) conditional histograms of phase prediction error (vertical axis, from -π to π) versus original coefficient magnitude; (g) overlaid phase prediction error histograms for the sharp, blurred, and highly blurred images.]\n\nFig. 3: Local phase coherence statistics in sharp and blurred images. (a),(b),(c): example natural, blurred and highly blurred images taken from the test image database of 1000 (512×512, 8 bits/pixel, gray-scale) natural images with a wide variety of contents (humans, animals, plants, landscapes, man-made objects, etc.). Images are cropped to 200×200 for visibility. (d),(e),(f): conditional histograms of phase prediction error as a function of the original coefficient magnitude for the three types of images. Each column of the histograms is scaled individually, such that the largest value of each column is mapped to white. (g): phase prediction error histogram of significant coefficients (magnitude greater than 20).\n\nFrom Fig. 3, we observe that phase coherence is highly effective in natural images: the phase prediction error decreases as the coefficient magnitude increases, so larger coefficients imply stronger local phase coherence. Furthermore, as expected, the blurring process clearly reduces the phase prediction accuracy. 
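This disruption can be reproduced in miniature. The sketch below uses assumed parameters throughout (Gaussian-envelope wavelets with ω_c = 1 and σ = 6, a Gaussian blur of standard deviation 4, and a hand-picked coefficient group; none of these values come from the paper): it applies the Eq. (13) prediction to a sharp step edge and to its blurred version, and compares the wrapped phase prediction errors.

```python
import numpy as np

OMEGA_C, SIGMA, N = 1.0, 6.0, 257   # assumed illustrative parameters

def wavelet_response(f, s):
    """F(s, p) of Eq. (3) with an assumed Gaussian envelope g."""
    x = np.arange(N) - N // 2
    k = (np.exp(-(x / s) ** 2 / (2 * SIGMA ** 2))
         * np.exp(1j * OMEGA_C * x / s) / np.sqrt(s))
    return np.convolve(f, k, mode="same")

def prediction_error(f, x0):
    """Mean wrapped error of the Eq. (13) phase prediction around x0."""
    F1, F2, F4 = (wavelet_response(f, s) for s in (1, 2, 4))
    a, b1, b2 = F4[x0], F2[x0], F2[x0 + 2]   # coefficient group of Fig. 2(a)
    pred = np.angle(np.conj(a) ** 2
                    * np.array([b1**3, b1**2 * b2, b1 * b2**2, b2**3]))
    true = np.angle(F1[x0 : x0 + 4])
    return np.abs(np.angle(np.exp(1j * (pred - true)))).mean()

x0 = N // 2
sharp = 0.5 * np.sign(np.arange(N) - x0)      # step edge centered at x0

gx = np.arange(-16, 17)                       # Gaussian blur kernel, sigma = 4
h = np.exp(-gx ** 2 / (2 * 4.0 ** 2))
h /= h.sum()
blurred = np.convolve(sharp, h, mode="same")

e_sharp = prediction_error(sharp, x0)
e_blur = prediction_error(blurred, x0)
print(e_sharp, e_blur)  # blurring increases the phase prediction error
```

The sharp step is (approximately) scale-invariant, so its prediction error is small; the blurred step is not, and its error is visibly larger, mirroring panels (d)-(f) of Fig. 3.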
We thus hypothesize that it is perhaps this disruption of local phase coherence that the visual system senses as being \u201cunnatural\u201d.\n\n4 Discussion\n\nThis paper proposes a new view of image blur, based on the observation that blur induces distortion of local phase in addition to the widely noted loss of high-frequency energy. We have shown that isolated, precisely localized features create strong local phase coherence, and that blurring disrupts this phase coherence. We have also developed a particular measure of phase coherence based on coarse-to-fine phase prediction, and shown that this measure can serve as an indication of blur in natural images. It remains to be seen whether the visual system detects blur by comparing the relative amplitudes of localized filters at different scales [7], or, alternatively, by comparing the relative spread of local phase across scale and space.\n\nThe coarse-to-fine phase prediction method was developed in order to facilitate examination of phase coherence in real images, but the computations involved bear some resemblance to the behaviors of neurons in the primary visual cortex (area V1) of mammals. First, phase information is measured using pairs of localized bandpass filters in quadrature, as are widely used to describe the receptive field properties of V1 neurons [24]. Second, the responses of these filters must be exponentiated for comparison across different scales, and many recent models of V1 response incorporate such exponentiation [25]. Finally, responses are normalized by the magnitudes of neighboring filter responses; similar \u201cdivisive normalization\u201d mechanisms have been successfully used to account for many nonlinear behaviors of neurons in both the visual and auditory systems [26, 27]. 
Thus, it seems that mammalian visual systems are equipped with the basic computational building blocks needed to process local phase coherence.\n\nThe importance of local phase coherence in blur perception seems intuitively sensible from the perspective of visual function. In particular, the accurate localization of image features is critical to a variety of visual capabilities, including various forms of hyperacuity, stereopsis, and motion estimation. Since the localization of image features depends critically on phase coherence, and blurring disrupts phase coherence, blur would seem to be a particularly disturbing artifact. This perhaps explains the subjective feeling of frustration when confronted with a blurred image that cannot be corrected by visual accommodation.\n\nFor machine vision and image processing applications, we view the results of this paper as an important step towards the incorporation of phase properties into statistical models for images. We believe this is likely to lead to substantial improvements in a variety of applications, such as deblurring or sharpening by phase restoration, denoising, image compression, image quality assessment, and a variety of more creative photographic applications, such as image blending or compositing, reduction of dynamic range, or post-exposure adjustment of depth-of-field.\n\nFurthermore, if we wish to detect the position of an isolated, precisely localized feature from phase samples measured above a certain allowable scale, then infinite precision can in principle be achieved using the phase convergence property illustrated in Fig. 1(a), provided the phase measurement is perfect. In other words, the detection precision is limited by the accuracy of the phase measurement, rather than by the highest spatial sampling density. 
This provides a workable mechanism for \u201cseeing beyond the Nyquist limit\u201d [28], which could explain a number of visual hyperacuity phenomena [29, 30], and might be exploited in the design of super-precision signal detection devices.\n\nReferences\n\n[1] Y. Tadmor and D. J. Tolhurst, \u201cDiscrimination of changes in the second-order statistics of natural and synthetic images,\u201d Vision Research, vol. 34, no. 4, pp. 541\u2013554, 1994.\n\n[2] M. A. Webster, M. A. Georgeson, and S. M. Webster, \u201cNeural adjustments to image blur,\u201d Nature Neuroscience, vol. 5, no. 9, pp. 839\u2013840, 2002.\n\n[3] E. R. Kretzmer, \u201cThe statistics of television signals,\u201d Bell System Tech. J., vol. 31, pp. 751\u2013763, 1952.\n\n[4] D. J. Field, \u201cRelations between the statistics of natural images and the response properties of cortical cells,\u201d J. Opt. Soc. America A, vol. 4, pp. 2379\u20132394, 1987.\n\n[5] D. L. Ruderman, \u201cThe statistics of natural images,\u201d Network: Computation in Neural Systems, vol. 5, pp. 517\u2013548, 1996.\n\n[6] N. Wiener, Nonlinear Problems in Random Theory. New York: John Wiley and Sons, 1958.\n\n[7] D. J. Field and N. Brady, \u201cVisual sensitivity, blur and the sources of variability in the amplitude spectra of natural scenes,\u201d Vision Research, vol. 37, no. 23, pp. 3367\u20133383, 1997.\n\n[8] E. P. Simoncelli, \u201cStatistical models for images: Compression, restoration and synthesis,\u201d in Proc 31st Asilomar Conf on Signals, Systems and Computers, (Pacific Grove, CA), pp. 673\u2013678, Nov 1997.\n\n[9] E. H. Adelson and J. R. Bergen, \u201cSpatiotemporal energy models for the perception of motion,\u201d J. Opt. Soc. America A, vol. 2, pp. 284\u2013299, Feb 1985.\n\n[10] J. R. Bergen and E. H. Adelson, \u201cEarly vision and texture perception,\u201d Nature, vol. 333, pp. 363\u2013364, 1988.\n\n[11] M. C. Morrone and R. A. Owens, \u201cFeature detection from local energy,\u201d Pattern Recognition Letters, vol. 6, pp. 303\u2013313, 1987.\n\n[12] N. Graham, Visual Pattern Analyzers. New York: Oxford University Press, 1989.\n\n[13] P. Perona and J. Malik, \u201cDetecting and localizing edges composed of steps, peaks and roofs,\u201d in Proc. 3rd Int\u2019l Conf Comp Vision, (Osaka), pp. 52\u201357, 1990.\n\n[14] A. V. Oppenheim and J. S. Lim, \u201cThe importance of phase in signals,\u201d Proc. of the IEEE, vol. 69, pp. 529\u2013541, 1981.\n\n[15] M. G. A. Thomson, \u201cVisual coding and the phase structure of natural scenes,\u201d Network: Comput. Neural Syst., vol. 10, pp. 123\u2013132, 1999.\n\n[16] M. C. Morrone and D. C. Burr, \u201cFeature detection in human vision: A phase-dependent energy model,\u201d Proc. R. Soc. Lond. B, vol. 235, pp. 221\u2013245, 1988.\n\n[17] P. Kovesi, \u201cPhase congruency: A low-level image invariant,\u201d Psych. Research, vol. 64, pp. 136\u2013148, 2000.\n\n[18] S. Venkatesh and R. A. Owens, \u201cAn energy feature detection scheme,\u201d in Int\u2019l Conf on Image Processing, pp. 553\u2013557, 1989.\n\n[19] D. J. Fleet and A. D. Jepson, \u201cComputation of component image velocity from local phase information,\u201d Int\u2019l J Computer Vision, vol. 5, pp. 77\u2013104, 1990.\n\n[20] D. J. Fleet, \u201cPhase-based disparity measurement,\u201d CVGIP: Image Understanding, vol. 53, pp. 198\u2013210, 1991.\n\n[21] J. Portilla and E. P. Simoncelli, \u201cA parametric texture model based on joint statistics of complex wavelet coefficients,\u201d Int\u2019l J Computer Vision, vol. 40, pp. 49\u201371, 2000.\n\n[22] J. Daugman, \u201cStatistical richness of visual phase information: update on recognizing persons by iris patterns,\u201d Int\u2019l J Computer Vision, vol. 45, pp. 25\u201338, 2001.\n\n[23] E. P. Simoncelli, W. T. Freeman, E. H. Adelson, and D. J. Heeger, \u201cShiftable multi-scale transforms,\u201d IEEE Trans Information Theory, vol. 38, pp. 587\u2013607, Mar 1992.\n\n[24] D. A. Pollen and S. F. Ronner, \u201cPhase relationships between adjacent simple cells in the cat,\u201d Science, vol. 212, pp. 1409\u20131411, 1981.\n\n[25] D. J. Heeger, \u201cHalf-squaring in responses of cat striate cells,\u201d Visual Neuroscience, vol. 9, pp. 427\u2013443, 1992.\n\n[26] D. J. Heeger, \u201cNormalization of cell responses in cat striate cortex,\u201d Visual Neuroscience, vol. 9, pp. 181\u2013197, 1992.\n\n[27] O. Schwartz and E. P. Simoncelli, \u201cNatural signal statistics and sensory gain control,\u201d Nature Neuroscience, vol. 4, pp. 819\u2013825, 2001.\n\n[28] D. L. Ruderman and W. Bialek, \u201cSeeing beyond the Nyquist limit,\u201d Neural Comp., vol. 4, pp. 682\u2013690, 1992.\n\n[29] G. Westheimer and S. P. McKee, \u201cSpatial configurations for visual hyperacuity,\u201d Vision Research, vol. 17, pp. 941\u2013947, 1977.\n\n[30] W. S. Geisler, \u201cPhysical limits of acuity and hyperacuity,\u201d J. Opt. Soc. America A, vol. 1, pp. 775\u2013782, 1984.", "award": [], "sourceid": 2398, "authors": [{"given_name": "Zhou", "family_name": "Wang", "institution": null}, {"given_name": "Eero", "family_name": "Simoncelli", "institution": null}]}