{"title": "Neural Approach for TV Image Compression Using a Hopfield Type Network", "book": "Advances in Neural Information Processing Systems", "page_first": 264, "page_last": 271, "abstract": "", "full_text": "264 \n\nNEURAL APPROACH FOR TV IMAGE COMPRESSION \n\nUSING A HOPFIELD TYPE NETWORK \n\nJean-Bernard THEETEN \nLaboratoire d'Electronique et de Physique Appliquee * \n\nMartine NAILLON \n\n3 Avenue DESCARTES, BP 15 \n\n94451 LIMEIL BREVANNES Cedex FRANCE. \n\nABSTRACT \n\nA self-organizing Hopfield network has been \ndeveloped in the context of Vector Ouantiza(cid:173)\n-tion, aiming at compression of \ntelevision \nimages. The metastable states of the spin \nglass-like network are used as \nan extra \nthe Minimal Overlap \nstorage resource using \nand Mezard 1987) to \nrule (Krauth \nlearning \nthe organization of the attractors. \noptimize \nThe sel f-organi zi ng \nthat we have \nscheme \ndevised \nthe generation of an \nin \nadaptive codebook for any qiven TV image. \n\nresults \n\nI NTRODOCTI ON \n\nability of \n\ntransmission, \n\nis a \n\nan Hopfield network \n\nThe \n(Little,1974; \nHopfield, 1982,1986; Amit. and al., 1987; Personnaz and \nal. 1985; Hertz, 1988) to behave as an associative memory \nusua 11 y aSSlJ11es a pri ori knowl edge of the patterns to be \nstored. As in many applications they are unknown, the aim \nof this work is to develop a network capable to learn how \nto select its attractors. TV \nimage compression using \nVector Quantization (V.Q.)(Gray, 1984), a key issue for \nHOTV \nthe non \nneural algorithms which generate the list of codes (the \ncodebookl \nthe \nprani si ng neural canpressi on techni ques (Jackel et al., \n1987; Kohonen, 1988; Grossberg, 1987; Cottrel et al., \n19B7) our idea is to use the metastability in a spin \nglass-like net as an additional storage resource and to \nderive after a \"cl assi cal II \na \nsel f-organi zi ng \nthe \ncodebook. We present the illustrative case of 2D-vectors. 
\n* LEP : A member of the Philips Research Organization. \n\n\fNeural Approach for TV Image Compression \n\n265 \n\nNON NEURAL APPROACH \n\nIn V.Q., the image is divided into blocks, named vectors, of N pixels (typically 4 x 4 pixels). Given the codebook, each vector is coded by associating it with the nearest element of the list (Nearest Neighbour Classifier) (figure 1). \n\nFigure 1 : Basic scheme of a vector quantizer. The encoder compares the input vector with the codebook and transmits the index of the nearest code vector; the decoder uses this index to look up the reconstructed vector in the same codebook. \n\nFor designing an optimal codebook, a clustering algorithm is applied to a training set of vectors (figure 2), the criterium of optimality being a distortion measure between the training set and the codebook. The algorithm is actually suboptimal, especially for a non connex training set, as it is based on an iterative computation of centers of gravity which tends to overcode the dense regions of points whereas the light ones are undercoded (figure 2). \n\nFigure 2 : Training set of two-pixel vectors and the associated codebook computed by a non neural clustering algorithm: overcoding of the dense regions (around pixel 1 = 148) and subcoding of the light ones. \n\n\f266 \n\nNaillon and Theeten \n\nNEURAL APPROACH \n\nIn a Hopfield neural network, the code vectors are the attractors of the net and the neural dynamics (resolution phase) is substituted to the nearest neighbour classification. 
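The nearest neighbour coding step of figure 1 can be sketched in a few lines (a minimal illustration with made-up codebook values, not the authors' implementation):

```python
import numpy as np

# Minimal nearest neighbour vector quantizer (illustrative values only).
def vq_encode(vectors, codebook):
    # d[i, j] = squared Euclidean distance between input i and code vector j
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)  # index of the nearest code vector

def vq_decode(indices, codebook):
    # The decoder only needs the transmitted index and the shared codebook.
    return codebook[indices]

# Toy 2-pixel vectors, as in the paper's illustrative 2D case.
codebook = np.array([[10.0, 10.0], [100.0, 100.0], [200.0, 50.0]])
vectors = np.array([[12.0, 9.0], [190.0, 60.0]])
idx = vq_encode(vectors, codebook)
recon = vq_decode(idx, codebook)
```

Only the indices are transmitted, which is where the compression comes from: the bit rate is set by the codebook size, not by the pixel depth.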
\nWhen patterns, referred to as \"prototypes\" and named here \"explicit memory\", are prescribed in a spin glass-like net, other attractors, referred to as \"metastable states\", are induced in the net (Sherrington and Kirkpatrick, 1975; Toulouse, 1977; Hopfield, 1982; Mezard et al., 1984). We consider those induced attractors as additional memory, named here \"implicit memory\", which can be used by the network to code the previously mentioned light regions of points. This provides a higher flexibility to the net during the self-organization process, as it can choose, in a large basis of explicit and implicit attractors, the ones which will optimize the coding task. \n\nNEURAL NOTATION \n\nA vector of 2 pixels with 8 bits per pel is a vector of 2 dimensions in an Euclidean space where each dimension corresponds to 256 grey levels. To preserve the Euclidean distance, we use the well-known thermometric notation: 256 neurons for 256 levels per dimension, the number of neurons set to one, with a regular ordering, giving the pixel luminance, e.g. 2 = 1 1 -1 -1 -1 ... For vectors of dimension 2, 512 neurons will be used, e.g. v = (2,3) = (1 1 -1 -1 ... -1, 1 1 1 -1 -1 ... -1). \n\nINDUCTION PROCESS \n\nThe induced implicit memory depends on the prescription rule. We have compared the Projection rule (Personnaz et al., 1985) and the Minimal Overlap rule (Krauth and Mezard, 1987). The metastable states are detected by relaxing any point of the training set of figure 2 to its corresponding prescribed or induced attractor, marked in figure 3 with a small diamond. For the two rules, the induction process is rather deterministic, generating an orthogonal mesh: if two prototypes (P11,P12) and (P21,P22) are prescribed, a metastable state is induced at the cross-points, namely (P11,P22) and (P21,P12) (figure 3). 
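The thermometric notation above can be made concrete with a short sketch (an illustration of the encoding as just described; the helper names are ours):

```python
import numpy as np

LEVELS = 256  # grey levels, hence neurons, per pixel dimension

def thermometric(value, levels=LEVELS):
    # Set the first `value` neurons to +1 and the rest to -1; the regular
    # ordering preserves the Euclidean distance between grey levels.
    neurons = -np.ones(levels, dtype=int)
    neurons[:value] = 1
    return neurons

def encode_vector(pixels, levels=LEVELS):
    # A vector of N pixels becomes N * levels concatenated neurons.
    return np.concatenate([thermometric(p, levels) for p in pixels])

v = encode_vector([2, 3])  # 512 neurons for the paper's 2-pixel example
# Neuron-wise disagreements grow linearly with the grey-level difference.
d12 = int(np.sum(encode_vector([2, 3]) != encode_vector([5, 3])))
```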
\nFigure 3 : Comparison of the induction process for 2 prescription rules. The prescribed states are the full squares, the induced states the open diamonds. \n\nWhat differs between the two rules is the number of induced attractors. For 50 prototypes and a training set of 2000 2d-vectors, the Projection rule induces about 1000 metastable states (ratio 1000/50 = 20) whereas Min Over induces only 234 (ratio 4.6). This is due to the different stability of the prescribed and the induced states in the case of Min Over (Naillon and Theeten, to be published). \n\nGENERALIZED ATTRACTORS \n\nSome attractors are induced out of the image space (figure 4), as the 512 neurons space has 2^512 configurations, to be compared with the (2^8)^2 = 2^16 image configurations. We extend the image space by defining a \"generalized attractor\" as the class of patterns having the same number of neurons set to one for each pixel, whatever their ordering. Such a notation corresponds to a random thermometric neural representation. The simulation has shown that the generalized attractors correspond to acceptable states (figure 4), i.e. they are located at the place where one would like to obtain a normal attractor. 
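The generalized attractor classes can be sketched as follows (our own illustration of the definition above, not the authors' code): a pattern is mapped to its per-pixel count of +1 neurons, whatever their ordering.

```python
import numpy as np

LEVELS = 256  # neurons per pixel dimension

def generalized_class(pattern, levels=LEVELS):
    # A generalized attractor is the class of patterns with the same
    # number of neurons set to +1 in each pixel block, whatever the
    # ordering (a random thermometric representation).
    blocks = np.asarray(pattern).reshape(-1, levels)
    return tuple(int((b == 1).sum()) for b in blocks)

# A regular thermometric pattern for (2, 3) and a scrambled pattern with
# the same per-pixel counts fall into the same generalized class, so a
# state induced out of the image space can still be read as a grey-level
# vector.
a = -np.ones(512, dtype=int)
a[:2] = 1
a[256:259] = 1
b = -np.ones(512, dtype=int)
b[[5, 40]] = 1
b[[300, 310, 400]] = 1
```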
-(\u00b71.f~ll1\" '. \nl!Jl!. -~ ..... \n\u2022\u2022 f~\u00b7 . \ni ~~/ ~ .. ;;~ (J.,\". - j.J ~. \n\u2022 \n-.,... \nt I \n~, ,)t ~~'~ ~5\\ :-\n~ ~J -'-t~~i &~ ~ \n~ \n... \n.. .. \n\u2022 \n.... \n\n; \n.... \n\n-\n\n'- l!.4--6 \n\n'fl \n\nPIXEL! \n\n--\n\n'1ft 6.. \n\n+6A-+ \n\nI't hjlt \n\n~~ ~ 6+6 \n\n,~ \n\n. . [ '& \n~ t \n\n\\---J. ... \n\n1 \n\n.. .. \n\nf. \n\n& -. \n\nPIXEL ! \n\n4-\n\n.. .. \n\n6\"'6.- 6 \n\nto \n\n: The \n\nFigure 4 \ninduced bassins of attractions are \nrepresented with arrows. In the left plot, some training \nvectors have no attractor \nimage space. After \ngeneralization (randon thermometric notation), the right \n~ot shows their corresponding attractors. \n\nthe \n\nin \n\nADAPTIVE NEURAL CODEBOOK LEARNING \n\nas \n\nthe \n\nimage, \n\nthe codebook. For a given TV \n\niterative sel f- organi zi ng process has been developed \nAn \nto optimi ze \nthe \ncodebook is defined, at each step of the process, as the \nset of prescribed and induced attractors, selected by the \ntraining set of vectors. The self-organizing scheme is \nthe distorsion measure \ncontrolled by a cost function, \ntraining set and \nthe codebook. Having a \nbetween \ntarget of 50 code vectors, we have to prescri be at each \ntypically 50/4.6 = 11 \nstep, \nprototypes. As seen in figure Sa, we choose 11 initial \nprototypes uniformly distributed along \nthe bisecting \nline. Using the training set of vectors of the figure 2, \ninduced metastable states are detected with their \nthe \ncorresponding bassins of attraction. The \n11 most \nfrequent, prescribed or induced, attractors are selected \n11 centers of gravi ty of thei r bassi ns of \nand \ntaken as new prototypes (figure 5b ). \nattracti on are \nAfter 3 iterations, \nthe distorsion measure stabilizes \n(Table 1). 
\nFigure 5a : Initialization of the self-organizing scheme. \n\nFigure 5b : First iteration of the self-organizing scheme. \n\nTable 1 : Evolution of the distortion measure versus the iterations of the self-organizing scheme. It stabilizes in 3 iterations. \n\niteration | global distortion | codebook size | generalized attractors \n1 | 157 | 53 | 0 \n2 | 103 | 57 | 4 \n3 | 97 | 79 | 20 \n4 | 97 | 84 | 20 \n5 | 98 | 68 | 15 \n\nForty lines of a TV image (the port of Baltimore) of 8 bits per pel have been coded with an adaptive neural codebook of 50 2D-vectors. The coherence of the coding is visible from the apparent continuity of the image (figure 6). The coded image has 2.5 bits per pel. \n\nFigure 6 : Neural coded image with 2.5 bits per pel. \n\nCONCLUSION \n\nUsing a \"classical\" clustering algorithm, a self-organizing scheme has been developed in a Hopfield network for the adaptive design of a codebook of small dimension vectors in a Vector Quantization technique. It has been shown that, using the Minimal Overlap prescription rule, the metastable states in a spin glass-like network can be used as extra-codes. 
The optimal organization of the prescribed and induced attractors has been defined as the limit organization obtained from the iterative learning process. It is an example of \"learning by selection\" as already proposed by physicists and biologists (Toulouse et al., 1986). Hardware implementation on the neural VLSI circuit currently designed at LEP should allow on-line codebook computations. \n\nWe would like to thank J.J. Hopfield, who has inspired this study, as well as H. Bosma and W. Kreuwels from Philips Research Laboratories, Eindhoven, who have allowed us to initialize this research. \n\nREFERENCES \n\n1 - J.J. Hopfield, Proc. Nat. Acad. Sci. USA, 79, 2554-2558 (1982); J.J. Hopfield and D.W. Tank, Science 233, 625 (1986); W.A. Little, Math. Biosci., 19, 101-120 (1974). \n\n2 - D.J. Amit, H. Gutfreund and H. Sompolinsky, Phys. Rev. A 32 (1985); Ann. Phys. 173, 30 (1987). \n\n3 - L. Personnaz, I. Guyon and G. Dreyfus, J. Phys. Lett. 46, L359 (1985). \n\n4 - J.A. Hertz, 2nd International Conference on Vector and Parallel Computing, Tromso, Norway, June (1988). \n\n5 - M.A. Virasoro, Disordered Systems and Biological Organization, ed. E. Bienenstock, Springer, Berlin (1985); H. Gutfreund (Racah Institute of Physics, Jerusalem) (1986); C. Cortes, A. Krogh and J.A. Hertz, J. of Phys. A (1986). \n\n6 - R.M. Gray, IEEE ASSP Magazine, 1, 4-29 (Apr. 1984). \n\n7 - L.D. Jackel, R.E. Howard, J.S. Denker, W. Hubbard and S.A. Solla, Applied Optics, Vol. 26 (1987). \n\n8 - T. Kohonen, Helsinki University of Technology, Finland, Tech. Rep. No. TKK-F-A601; T. Kohonen, Neural Networks, 1, Number 1 (1988). \n\n9 - S. Grossberg, Cognitive Sci., 11, 23-63 (1987). \n\n10 - G.W. Cottrell, P. Munro and D. Zipser, Institute of Cognitive Science, Report 8702 (1987). \n\n11 - D. Sherrington and S. Kirkpatrick, Phys. Rev. Lett. 35, 1792 (1975); G. Toulouse, Commun. Phys. 2, 115-119 (1977); M. Mezard, G. Parisi, N. Sourlas, G. Toulouse and M. Virasoro, Phys. Rev. Lett., 52, 1156-1159 (1984). \n\n12 - W. Krauth and M. Mezard, J. Phys. A : Math. Gen. 20, L745-L752 (1987). \n\n13 - M. Naillon and J.B. Theeten, to be published. \n\n14 - G. Toulouse, S. Dehaene and J.P. Changeux, Proc. Natl. Acad. Sci. USA, 83, 1695 (1986). \n", "award": [], "sourceid": 161, "authors": [{"given_name": "Martine", "family_name": "Naillon", "institution": null}, {"given_name": "Jean-Bernard", "family_name": "Theeten", "institution": null}]}