WO2009079769A1 - Methods and systems for electoral-college-based image recognition - Google Patents


Info

Publication number
WO2009079769A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
gallery
images
similarity
windows
Application number
PCT/CA2008/002229
Other languages
French (fr)
Inventor
Liang Chen
Naoyuki Tokuda
Original Assignee
University Of Northern British Columbia
Application filed by University Of Northern British Columbia filed Critical University Of Northern British Columbia
Publication of WO2009079769A1 publication Critical patent/WO2009079769A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data

Definitions

  • the windows for the query image remain fixed and the windows of the gallery images may be shifted and/or rotated.
  • the algorithm G 1 may be applied to find the best match for each window of the query image among all the sub-images of all gallery images in the shifted windows, yielding an identity for each sub-image of the query face. Then the identity with the greatest similarity value may be chosen as the identity of this query image in Wi.
  • block 36 may comprise computing parameters or otherwise setting up algorithm G 1 for the shifted and/or rotated windows.
  • the degree of shifting and/or the degree of rotation are constrained.
  • the face size was chosen to be 130 x 150 pixels
  • the degree of shifting was constrained to be no more than 3 pixels in any direction and the total movement of a pixel in any direction was constrained to not exceed 4 pixels.
  • Certain implementations of the invention comprise computer processors which execute software instructions which cause the processors to perform a method of the invention.
  • one or more processors in an image recognition system may implement the methods of Figure 1 or 3 by executing software instructions in a program memory accessible to the processors.
  • the invention may also be provided in the form of a program product.
  • the program product may comprise any medium which carries a set of computer-readable instructions which, when executed by a data processor, cause the data processor to execute a method of the invention.
  • Program products according to the invention may be in any of a wide variety of forms.
  • the computer-readable signals on the program product may optionally be compressed or encrypted.
  • Figure 5 shows an example system 50 comprising a camera 52 for acquiring digital images of people. Camera 52 communicates the digital images to a processor unit 54 by way of a suitable interface. Processor unit 54 comprises one or more data processors 53 executing software 56. The software causes the data processor(s) to perform a recognition method as described herein, with images obtained by camera 52 treated as query images.
  • a gallery is provided by way of a database 58.
  • Database 58 comprises images 59 A associated with identification information 59B.
  • One, two or more images 59A may be associated with the same identification information (identity) 59B.
  • rights information 59C is also available by way of database 58.
  • Rights information 59C may, for example, determine what action ought to be taken in response to identification of the particular associated identity. For example, if the identity corresponds to a person entitled to access a secure area, the rights information may indicate this.
  • Software 56 may cause processor 53 to coordinate some action, which may be based on rights information 59C upon identifying the corresponding identity 59B.
  • processor unit could do any of:
  • the action may be controlled by way of a suitable action interface 60 which controls a display, alarm system, or other device that interacts with the world outside of processor unit 54.
  • Input a probe image P to be recognized.
  • in step 2(4), when there is a tie among t gallery images, we can let each image get one vote or a 1/t vote.
  • in step 3, if there is a tie, we randomly choose one image as the return.
  • PCA with Mahalanobis Cosine measurement
  • the FERET face database is the set of gallery images containing 1196 grayscale images; "fb", "fc", "dup1" and "dup2" are sets of probe images.
  • each image is rotated, scaled and cropped to a size of 150 x 130 pixels (150 pixels in each column, 130 pixels in each row), so that the distance between the centers of the two eyes is 70 pixels and the line between the two eyes lies on the 45th pixel below the upper boundary.
  • The recognition process therefore involves the following two stages:
  • The System Setup stage obtains one set of eigenvalues and eigenvectors for each of the windows.
  • for each gallery subimage I k,ij obtained by step 2(3) in the previous subsection, calculate the Mahalanobis Cosine similarity Sim(P ij,s1s2, I k,ij) between its projection in the PCA space and probe subimage P ij,s1s2; let Sim ij(P, I k) be the greatest of these similarity values.
  • the gallery image I x having the maximal votes is taken as the return image.
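The Mahalanobis Cosine similarity referred to above can be sketched as follows. This is a common formulation of that measure (whiten each PCA coordinate by the standard deviation along that eigen-dimension, then take the ordinary cosine); the document does not spell out the formula, so the exact form and the function name are assumptions.

```python
import math

def mahalanobis_cosine(u, v, eigenvalues):
    """Mahalanobis Cosine similarity between two PCA-space vectors.

    Each coordinate is divided by the standard deviation along that
    eigen-dimension (the square root of the PCA eigenvalue), and the
    cosine of the angle between the whitened vectors is returned.
    """
    mu = [a / math.sqrt(e) for a, e in zip(u, eigenvalues)]
    mv = [b / math.sqrt(e) for b, e in zip(v, eigenvalues)]
    dot = sum(a * b for a, b in zip(mu, mv))
    norm = math.sqrt(sum(a * a for a in mu)) * math.sqrt(sum(b * b for b in mv))
    return dot / norm
```

Higher values indicate closer matches, consistent with the "greatest similarity value" selection rule used in the voting steps.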

Abstract

An automated face-recognition system applies an electoral-college approach to face recognition. Windows correspond to portions of query and gallery images. Windows of a query image are compared to corresponding windows in the gallery. An identity corresponding to the best match for each window of the query image is identified. The identity best matching the overall query image is determined by a simple or weighted vote.

Description

METHODS AND SYSTEMS FOR ELECTORAL-COLLEGE-BASED IMAGE
RECOGNITION
Cross-Reference to Related Application
[0001] This application claims priority from United States patent application No. 61/016437 filed 21 December 2007 and entitled METHODS AND SYSTEMS FOR ELECTORAL-COLLEGE-BASED IMAGE RECOGNITION, which is hereby incorporated by reference. For purposes of the United States of America, this application claims the benefit under 35 U.S.C. §119 of United States patent application No. 61/016437 filed 21 December 2007.
Technical Field
[0002] This invention relates to automatic image-recognition systems. The invention has application, for example, in automatic face-recognition systems.
Acknowledgement
[0003] Portions of the research related to the invention described herein used the FERET database of facial images collected under the FERET program, sponsored by the DOD Counterdrug Technology Office.
Background
[0004] Human face recognition has many applications in areas of national security, banking security systems, access control, etc. Many existing face-recognition systems are not accurate enough for reliable commercial application.
[0005] Methods that have been proposed for face recognition include:
• the Eigenface approach (see M. Turk and A. Pentland, Eigenfaces for Recognition, J. Cognitive Neuroscience, vol. 3, no. 1, 1991); and
• the Fisherface approach (see P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997).
These references and all other references referred to herein are hereby incorporated herein in their entireties as if fully set forth herein.
[0006] The Electoral College concept originated in association with presidential elections in countries, such as the United States, where elections first take place in each of several pre-partitioned regions/states and the final winner is selected according to the weighted sum of wins within the sum of all regions based on the winner-take-all principle.
[0007] L. Chen and N. Tokuda, Regional Voting Versus National Voting: Stability of Regional Voting, International ICSC Symposium on Advances in Intelligent Data Analysis, Rochester, New York, USA, June 22-25, 1999 describes the use of the Electoral College method in pattern recognition.
[0008] Literature of interest includes:
• W. Zhao, R. Chellappa, A. Rosenfeld and P.J. Phillips, Face Recognition: A Literature Survey, ACM Computing Surveys, Volume 35, Issue 4, Dec. 2003, pp. 399-458;
• B. Moghaddam - Principal Manifolds and Probabilistic Subspaces for Visual Recognition (2002);
• P. Sinha et al., Face Recognition by Humans: Nineteen results all computer vision researchers should know about (2006);
• G. Shakhnarovich et al., Face Recognition in Subspaces (2004);
• M. Turk: A Random Walk through Eigenspace (2001).
Summary
[0009] Despite the work that has gone into developing automated image-recognition systems and, in particular human face-recognition systems, there remains a need for practical and accurate methods and systems for performing image recognition. There is a particular need for such methods and systems that are capable of accurate face recognition.
[0010] This invention has a range of aspects. Aspects provide both apparatus for image recognition and methods for image recognition. The methods and apparatus have particular application for face recognition.
[0011] In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the drawings and by study of the following detailed descriptions.
Brief Description of Drawings
[0012]The accompanying drawings illustrate non-limiting embodiments of the invention.
[0013] Figure 1 is a flow chart illustrating a recognition method according to an example embodiment. [0014] Figure 2 illustrates one manner in which faces in a gallery may be partitioned and how a corresponding area of a query face, X, may be partitioned.
[0015] Figure 3 illustrates an example where a query face is partitioned into 12 equal window regions.
[0016] Figure 4 illustrates variations in the matching of a region of the query face of Figure 1 with a corresponding region of a face of the gallery.
[0017] Figures 5A through 5D are respectively plots showing recognition rates for example embodiments of the invention for different numbers of windows.
Description
[0018] Throughout the following description specific details are set forth in order to provide a more thorough understanding to persons skilled in the art. However, well-known elements may not have been shown or described in detail to avoid unnecessarily obscuring the disclosure. Accordingly, the description and drawings are to be regarded in an illustrative, rather than a restrictive, sense.
[0019] The invention will be illustrated with reference to a face-recognition system that applies an electoral college approach for human face recognition. The face-recognition system may perform large-gallery face recognition in large face data sets. The invention may be embodied in apparatus for image recognition as well as in methods for image recognition.
[0020] One aspect of the invention provides face recognition systems which apply face-recognition algorithms (for example any suitable known face-recognition algorithm) in an electoral college framework. Face recognition algorithms that may be used in an electoral college framework according to this aspect of the invention include, without limitation, those described in:
• M. Turk and A. Pentland, Eigenfaces for Recognition, J. Cognitive Neuroscience, vol. 3, no. 1, 1991;
• P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
These face recognition algorithms define a similarity value between a face to be identified ('the query face') and faces in a gallery. The algorithms select a face in the gallery that has the largest similarity value to the query face as the correct match.
[0021] A prototype embodiment applying the "eigen-face" approach within an electoral college framework has been shown to provide a remarkably high recognition rate when it is tested on FERET datasets. Testing may be performed as described by P.J. Phillips, H. Wechsler, J. Huang, and P. Rauss, The FERET database and evaluation procedure for face-recognition algorithms, Image and Vision Computing, 16(5):295-306, 1998; and P.J. Phillips, H. Moon, S.A. Rizvi, P.J. Rauss, The FERET Evaluation Methodology for Face Recognition Algorithms, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, pp. 1090-1104, 2000. Accuracies of 99.42%, 99.485%, 89.042% and 87.215% have respectively been obtained for fb (which is the set of face images acquired on the same day as the corresponding gallery images, but with different facial expressions), fc (which is the set of face images acquired the same day as the corresponding gallery image, but with a different camera and lighting), dup1 (which is the set of face images acquired between 0 and 1031 days after the corresponding gallery image) and dup2 (which is the set of face images acquired about 18 months after the corresponding gallery images).
[0022] The invention may apply any pattern-recognition algorithm suitable for use in face recognition in the context of the electoral college approach. Take the case where G is a suitable pattern-recognition algorithm. Given a query image Q, G identifies the closest match to Q in a gallery of known images P. In a method according to an embodiment of the invention, a query image Q is divided into a plurality of regions. The algorithm G is applied to determine similarity values between each of the plurality of regions and corresponding regions in images P in the gallery. The method identifies the best matches between the regions of the query image Q and corresponding regions in the gallery. After the best matches have been identified, a simple vote may be used to identify the image in the gallery which has the greatest number of regions that match corresponding regions of the query image. That image may be identified as the matching image.
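The region-wise matching and simple vote of this paragraph can be sketched as follows. This is an illustrative Python sketch only; the function and variable names and the region-list data layout are assumptions, not anything prescribed by the disclosure, and the similarity function stands in for whatever algorithm G is used.

```python
from collections import Counter

def electoral_college_match(query_regions, gallery, similarity):
    """Identify the gallery image that wins the most per-region 'elections'.

    query_regions: list of region data from the query image Q, in order.
    gallery: dict mapping image id -> list of corresponding region data.
    similarity: function(region_a, region_b) -> float (higher = closer).
    """
    votes = Counter()
    for i, q_region in enumerate(query_regions):
        # Each region votes, winner-take-all, for the gallery image whose
        # corresponding region is the closest match to this query region.
        best_id = max(gallery, key=lambda gid: similarity(q_region, gallery[gid][i]))
        votes[best_id] += 1
    # The image with the most regional wins is taken as the matching image.
    return votes.most_common(1)[0][0]
```

Note that the winner-take-all step happens per region, so a gallery image that is mediocre everywhere loses to one that clearly wins a majority of regions.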
[0023] Figure 1 shows a recognition method 10 according to a simple example embodiment. Block 12 compares a region of the query image Q to a corresponding region of a gallery image P. Blocks 14 and 16 cycle through all regions of the query image Q and all corresponding regions of gallery images P. Block 15 identifies the closest matching gallery image region for each region of query image Q. Block 18 determines which gallery image P provides the most closest-matching gallery image regions for the query image Q.
[0024] Figure 2 illustrates schematically an embodiment in which the gallery comprises an array of N images, P1, P2, ..., PN. The gallery may comprise, for example, a database containing image data representing each of gallery images P. The images in the gallery may represent n different individuals. In some cases, the gallery may contain two or more images of the same individual. The gallery images are to be compared with the query image Q to determine the identity of Q. The database may contain additional information regarding individuals depicted in the images in the gallery.
[0025] Aspects of the invention include setup processes 30A and recognition processes 30B, each of which may involve several steps.
[0026] An example setup process 30A is illustrated in Figure 3. Block 32 may involve receiving and processing images for inclusion in the gallery. In block 32A, a face-detection method is applied to locate faces in the images to be included in the gallery. In block 32B, the images of faces identified in block 32A are standardized or normalized by cropping, resizing, and/or rotating, if necessary, so that the images in the gallery all have a common size (e.g. a common rectangular format) and the depicted faces are similarly located in each image. Any suitable face-detection approach may be applied in block 32A. An example of a face-detection approach that may be applied in block 32A is described in Henry A. Rowley, Shumeet Baluja, and Takeo Kanade, Neural Network-Based Face Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 20, number 1, pages 23-38, January 1998.
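The normalization of block 32B can be sketched from detected eye centers. This is a hypothetical helper: deriving scale and rotation from eye positions is an assumption about how the crop/resize/rotate step might be parameterized, although the canonical eye geometry (70-pixel eye distance on the 45th row of a 150 x 130 image) does appear later in this document.

```python
import math

def normalization_params(left_eye, right_eye,
                         out_w=130, out_h=150, eye_dist=70, eye_row=45):
    """Scale, rotation, and target eye positions for standardizing a face.

    Scales so the eyes end up eye_dist pixels apart, rotates so the eye
    line is horizontal, and centers the eyes on row eye_row of an
    out_h x out_w output image.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    scale = eye_dist / math.hypot(dx, dy)
    angle_deg = math.degrees(math.atan2(dy, dx))  # rotation needed to level the eyes
    # Target eye positions: centered horizontally, on row eye_row.
    targets = ((out_w - eye_dist) / 2, (out_w + eye_dist) / 2, eye_row)
    return scale, angle_deg, targets
```

An image library's affine-warp routine would then apply these parameters to produce the normalized gallery (or query) image.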
[0027] In block 34 a number of regions (or 'windows') are associated with different parts of each gallery image. The windows are equal in size in preferred cases. The windows may be rectangular. In a preferred embodiment, the portion of the image in each window is in the order of 11 to 13 pixels wide and 11 to 13 pixels high (e.g. 11 columns of pixels by 13 rows of pixels or 13 rows of pixels by 13 columns of pixels, etc.). In preferred embodiments, the windows collectively cover all of the faces visible in the gallery images.
[0028] The windows may be arranged in a rectangular array of windows which cover the gallery images. In some cases the windows are arranged in a plurality of rows and a plurality of columns. In some cases the windows do not overlap with one another. In an example embodiment, each gallery image is divided into twelve windows.
[0029] The number of windows may be greater than 12. The number of windows may be set depending on the size of the gallery images. In some embodiments, the windows are arranged in arrays such as 8 x 3 or 3 x 4.
[0030] Dividing the gallery images into windows or regions does not require that the data of the gallery images be physically segregated. It is sufficient to provide a data structure or function that can identify the image data corresponding to a desired one of the windows so that the image data corresponding to the window can be compared to a corresponding window or region within a query image Q.
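One way to realize such a data structure is a list of slice pairs that address windows of an image kept whole, so no pixel data is ever copied or segregated. This is an illustrative sketch; the function name and the particular grid shape are assumptions, not part of the disclosure.

```python
def window_slices(img_h, img_w, rows, cols):
    """Return (row_slice, col_slice) pairs addressing each window.

    The image stays in one array; each window is just a pair of slices
    into it, so window data can be fetched on demand for comparison
    against the corresponding window of a query image.
    """
    h, w = img_h // rows, img_w // cols
    return [(slice(r * h, (r + 1) * h), slice(c * w, (c + 1) * w))
            for r in range(rows) for c in range(cols)]

# Twelve windows arranged as 4 rows x 3 columns over a 150 x 130 face image:
slices = window_slices(150, 130, 4, 3)
```

With an array-backed image (e.g. a NumPy array `img`), `img[slices[k]]` would yield a view of window k without duplicating the underlying data.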
[0031] Block 36 involves setting up a face recognition algorithm denoted G1, for each window, denoted W1. G1 may comprise any suitable face recognition approach. For example, block 36 may comprise adjustment or calculation of parameters useful for comparing windows in a query image to corresponding windows in gallery images. For example, where the "Eigenface" approach is applied to compare windows, block 36 may involve computing eigenvalues and eigenvectors.
[0032] Recognition process 30B commences at block 40, which applies a face detection approach to locate a face within the query image Q (block 40A). In block 40B the query image is normalized by cropping, resizing and/or rotating the image to yield a normalized query image that can be conveniently compared to images P in the gallery. In some embodiments, blocks 40A and 40B are the same as or similar to blocks 32A and 32B so that the normalized query image is directly comparable to the images in the gallery.
[0033] In block 42 windows or regions are associated with different parts of the query image. This may be done, for example, in the same way that windows are associated with different parts of the gallery images in block 34.
[0034] In block 44 face-recognition system G1 is applied to compare the image content of each window W of the query image with corresponding windows W of the gallery images. Block 44 may involve application of the parameters developed in block 36 of set-up process 30A to assess the similarity of the image content of a query window and the image content of a window of the gallery. In some cases, block 44 will find that the best matches for different windows are different identities.
[0035] This can be expressed as: the image content of each area or window T of the query image is compared to the image content of T of the gallery images using the similarity measure G in the domain T. The vote for the identity of the query face in window T is based upon the similarity of that part of the query face in area T and the part of each gallery face in the corresponding area.
[0036] In block 46, the best matches between windows determined in block 44 are used to determine which image in the gallery will be identified as best matching the query image. Block 46 may involve a voting approach. A simple voting approach first identifies the gallery image or identity (an identity may correspond to more than one gallery image in some embodiments) that is identified as being the best match for the largest number of windows in block 44. The most-often best-matching identity from the gallery will be taken as the identity for the query image.
[0037] Block 46 may optionally apply a weighted voting approach. In an example weighted voting approach, similarity values as determined in block 44 for each identity that has been identified as being the best match for at least one window are summed. The identity with the greatest sum of similarity values may be taken as the identity corresponding to the query image Q.
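The two voting rules described above can be sketched as follows (an illustrative sketch only, not the patented implementation; the input format of per-window best matches is our assumption):

```python
from collections import Counter, defaultdict

def simple_vote(window_best):
    """window_best: list of (identity, similarity) pairs, one per window.
    Returns the identity that is the best match for the most windows."""
    counts = Counter(identity for identity, _ in window_best)
    return counts.most_common(1)[0][0]

def weighted_vote(window_best):
    """Weighted variant: sums the similarity values per identity and
    returns the identity with the greatest total."""
    totals = defaultdict(float)
    for identity, sim in window_best:
        totals[identity] += sim
    return max(totals, key=totals.get)

# Example: identity "B" wins 3 of 5 windows under simple voting,
# but "A" has a higher total similarity under weighted voting.
wins = [("A", 0.99), ("B", 0.40), ("B", 0.35), ("A", 0.95), ("B", 0.30)]
print(simple_vote(wins))    # "B"
print(weighted_vote(wins))  # "A"
```

The example also illustrates why the two rules can disagree: a few very confident windows can outweigh a larger number of weak wins under the weighted rule.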
[0038] Some embodiments may attempt to find a better match between the query image and a gallery image by translating and/or rotating the query image and repeating the recognition process 30B for each different position and orientation of the query image. Translations may be achieved by shifting the locations of the windows in one or both of the query and gallery images by one or more pixels in any direction.
[0039] In one embodiment, to confirm best alignment among the query image Q and the gallery images, each window Wi may be shifted or rotated in each direction. For example, the positions of the pixels of the query image from which data is drawn for a window may be shifted up, down, right, left or some combination thereof. The resulting window data may then be compared to the data for corresponding windows in the gallery as described above.

[0040] For example, Figures 4A through 4F show how one window Wi from the query image shown in Figure 4 may be shifted one pixel in each direction to produce nine sub-images (including the unshifted image of Figure 4). Eight shifted sub-images can be obtained by shifting the window one pixel in each main and diagonal direction. In Figure 4A the window is shifted up and right. In Figure 4B the window is shifted left. In Figure 4C the window is shifted down. In Figure 4D the window is shifted right, and so on. The algorithm G1 may then be applied to find an identity match for each of the nine sub-images and corresponding similarity values for each match. Then the identity with the greatest similarity value is chosen as the identity of this query image in Wi.
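The nine shifted sub-images described above can be generated as in this sketch (a minimal illustration assuming NumPy-style row/column indexing; the function name and window parameters are ours):

```python
import numpy as np

def shifted_windows(image, top, left, h, w, s=1):
    """Extract all (2s+1)^2 shifted copies of an h-by-w window whose
    unshifted top-left corner is (top, left).  With s = 1 this yields
    the nine sub-images (one unshifted plus eight shifts in the main
    and diagonal directions) described above."""
    subs = {}
    for dy in range(-s, s + 1):
        for dx in range(-s, s + 1):
            subs[(dy, dx)] = image[top + dy: top + dy + h,
                                   left + dx: left + dx + w]
    return subs

img = np.arange(100).reshape(10, 10)
subs = shifted_windows(img, top=4, left=4, h=3, w=3, s=1)
assert len(subs) == 9                # nine sub-images for s = 1
assert subs[(0, 0)].shape == (3, 3)  # each has the window's size
```

Each best-matching shift can then be scored separately, as the text describes, with the greatest similarity value kept for the window.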
[0041] In another embodiment the windows for the query image remain fixed and the windows of the gallery images may be shifted and/or rotated. In this embodiment, the algorithm G1 may be applied to find the best match for each window of the query image among all the sub-images of all gallery images in the shifted windows to find an identity of the sub-image of the query face. Then the identity with the greatest similarity value may be chosen as the identity of this query image in Wi. In this embodiment, block 36 may comprise computing parameters or otherwise setting up algorithm G1 for the shifted and/or rotated windows.
[0042] In some embodiments the degree of shifting and/or the degree of rotation are constrained. For example, in a non-limiting prototype embodiment the face size was chosen to be 130×150 pixels, the degree of shifting was constrained to be no more than 3 pixels in any direction and the total movement of a pixel in any direction was constrained to not exceed 4 pixels.
[0043] Certain implementations of the invention comprise computer processors which execute software instructions which cause the processors to perform a method of the invention. For example, one or more processors in an image recognition system may implement the methods of Figure 1 or 3 by executing software instructions in a program memory accessible to the processors. The invention may also be provided in the form of a program product. The program product may comprise any medium which carries a set of computer-readable instructions which, when executed by a data processor, cause the data processor to execute a method of the invention. Program products according to the invention may be in any of a wide variety of forms. The program product may comprise, for example, physical media such as magnetic data storage media including floppy diskettes, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, or the like. The computer-readable signals on the program product may optionally be compressed or encrypted.
[0044] Figure 5 shows an example image recognition system 50 comprising a camera 52 for acquiring digital images of people. Camera 52 communicates the digital images to a processor unit 54. Processor unit 54 comprises one or more data processors 53 which execute software 56. The software configures the data processor(s) to perform a recognition method as described herein, with images obtained by camera 52 treated as query images. A gallery is provided by way of a database 58. Database 58 comprises images 59A associated with identification information 59B. One, two or more images 59A may be associated with the same identification information (identity) 59B.
[0045] In some embodiments, rights information 59C is also available by way of database 58. Rights information 59C may, for example, determine what action ought to be taken in response to identification of the particular associated identity. For example, if the identity corresponds to a person entitled to access a secure area, the rights information may indicate this. Software 56 may cause processor 53 to coordinate some action, which may be based on rights information 59C, upon identifying the corresponding identity 59B. For example, the processor unit could do any of:
• display a message;
• generate a warning or alarm;
• cause a copy of the image from camera 52 to be forwarded or saved;
• actuate a mechanism such as a door lock, controlled gate, or the like;
• provide access to a computing resource such as certain data, applications, or the like;
• generate billing information;
• etc.
The action may be controlled by way of a suitable action interface 60 which controls a display, alarm system, or other device that interacts with the world outside of processor unit 54.
[0046] The attached Appendix A describes some more example implementations of the invention.

[0047] Where a component (e.g. a software module, processor, assembly, device, circuit, etc.) is referred to above, unless otherwise indicated, reference to that component should be interpreted as including as equivalents of that component any component which performs the function of the described component (i.e., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary embodiments of the invention.
[0048] While a number of exemplary aspects and embodiments have been discussed above, those of skill in the art will recognize certain modifications, permutations, additions and sub-combinations thereof. It is intended that the invention includes all such modifications, permutations, additions and sub-combinations. Methods as described herein may be applied to the recognition of animal faces, human faces, other natural or manmade items or structures or the like. The methods and apparatus of the invention have special applicability to the problem of recognizing human faces.
SCHEDULE A Example Embodiments
We assume that all the face images are pre-cropped, scaled and rotated to a size m × n pixels, according to the positions of the eyes². We assume the gallery size is K.
We suppose an existing algorithm G has been used for face recognition, where the similarity of two images A and B is measured by Sim(A, B).
Due to the known alignment problem ([11]) for face recognition, we allow the perturbing of images in a neighborhood during the matching process. We can either perturb gallery images or probe images. We perturb only probe images in this paper.
The algorithm works as follows:
Input: a probe image P to be recognized.
1. Divide each of the images in the gallery into r·c equal-sized windows/regions. The two opposite corners of each window w_ij (i = 1, 2, …, r; j = 1, 2, …, c) are C_ij = (s + (m−2s)(i−1)/r + 1, s + (n−2s)(j−1)/c + 1) and C′_ij = (s + (m−2s)i/r, s + (n−2s)j/c)³. Therefore we come up with r·c sets of galleries, each having K images.
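The corner formula above can be sketched in code (an illustrative helper of ours, assuming 1-based pixel coordinates and rounding to the nearest integer as the footnotes describe):

```python
def window_corners(m, n, r, c, s):
    """Corner coordinates of the r*c equal-sized windows of an m-by-n
    image, leaving an s-pixel margin on each side for perturbation.
    Returns {(i, j): (top_left, bottom_right)} with 1-based coordinates."""
    corners = {}
    for i in range(1, r + 1):
        for j in range(1, c + 1):
            top_left = (round(s + (m - 2 * s) * (i - 1) / r) + 1,
                        round(s + (n - 2 * s) * (j - 1) / c) + 1)
            bottom_right = (round(s + (m - 2 * s) * i / r),
                            round(s + (n - 2 * s) * j / c))
            corners[(i, j)] = (top_left, bottom_right)
    return corners

# A 150x130 image divided into 10 rows x 8 columns of windows, s = 2 margin:
w = window_corners(150, 130, 10, 8, 2)
assert w[(1, 1)][0] == (3, 3)       # first window starts just inside the margin
assert w[(10, 8)][1] == (148, 128)  # last window ends s pixels from the edge
```

The s-pixel margin on every side is what leaves room for the (s1, s2) perturbations of the probe windows used later.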
²We do not discuss the algorithm for locating the eyes, which can be found in papers [8, 9, 10].
³We leave s pixels on each of the four sides for the perturbing purpose.

2. For each window w_ij (i = 1, 2, …, r; j = 1, 2, …, c):
(1) Obtain (2s+1)² subimages from the probe image P by perturbing the window: P_ij,s1,s2, with s1, s2 ∈ {−s, −s+1, …, 0, …, s−1, s}. The two opposite corners of probe subimage P_ij(s1, s2) are C_ij + (s1, s2) and C′_ij + (s1, s2).

(2) Use algorithm G to calculate the similarity Sim(P_ij,s1,s2, I_k,ij) between the subimage P_ij,s1,s2 and the subimage I_k,ij of each gallery image I_k in the window w_ij.
(3) For each gallery subimage I_k,ij, use max_{s1,s2}{Sim(P_ij,s1,s2, I_k,ij)} as the similarity measurement Sim_ij(P, I_k) of the gallery image I_k and the input image P within window w_ij.
(4) Find x such that Sim_ij(P, I_x) = max_k{Sim_ij(P, I_k)}; gallery image I_x gets 1 vote on this window.
3. Sum up the votes that each gallery image I_k gets in all the windows w_ij; return the one with the highest votes as the target face.
Some small tricks could of course be used. For example, for step 2(4), when there is a tie among t gallery images, we can let each image get one vote or 1/t vote. In step 3, if there is a tie, we randomly choose one as the return.
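The steps above can be sketched end-to-end on toy data (a stand-in pixel-distance similarity replaces the PCA/Mahalanobis Cosine measure the paper actually uses; all names and the integer window sizing are illustrative):

```python
import numpy as np

def sim(a, b):
    """Stand-in similarity (negated pixel distance); the paper uses
    PCA with the Mahalanobis Cosine measure instead."""
    return -float(np.abs(a.astype(float) - b).sum())

def recognize(probe, gallery, r, c, s):
    """Electoral-college matching: per window, take the max similarity
    over the (2s+1)^2 perturbed probe sub-windows, give the best
    gallery image one vote, and return the image with the most votes."""
    m, n = probe.shape
    votes = np.zeros(len(gallery))
    hw, ww = (m - 2 * s) // r, (n - 2 * s) // c  # integer window size
    for i in range(r):
        for j in range(c):
            top, left = s + i * hw, s + j * ww
            best_k, best_sim = None, -np.inf
            for k, g in enumerate(gallery):
                gwin = g[top: top + hw, left: left + ww]
                for dy in range(-s, s + 1):
                    for dx in range(-s, s + 1):
                        pwin = probe[top + dy: top + dy + hw,
                                     left + dx: left + dx + ww]
                        v = sim(pwin, gwin)
                        if v > best_sim:
                            best_sim, best_k = v, k
            votes[best_k] += 1
    return int(np.argmax(votes))

rng = np.random.default_rng(0)
gallery = [rng.integers(0, 255, (30, 26)) for _ in range(3)]
probe = gallery[1].copy()   # probe identical to gallery image 1
assert recognize(probe, gallery, r=4, c=4, s=1) == 1
```

Even with this crude similarity, every window votes for the matching identity, which is the behavior the regional-voting argument relies on.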
Experiments
The PCA approach (with the Mahalanobis Cosine measure) is applied as the basic approach for measuring the similarity of two face images.
We use the FERET face database. "fa" is the set of gallery images containing 1196 grayscale images; "fb", "fc", "dup1" and "dup2" are sets of probe images.
We first normalized the images into a standard format: each image is rotated, scaled and cropped to a size of 150×130 pixels (150 pixels in each column, 130 pixels in each row), so that the distance between the centers of the two eyes is 70 pixels and the line between the two eyes lies on the 45th pixel below the upper boundary.
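This normalization can be sketched as a similarity-transform computation from the detected eye centers (the canonical horizontal eye positions below are our assumption; the text fixes only the 70-pixel eye distance and the row-45 eye line):

```python
import numpy as np

# Canonical eye positions for a 150x130 normalized face: eye line on
# row 45, eyes 70 pixels apart.  Horizontal centering (columns 30 and
# 100) is our assumption; the text does not state the columns.
LEFT_EYE, RIGHT_EYE = np.array([30.0, 45.0]), np.array([100.0, 45.0])

def eye_alignment(left, right):
    """Scale, rotation angle (radians) and translation of the similarity
    transform taking detected eye centers (x, y) to the canonical ones."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    src, dst = right - left, RIGHT_EYE - LEFT_EYE
    scale = np.hypot(*dst) / np.hypot(*src)
    angle = np.arctan2(src[1], src[0]) - np.arctan2(dst[1], dst[0])
    rot = np.array([[np.cos(-angle), -np.sin(-angle)],
                    [np.sin(-angle),  np.cos(-angle)]])
    shift = LEFT_EYE - scale * rot @ left
    return scale, angle, shift

# Eyes detected 140 px apart and level: the face must be shrunk by half.
scale, angle, shift = eye_alignment((60, 90), (200, 90))
assert abs(scale - 0.5) < 1e-9 and abs(angle) < 1e-9
```

The resulting (scale, angle, shift) would then be fed to any image-warping routine to produce the 150×130 crop.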
The algorithm described in the preceding section is implemented. Some details of the implementation are as follows:
1. The ellipse mask included in the FERET data collection is also applied for removing the background and hair corners. However, since we use "Electoral College" matching and the probe images are perturbed for reliable alignment, the mask is only applied to the subimages in the windows, rather than the entire image. Histogram equalization and intensity standardization are also applied to subimages in the windows.

2. We tested on different sized windows: the gallery images are equally divided into R (R = 1, 2, 3, …, 41) rows and C (C = 1, 2, 3, …, 41) columns respectively, so that there are R × C windows.
The recognition process therefore involves the following two stages:
Gallery Processing Stage
1. Divide each gallery image I_k into r·c equal-sized subimages I_k,ij (i = 1, 2, …, r; j = 1, 2, …, c). The positions of the two opposite corners of each gallery subimage I_k,ij are C_ij = (s + (m−2s)(i−1)/r + 1, s + (n−2s)(j−1)/c + 1) and C′_ij = (s + (m−2s)i/r, s + (n−2s)j/c)⁴. Therefore we come up with r·c sets of galleries, each having K images.
2. For each window w_ij (i = 1, 2, …, r; j = 1, 2, …, c):
(1) Read in the subimage of mask image within the window.
(2) Read in the gallery subimages I_k,ij, k = 1, 2, …, K.
(3) Standardize each gallery subimage I_k,ij as follows: apply the part of the mask obtained in (1) to remove background and hair (the remaining pixels are called effective pixels); apply histogram equalization and intensity standardization on the effective pixels in the subimage.

⁴We leave s pixels on each of the four sides for the perturbing purpose; a decimal is always rounded to the nearest integer.
(4) Use eigenface decomposition to find the first k⁵ eigenvalues and the corresponding eigenvectors.
The System Setup stage obtains one set of eigenvalues and eigenvectors for each of the r·c windows.
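The per-window setup step can be sketched with an SVD-based eigenface decomposition (an illustrative sketch of ours; the rounding rule for k is our assumption):

```python
import numpy as np

def window_eigenfaces(subimages, frac=0.4):
    """System-setup sketch for one window: eigenface decomposition of
    the flattened gallery subimages, keeping the first k components
    where k = frac * min(n_pixels, n_images) as in footnote 5."""
    X = np.array([s.ravel().astype(float) for s in subimages])  # K x M
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data gives the eigenvectors of the covariance
    # matrix without forming the (possibly huge) M x M matrix itself.
    _, svals, vecs = np.linalg.svd(Xc, full_matrices=False)
    k = int(round(frac * min(Xc.shape)))
    eigvals = (svals ** 2) / (len(X) - 1)
    return mean, eigvals[:k], vecs[:k]

rng = np.random.default_rng(1)
subs = [rng.normal(size=(12, 13)) for _ in range(10)]  # 10 gallery subimages
mean, vals, vecs = window_eigenfaces(subs)
assert vecs.shape == (4, 156)  # k = 0.4 * min(156, 10) = 4 components
```

One such (mean, eigenvalues, eigenvectors) triple would be stored per window for use in the recognition stage.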
Recognition Stage
For each probe image, do the following:

1. For each window w_ij (i = 1, 2, …, r; j = 1, 2, …, c):

(1) By perturbing the position of the window by 0, 1, …, s pixels in each direction, we obtain (2s+1)² probe sub-images P_ij(s1, s2) (s1, s2 ∈ {−s, −s+1, …, 0, …, s−1, s}) within the window⁶.

(2) For each probe subimage P_ij(s1, s2):
(a) Normalize it as in step 2(3) of the previous subsection.
(b) For each gallery subimage I_k,ij obtained by step 2(3) in the previous subsection, calculate the Mahalanobis Cosine similarity Sim(P_ij,s1,s2, I_k,ij) between its projection in the PCA space and the probe subimage P_ij,s1,s2; let Sim_ij(P, I_k) = max_{s1,s2}{Sim(P_ij,s1,s2, I_k,ij)} be the similarity of the gallery image I_k and the probe input image P within window w_ij.

⁵We choose k = 40% × min(M, N) in our experiments, where M and N represent the number of effective pixels in each image and the number of gallery images (1196 here), respectively.

⁶We choose s = 2 in this experiment.
(c) The gallery image I_x satisfying Sim_ij(P, I_x) = max_{k∈{1,…,K}}{Sim_ij(P, I_k)} is taken as the gallery image matched to probe image P for the window, and gets one vote for this window.
2. The gallery image Ix having the maximal votes is taken as the return image.
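The Mahalanobis Cosine measure used in step (b) can be sketched as a cosine between whitened PCA projections (following the common CSU/FERET-style definition; this is our sketch, not the paper's code):

```python
import numpy as np

def mahcos(a, b, mean, eigvals, eigvecs):
    """Mahalanobis Cosine similarity of two image vectors: project onto
    the eigenvectors, whiten each coordinate by sqrt(eigenvalue), then
    take the cosine of the angle between the whitened projections.
    Ranges over [-1, 1]; larger means more similar."""
    u = (eigvecs @ (a - mean)) / np.sqrt(eigvals)
    v = (eigvecs @ (b - mean)) / np.sqrt(eigvals)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Tiny 4-pixel "images" with a 2-component PCA basis:
mean = np.zeros(4)
eigvals = np.array([4.0, 1.0])
eigvecs = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0]])
x = np.array([2.0, 1.0, 0.0, 0.0])
assert abs(mahcos(x, x, mean, eigvals, eigvecs) - 1.0) < 1e-12  # self-similarity is 1
```

Because the measure is a cosine, taking the maximum over the perturbed sub-windows (step (b)) selects the most similar alignment, consistent with the max operations in the algorithm.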
Note that the following tricks are used in our implementation: (1) If there is a tie among t images in step (c) and one of them is the correct answer, we only count 1/t when we calculate the accuracy. We believe that this is a fair strategy. (2) When the number of effective pixels in the window, i.e., the pixels not removed by the mask, is less than half of the total number of pixels, we discard the window.
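The 1/t tie-counting rule can be sketched as follows (illustrative only; the per-probe input format is our assumption):

```python
def tie_credit(tied_identities, correct):
    """Accuracy credit for one probe under the 1/t tie rule: if the
    correct identity is among the t tied top-voted images, count 1/t;
    otherwise count 0."""
    t = len(tied_identities)
    return 1.0 / t if correct in tied_identities else 0.0

assert tie_credit(["A"], "A") == 1.0        # unambiguous correct answer
assert tie_credit(["A", "B"], "A") == 0.5   # two-way tie including the answer
assert tie_credit(["B", "C"], "A") == 0.0   # answer not among the tied images
```

Averaging this credit over all probes gives the reported accuracy without favoring any tie-breaking order.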
The results are shown in Figures 5A to 5D. The results of "Direct Popular Vote", i.e. using the PCA approach on the entire images directly, are also shown in the figures. It is also noted that, for the direct popular vote approach, we also perturb the probe images to find the best match, which slightly improved the accuracies for each probe set⁷.
The best recognition accuracies for "fb", "fc", "dup1" and "dup2" are 97.776% (when the images are partitioned into 10 rows and 8 columns), 96.985% (when the images are partitioned into 9 rows and 12 columns), 79.[?]% (when the images are partitioned into 10 rows and 12 columns) and 80.677% (when the images are partitioned into [?] rows and 8 columns).
Although it is not our purpose, we know that selecting different parameters, such as enlarging the perturbed area or removing the first one or two eigenfaces, can further improve the performance. Having applied the strategy of removing the two eigenfaces corresponding to the largest two eigenvalues, we get an improved recognition rate of 99.227% for the "fc" dataset (when we divide the entire image into 15 rows and 12 columns and thus come up with 15 × 12 = 180 windows for "Electoral College" type matching).
When we apply the strategy of removing the two eigenfaces corresponding to the largest two eigenvalues, and set the perturbed area to be 7 by 7 (s = 3), we come up with an improved recognition rate for the "fb" dataset of 99.07% (when we divide the entire image into 8 rows and 9 columns and come up with 8 × 9 windows for "Electoral College" type matching).

⁸It is also observed that, when the numbers of rows and columns that the images are partitioned into are within the range of 8 to 12, the minimal accuracies for "fb", "fc", "dup1" and "dup2" are 96.903%, 92.531%, 75.605% and 75.027% respectively.
For reference purposes, although it is not the main purpose of this paper, the known best result for each probe set is shown as follows⁹: fb: 98.4% ([12]¹⁰); fc: 82.0% (in [13], by the University of Southern California); dup1: [?]% ([14]); dup2: 6[?]% ([1?]).
It is easy to see that the Electoral College based approach has a marked improvement over the direct vote type recognition algorithms.
4 Conclusion
1. The theory of "Regional and National Voting" is applicable at least in the area of face recognition.
2. The regional voting version of the face recognition algorithm can improve the original algorithm markedly. As far as the authors know, no known algorithm has reached the performance we have reached in this paper, or anywhere close to it, on "fc", "dup1" and "dup2". We have also reached the best performance for "fb", although the margin is not significant compared to the best known approach.

⁹We exclude works where the identities of parts of probe images were used in a training process, which would significantly affect the final "accuracy".

¹⁰We are not sure if the 540 images from 270 subjects used in [12] for their training are subsets of "fa" and "fb". If that is the case, we think the actual recognition accuracy would be slightly lower than 98.4%.
3. Further improvement might be possible with techniques such as allowing more "shifts" or even "rotations" of each window during comparison, or having different weights in different windows.
Notice that we only use the simple PCA approach for matching images in each window, and we know that PCA is now not the best approach. It is expected that the face recognition performance can be further improved by embedding the Electoral strategy with more effective matching approaches.

Acknowledgements
This research of the first author is supported by a Discovery Grant of NSERC, Canada.
Portions of the research in this paper use the FERET database of facial images collected under the FERET program.
References
[1] Chen, L., Tokuda, N.: Regional voting versus national voting - stability of regional voting (extended abstract). In: International ICSC Symposium on Advances in Intelligent Data Analysis, Rochester, New York, USA (1999)
[2] Chen, L., Tokuda, N.: A general stability analysis on regional and national voting schemes against noise - why is an electoral college more stable than a direct popular election? Artificial Intelligence 163 (2005) 47-66
[3] Chen, L., Tokuda, N.: Robustness of regional matching scheme over global matching scheme. Artificial Intelligence 144 (2003) 213-232
[4] Chen, L., Tokuda, N.: Stability analysis of regional and national voting schemes by a continuous model. IEEE Transactions on Knowledge and Data Engineering 15 (2003) 1037-1042
[5] Chen, L., Tokuda, N., Nagai, A.: Capacity analysis for a two-level decoupled Hamming network for associative memory under a noisy environment. Neural Networks (2007) doi:10.1016/j.neunet.2006.05.045, http://www.sciencedirect.com/science/article/B6T08-4NlJRP5-l/2/c5ba4e29ec0cc053ff3229144da2a098.
[6] Ikeda, N., Watta, P., Artiklar, M., Hassoun, M.H.: A two-level Hamming network for high performance associative memory. Neural Networks 14 (2001) 1189-1200
[7] Phillips, P.J., Wechsler, H., Huang, J., Rauss, P.: The FERET database and evaluation procedure for face-recognition algorithms. Image and Vision Computing 16 (1998) 295-306
[8] Wang, P., Green, M.B., Ji, Q., Wayman, J.: Automatic eye detection and its validation. In: Proc. IEEE Workshop Face Recognition Grand Challenge Experiments. (2005) 164
[9] Feris, R.S., Gemmell, J., Toyama, K., Krüger, V.: Hierarchical wavelet networks for facial feature localization. In: IEEE International Conference on Automatic Face and Gesture Recognition. (2002) 118-123
[10] Ma, Y., Ding, X., Wang, Z., Wang, N.: Robust precise eye location under probabilistic framework. In: IEEE International Conference on Automatic Face and Gesture Recognition. (2004) 339-344
[11] Wang, P., Tran, L.C., Ji, Q.: Improving face recognition by online image alignment. In: Proceedings of the 18th International Conference on Pattern Recognition. Volume 1., Hong Kong (2006) 311-314

[12] Liao, S., Lei, Z., Zhu, X., Sun, Z., Li, S.Z., Tan, T.: Face recognition using ordinal features. In: Zhang, D., Jain, A.K., eds.: Proceedings of IAPR International Conference on Biometrics, Hong Kong (2006) 40-46
[13] Phillips, P.J., Moon, H., Rizvi, S.A., Rauss, P.J.: The FERET evaluation methodology for face-recognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (2000) 1090-1104
[14] Ahonen, T., Hadid, A., Pietikäinen, M.: Face recognition with local binary patterns. In: Proceedings of the 8th European Conference on Computer Vision. (2004) 469-481

Claims

WHAT IS CLAIMED IS:
1. A face recognition method comprising: associating a plurality of windows with areas of a query image of a face; for a plurality of the windows, comparing image data within the window to image data within corresponding windows in gallery images in a gallery; and identifying an identity corresponding to the query image by a voting technique.
2. A face recognition method according to claim 1 wherein the windows are rectangular.
3. A face recognition method according to claim 1 or 2 wherein the windows are non-overlapping.
4. A face recognition method according to any one of claims 1 to 3 wherein the voting technique comprises counting a number of the windows most closely matched by gallery windows associated with the identity.
5. A face recognition method according to any one of claims 1 to 3 wherein the voting technique comprises computing a weighted sum of similarity values for the windows most closely matched by gallery windows associated with the identity.
6. A method for image recognition comprising: providing a query image, Q; for each of a plurality of regions in the query image, applying an image matching algorithm to obtain a measure of similarity between the region of the query image and a corresponding region in each of a plurality of gallery images; identifying one or more best matching ones of the gallery images based on the measures of similarity.
7. A method according to claim 6 comprising normalizing the query image prior to applying the image matching algorithm.
8. A method according to claim 7 wherein normalizing the query image comprises one or more of scaling, resizing, cropping and rotating the query image.
9. A method according to any one of claims 6 to 8 wherein the image matching algorithm comprises an Eigenface algorithm.

10. A method according to any one of claims 6 to 9 wherein applying the image matching algorithm comprises computing one or more eigenvalues and eigenvectors for the region and comparing the one or more eigenvalues and eigenvectors to eigenvalues and eigenvectors determined for the corresponding regions of the gallery images.
11. A method according to any one of claims 6 to 10 wherein applying the image matching algorithm comprises, for each of a plurality of relative positions of the region of the query image and the corresponding region of one of the gallery images applying the image matching algorithm and taking the measure of similarity for one of the plurality of relative positions for which the measure of similarity represents the greatest similarity.
12. A method according to claim 11 wherein the plurality of relative positions include an unshifted position, a position in which the corresponding region is shifted upward relative to the region, a position in which the corresponding region is shifted downward relative to the region, a position in which the corresponding region is shifted right relative to the region, and a position in which the corresponding region is shifted left relative to the region.
13. A method according to claim 11 wherein the plurality of relative positions include all translations of the region relative to the corresponding region of up to N pixels in any direction wherein N is an integer.
14. A method according to claim 13 wherein N is 3 or 4.
15. A method according to any of claims 6 to 14 wherein identifying the one or more best matching ones of the gallery images comprises, for each of the regions, identifying a matching gallery image for which the measure of similarity to the corresponding region represents a greater similarity than do the corresponding regions of others of the gallery images, and identifying the gallery image or images which are the matching gallery image for a greatest number of the regions.
16. A method according to any of claims 6 to 14 wherein identifying the one or more best matching ones of the gallery images comprises summing the measures of similarity for each of the regions.
17. A method according to any of claims 6 to 14 wherein identifying the one or more best matching ones of the gallery images comprises using the measures of similarity in a weighted vote.
18. A method according to any one of claims 6 to 17 wherein the regions are rectangular.
19. A method according to claim 18 wherein the regions are arranged in a plurality of rows and a plurality of columns.
20. A method according to claim 19 wherein the regions are all equal in size.
21. A method according to claim 19 or 20 wherein the regions are arranged in an array having a number of columns in the range of 3 to 20 and a number of rows in the range of 3 to 20.
22. A method according to claim 19, 20 or 21 wherein the regions are each 11 to 13 pixels wide and 11 to 13 pixels high.
23. A method according to any one of claims 6 to 22 wherein providing a query image comprises obtaining an image using a digital camera.
24. A method according to claim 23 wherein the query image comprises an image of at least one person's face and the gallery images comprise images of people's faces.
25. A method according to claim 24 comprising, from the image obtained using the digital camera, extracting an image of one person's face and using the image of the one person's face as the query image.
26. A method according to any one of claims 6 to 25 comprising, based upon the one or more best matching ones of the gallery images, performing an action comprising one or more of: displaying a message; generating a warning or alarm; causing a copy of the query image to be forwarded and/or saved; actuating a mechanism; providing access to a computing resource; and generating billing information.
27. A method according to any one of claims 6 to 26 that employs the PCA (Eigenface) approach.
28. Image recognition apparatus comprising: a gallery comprising information representative of a plurality of gallery images; an input for receiving image data comprising a query image; a windowing system configured to identify in the image data, image data for regions within the query image; a similarity measurement system configured to apply an image matching algorithm to obtain a measure of similarity between the image data for a selected region of the query image and corresponding regions in the gallery images;
a voting system configured to identify one or more best matching ones of the gallery images based upon measures of similarity determined by the similarity measurement system for a plurality of regions within the query image identified by the windowing system.
29. Image recognition apparatus comprising: a data processor configured to execute instructions that cause the data processor to: receive a query image, Q; for each of a plurality of regions in the query image, apply an image matching algorithm to obtain a measure of similarity between the region of the query image and a corresponding region in each of a plurality of gallery images; and identify one or more best matching ones of the gallery images based on the measures of similarity.
30. Image recognition apparatus according to claim 29 comprising a camera wherein the data processor is configured to acquire digital images from the camera.
31. Image recognition apparatus according to claim 29 or 30 comprising a database wherein the plurality of gallery images are represented by data stored in the database.
32. Apparatus comprising any new, useful and inventive feature, combination of features or sub-combination of features as described or depicted herein.
33. Methods comprising any new, useful and inventive step, act, combination of steps and/or acts or sub-combination of steps and/or acts as described or depicted herein.
PCT/CA2008/002229 2007-12-21 2008-12-19 Methods and systems for electoral-college-based image recognition WO2009079769A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US1643707P 2007-12-21 2007-12-21
US61/016,437 2007-12-21

Publications (1)

Publication Number Publication Date
WO2009079769A1 true WO2009079769A1 (en) 2009-07-02

Family

ID=40800611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2008/002229 WO2009079769A1 (en) 2007-12-21 2008-12-19 Methods and systems for electoral-college-based image recognition

Country Status (1)

Country Link
WO (1) WO2009079769A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9483821B2 (en) 2014-01-28 2016-11-01 Samsung Medison Co., Ltd. Method and ultrasound apparatus for displaying ultrasound image corresponding to region of interest
CN107704520A (en) * 2017-09-05 2018-02-16 小草数语(北京)科技有限公司 Multifile search method and apparatus based on recognition of face

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030086593A1 (en) * 2001-05-31 2003-05-08 Chengjun Liu Feature based classification
US20040151347A1 (en) * 2002-07-19 2004-08-05 Helena Wisniewski Face recognition system and method therefor
US20040213437A1 (en) * 2002-11-26 2004-10-28 Howard James V Systems and methods for managing and detecting fraud in image databases used with identification documents
US20050102246A1 (en) * 2003-07-24 2005-05-12 Movellan Javier R. Weak hypothesis generation apparatus and method, learning apparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial expression recognition apparatus and method, and robot apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030086593A1 (en) * 2001-05-31 2003-05-08 Chengjun Liu Feature based classification
US20040151347A1 (en) * 2002-07-19 2004-08-05 Helena Wisniewski Face recognition system and method therefor
US20040213437A1 (en) * 2002-11-26 2004-10-28 Howard James V Systems and methods for managing and detecting fraud in image databases used with identification documents
US20050102246A1 (en) * 2003-07-24 2005-05-12 Movellan Javier R. Weak hypothesis generation apparatus and method, learning apparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial expression recognition apparatus and method, and robot apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Proc. 6th IEEE International Conference on Cognitive Informatics 6-8 August 2007", 6 August 2007, 08082007, article CHEN ET AL.: "A simple High Accuracy Approach for Face Recognition", pages: 92 - 98, XP031141680 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9483821B2 (en) 2014-01-28 2016-11-01 Samsung Medison Co., Ltd. Method and ultrasound apparatus for displaying ultrasound image corresponding to region of interest
CN107704520A (en) * 2017-09-05 2018-02-16 小草数语(北京)科技有限公司 Multifile search method and apparatus based on recognition of face

Similar Documents

Publication Publication Date Title
US11657525B2 (en) Extracting information from images
US10565433B2 (en) Age invariant face recognition using convolutional neural networks and set distances
Rathod et al. Automated attendance system using machine learning approach
US7680330B2 (en) Methods and apparatus for object recognition using textons
Ahonen et al. Face description with local binary patterns: Application to face recognition
Chan et al. Multiscale local phase quantization for robust component-based face recognition using kernel fusion of multiple descriptors
Jain et al. Face matching and retrieval in forensics applications
Burl et al. Face localization via shape statistics
KR100601957B1 (en) Apparatus for and method for determining image correspondence, apparatus and method for image correction therefor
Garcia et al. A neural architecture for fast and robust face detection
Huang et al. Robust face detection using Gabor filter features
Juefei-Xu et al. Unconstrained periocular biometric acquisition and recognition using COTS PTZ camera for uncooperative and non-cooperative subjects
Oh et al. An analytic Gabor feedforward network for single-sample and pose-invariant face recognition
WO2009158700A1 (en) Assessing biometric sample quality using wavelets and a boosted classifier
Aravindan et al. Robust partial fingerprint recognition using wavelet SIFT descriptors
Khanam et al. Implementation of the pHash algorithm for face recognition in a secured remote online examination system
Sawant et al. Age estimation using local direction and moment pattern (ldmp) features
El-Naggar et al. Which dataset is this iris image from?
WO2009079769A1 (en) Methods and systems for electoral-college-based image recognition
Hiremath et al. Depth and intensity Gabor features based 3D face recognition using symbolic LDA and AdaBoost
Jagadeesh et al. DBC based Face Recognition using DWT
Grover et al. Attendance monitoring system through face recognition
Adedeji et al. Comparative Analysis of Feature Selection Techniques For Fingerprint Recognition Based on Artificial Bee Colony and Teaching Learning Based Optimization
Sun et al. Iris recognition based on non-local comparisons
Sangodkar et al. Ear recognition for multimedia security

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 08865850

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 08865850

Country of ref document: EP

Kind code of ref document: A1