JP4992289B2 - Authentication system, authentication method, and program - Google Patents


Info

Publication number: JP4992289B2 (application number JP2006132580A)
Authority: JP (Japan)
Prior art keywords: authentication, individual, individual part, part, object
Legal status: Expired - Fee Related (the status is an assumption, not a legal conclusion)
Other versions: JP2007304857A (in Japanese)
Inventor: Osamu Toyama (修 遠山)
Original assignee: Konica Minolta Holdings, Inc. (コニカミノルタホールディングス株式会社)
Events: application JP2006132580A filed by Konica Minolta Holdings, Inc. with priority to JP2006132580A; publication of JP2007304857A; application granted and publication of JP4992289B2; anticipated expiration


Description

  The present invention relates to an object authentication technique.

  In recent years, with the development of network technology and the like, various digitized services have become widespread, and the need for non-face-to-face personal authentication that does not rely on human checks is increasing. Along with this, research on biometrics (biometric authentication) technology for automatically identifying an individual based on the person's biometric features has been actively conducted. Face authentication, one such biometric technology, is a non-contact authentication method and is expected to be applied in various fields, such as security using surveillance cameras and image database searches using a face as a key.

  As one such authentication technique, a technique has been proposed in which a registered image registered in advance and a photographed image capturing the authentication target are each acquired only as plane information ("texture information" or "two-dimensional information"), and whether the person in the registered image and the person in the photographed image are the same person is determined by comparing the plane information of the two images (see, for example, Non-Patent Document 1).
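
  As a concrete illustration of this purely two-dimensional comparison, the following minimal Python sketch scores two equal-size grayscale images with zero-mean normalized cross-correlation. The similarity measure and the threshold are illustrative assumptions; Non-Patent Document 1 is not reproduced here and may use a different measure.

```python
import numpy as np

def plane_similarity(registered: np.ndarray, captured: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two equally sized
    grayscale images, compared purely as plane (texture) information."""
    a = registered.astype(np.float64).ravel()
    b = captured.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
registered = rng.random((64, 64))
# A captured image of the same person should score near 1.0; a threshold
# tuned on validation data decides "same person or not".
same_person = plane_similarity(registered, registered) > 0.9
```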

  A technique has also been proposed in which, for general facial parts such as the contour, left eye, right eye, nose, and mouth, the features of shape and arrangement are weighted according to how much individuality they carry, and authentication accuracy is improved by emphasizing features rich in individuality over features poor in individuality (see, for example, Patent Document 1).

  In addition, a technique has been proposed that improves authentication accuracy by performing matching using not only general facial features such as the facial contour, hair, eyebrows, eyes, nostrils, and lips, but also unique features that characterize the individual, such as moles and wrinkles due to facial irregularities (see, for example, Patent Document 2).

Non-Patent Document 1: Dadet Pramady Hunt, Moto Kure, and Masahiko Taniuchi, "Personal Identification from Input Face Images with Various Postures", IEICE Transactions, D-II, Vol. J80-D-II, No. 8, pp. 2232-2238, August 1997
Patent Document 1: JP 2003-58888 A
Patent Document 2: JP 2005-242432 A

  However, these techniques have the problem that high-accuracy authentication cannot be performed unless the posture (orientation) of the person is the same in both images being compared.

  Further, the technique proposed in Patent Document 1 performs collation focusing only on the shape and arrangement of general facial parts, so its authentication accuracy cannot be said to be high.

  Furthermore, the technique proposed in Patent Document 2 performs pattern matching using the entire face, so authentication covers a wide range that includes areas not characteristic of each individual, resulting in reduced authentication accuracy.

  Such a problem is not limited to face authentication, but is common to object authentication techniques in general.

  The present invention has been made in view of the above problems, and an object of the present invention is to provide an authentication technique capable of performing highly accurate authentication when authenticating the identity of an object including a person.

In order to solve the above problems, the invention of claim 1 is an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising: individual part detection means for detecting an individual part, which is different from a common part provided in common to many objects of the same kind and is provided individually, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object; individual feature amount recognition means for recognizing an individual part feature amount relating to the individual part; and authentication means for performing authentication relating to the authentication object using the individual part feature amount. The individual part detection means detects the individual part by comparing a large number of reference images, each capturing one of many objects of the same kind, with at least one of the authentication target image and the comparison target image, for each partial image generated by dividing each image according to the same rule. The individual part detection means includes: deviation calculation means for calculating, for each partial image of the at least one image, a deviation of a predetermined parameter characterizing that partial image from a reference value of the parameter obtained from the reference images; determination means for determining, when the deviation exceeds a predetermined criterion, that the partial image concerned is an individual-part-containing partial image capturing an individual part; region recognition means for recognizing, when a plurality of individual-part-containing partial images are adjacent to one another, the adjacent partial images as one individual part region corresponding to a single individual part; means for calculating, when more than a predetermined number of individual part regions are recognized, for each individual part region, a factor deviation of a predetermined variation factor characterizing that region from a reference value of the variation factor obtained from the reference images; and means for selectively adopting the predetermined number of individual part regions in descending order of factor deviation while not adopting the remaining individual part regions.
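
To make the claim-1 pipeline concrete (block division, per-block deviation against reference statistics, merging of adjacent flagged blocks, and selecting a fixed number of regions), here is a minimal Python sketch. Choosing the block mean intensity as the "predetermined parameter", a z-score as the "deviation", and the peak block deviation as the "factor deviation" are illustrative assumptions, not the patent's prescriptions.

```python
import numpy as np

def detect_individual_regions(image, reference_images, block=8,
                              z_thresh=3.0, max_regions=10):
    """Sketch of the claim-1 pipeline. All images are assumed to be equally
    sized, pre-normalized grayscale arrays divided by the same rule."""
    H, W = image.shape
    gh, gw = H // block, W // block

    def block_means(img):
        # Mean intensity per block: the assumed "predetermined parameter".
        return img[:gh * block, :gw * block].reshape(
            gh, block, gw, block).mean(axis=(1, 3))

    refs = np.stack([block_means(r) for r in reference_images])
    mu, sigma = refs.mean(axis=0), refs.std(axis=0) + 1e-8
    z = np.abs((block_means(image) - mu) / sigma)   # deviation per block
    flagged = z > z_thresh                          # individual-part-containing blocks

    # Merge 4-adjacent flagged blocks into individual part regions (flood fill).
    visited = np.zeros_like(flagged, dtype=bool)
    regions = []
    for i in range(gh):
        for j in range(gw):
            if flagged[i, j] and not visited[i, j]:
                stack, cells = [(i, j)], []
                visited[i, j] = True
                while stack:
                    a, b = stack.pop()
                    cells.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < gh and 0 <= nb < gw
                                and flagged[na, nb] and not visited[na, nb]):
                            visited[na, nb] = True
                            stack.append((na, nb))
                regions.append(cells)

    # Keep at most max_regions regions, in descending order of peak deviation
    # (standing in for the claim's "factor deviation"); drop the rest.
    regions.sort(key=lambda cells: max(z[a, b] for a, b in cells), reverse=True)
    return regions[:max_regions]
```
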
Further, the invention of claim 2 is an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising: individual part detection means for detecting an individual part, which is different from a common part provided in common to many objects of the same kind and is provided individually, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object; individual feature amount recognition means for recognizing an individual part feature amount relating to the individual part; and authentication means for performing authentication relating to the authentication object using the individual part feature amount. The individual part detection means detects the individual part by comparing a large number of reference images, each capturing one of many objects of the same kind, with at least one of the authentication target image and the comparison target image, for each partial image generated by dividing each image according to the same rule, and includes: deviation calculation means for calculating, for each partial image of the at least one image, a deviation of a predetermined parameter characterizing that partial image from a reference value of the parameter obtained from the reference images; determination means for determining, when the deviation exceeds a predetermined criterion, that the partial image concerned is an individual-part-containing partial image capturing an individual part; and deviation calculation means for calculating, for each individual part, a factor deviation of a predetermined variation factor characterizing the region corresponding to that individual part from a reference value of the variation factor obtained from the reference images. The authentication means performs authentication relating to the authentication object by weighting each individual part feature amount according to the factor deviation of the corresponding individual part.
The invention of claim 3 is an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising: individual part detection means for detecting an individual part, which is different from a common part provided in common to many objects of the same kind and is provided individually, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object; individual feature amount recognition means for recognizing an individual part feature amount relating to the individual part; and authentication means for performing authentication relating to the authentication object using the individual part feature amount. The individual part detection means includes region recognition means that determines a common subspace from a large number of reference images, each capturing one of many objects of the same kind, projects, for each pixel constituting at least one of the authentication target image and the comparison target image, a parameter relating to that pixel onto the subspace, and recognizes an image region formed by the group of pixels whose pixel projections do not fall within the common subspace as an individual part region corresponding to an individual part.
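
A common way to realize such a "common subspace" is principal component analysis over the reference images; the sketch below flags pixels whose reconstruction residual (the component lying outside the subspace) is large. The subspace dimension, the threshold, and the use of whole-image PCA with per-pixel residuals in place of a literal per-pixel projection are all illustrative assumptions.

```python
import numpy as np

def individual_pixels(image_vec, reference_vecs, k=20, resid_thresh=0.5):
    """Flag pixels that fall outside a PCA 'common subspace' learned from
    reference images. Images are assumed flattened to equal-length vectors."""
    X = np.stack(reference_vecs).astype(np.float64)   # (n_ref, d)
    mean = X.mean(axis=0)
    # Principal axes of the reference set span the common subspace.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                                    # (k, d)
    centered = image_vec.astype(np.float64) - mean
    projection = basis.T @ (basis @ centered)         # component inside the subspace
    residual = np.abs(centered - projection)          # component outside it
    return residual > resid_thresh                    # True where pixels look "individual"
```
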
Further, the invention of claim 4 is an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising: individual part detection means for detecting an individual part, which is different from a common part provided in common to many objects of the same kind and is provided individually, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object; individual feature amount recognition means for recognizing an individual part feature amount relating to the individual part; and authentication means for performing authentication relating to the authentication object using the individual part feature amount. The individual part detection means includes a first detection unit that detects the individual part in the authentication target image and a second detection unit that detects the individual part in the comparison target image; the individual feature amount recognition means includes a first recognition unit that recognizes a first feature amount for the individual part detected by the first detection unit and a second recognition unit that recognizes a second feature amount for the individual part detected by the second detection unit; and the authentication means determines that the authentication object and the comparison object are the same when, in both the one-direction determination using the first feature amount and the reverse-direction determination using the second feature amount, a predetermined criterion indicating that the features of the objects match is determined to be satisfied.
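
The bidirectional rule of claim 4 reduces to a conjunction of two one-way tests, as in this minimal sketch (`match` is an assumed similarity callable and the threshold is illustrative):

```python
def bidirectional_match(auth_features, comp_features, match, threshold):
    """Judge two objects identical only when the one-direction test (features
    detected on the authentication image, scored against the comparison image)
    and the reverse-direction test both pass."""
    forward = match(auth_features, comp_features) >= threshold
    backward = match(comp_features, auth_features) >= threshold
    return forward and backward
```
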
Further, the invention of claim 5 is an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising: individual part detection means for detecting an individual part, which is different from a common part provided in common to many objects of the same kind and is provided individually, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object; individual feature amount recognition means for recognizing an individual part feature amount relating to the individual part; and authentication means for performing authentication relating to the authentication object using the individual part feature amount. The individual part detection means includes a first detection unit that detects the individual part in the authentication target image and a second detection unit that detects the individual part in the comparison target image; the individual feature amount recognition means includes a first recognition unit that recognizes a first feature amount for the individual part detected by the first detection unit and a second recognition unit that recognizes a second feature amount for the individual part detected by the second detection unit; and the authentication means includes a determination unit that performs a one-direction determination using the first feature amount and a reverse-direction determination using the second feature amount. The authentication system further includes mode switching means that switches between a first mode in which the one-direction determination is performed and a second mode in which the reverse-direction determination is performed.

The invention of claim 6 is the authentication system according to any one of claims 1 to 5, further comprising common feature amount recognition means for recognizing, from the authentication target image and the comparison target image, a common part feature amount relating to the common part, wherein the authentication means performs authentication relating to the authentication object using the individual part feature amount and the common part feature amount.

The invention of claim 7 is the authentication system according to claim 4 or claim 5, wherein the authentication means determines that the authentication object and the comparison object are the same when, in both a first determination using the individual part feature amount and a second determination using the common part feature amount, a predetermined criterion indicating that the features of the objects match is determined to be satisfied.

The invention of claim 8 is the authentication system according to any one of claims 4 to 6, wherein the individual part detection means detects the individual part by comparing a large number of reference images, each capturing one of many objects of the same kind, with at least one of the authentication target image and the comparison target image, for each partial image generated by dividing each image according to the same rule.

The invention of claim 9 is the authentication system according to claim 8, wherein the individual part detection means includes: deviation calculation means for calculating, for each partial image of at least one of the authentication target image and the comparison target image, a deviation of a predetermined parameter characterizing that partial image from a reference value of the parameter obtained from the reference images; and determination means for determining, when the deviation exceeds a predetermined criterion, that the partial image concerned is an individual-part-containing partial image capturing an individual part.

The invention of claim 10 is the authentication system according to claim 9, wherein the individual part detection means further includes region recognition means for recognizing, when a plurality of individual-part-containing partial images are adjacent to one another, the adjacent partial images as one individual part region corresponding to a single individual part.

The invention of claim 11 is the authentication system according to claim 3, wherein the individual part detection means further includes means for selectively adopting, when more than a predetermined number of individual part regions are recognized by the region recognition means, the predetermined number of individual part regions in descending order of the deviation angle of their pixel projections from the common subspace, while not adopting the remaining individual part regions.

The invention of claim 12 is the authentication system according to claim 3 or claim 11, wherein the authentication means performs authentication relating to the authentication object by weighting, for each individual part, the individual part feature amount according to the deviation angle of the pixel projections relating to that individual part from the common subspace.

  The invention of claim 13 is the authentication system according to any one of claims 1 to 12, further comprising position calculation means for calculating a relative position that expresses the position of the individual part as a relative value based on the position of the common part.

  The invention according to claim 14 is the authentication system according to claim 13, wherein the relative position includes a three-dimensional relative position based on the position of the common part.

  The invention according to claim 15 is the authentication system according to claim 13 or claim 14, wherein the individual part feature amount includes information indicating the relative position.

  The invention of claim 16 is the authentication system according to any one of claims 1 to 15, wherein the individual feature amount recognition means includes feature amount calculation means for calculating the individual part feature amount using at least one of the luminance values of the image region corresponding to the individual part and the differential values of the pixel values constituting that image region.

Further, the invention of claim 17 is an authentication method in an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising: (a) a step of detecting an individual part, which is different from a common part provided in common to many objects of the same kind and is provided individually, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object; (b) a step of recognizing an individual part feature amount relating to the individual part; and (c) a step of performing authentication relating to the authentication object using the individual part feature amount. In step (a), the individual part is detected by comparing a large number of reference images, each capturing one of many objects of the same kind, with at least one of the authentication target image and the comparison target image, for each partial image generated by dividing each image according to the same rule, and step (a) includes: (a-1) a step of calculating, for each partial image of the at least one image, a deviation of a predetermined parameter characterizing that partial image from a reference value of the parameter obtained from the reference images; (a-2) a step of determining, when the deviation exceeds a predetermined criterion, that the partial image concerned is an individual-part-containing partial image capturing an individual part; (a-3) a step of recognizing, when a plurality of individual-part-containing partial images are adjacent to one another, the adjacent partial images as one individual part region corresponding to a single individual part; (a-4) a step of calculating, when more than a predetermined number of individual part regions are recognized in step (a-3), for each individual part region, a factor deviation of a predetermined variation factor characterizing that region from a reference value of the variation factor obtained from the reference images; and (a-5) a step of selectively adopting the predetermined number of individual part regions in descending order of factor deviation while not adopting the remaining individual part regions.

The invention of claim 18 is a program that, when executed by a computer included in an authentication system, causes the authentication system to function as the authentication system according to any one of claims 1 to 16.

The invention of claim 19 is an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising: an information storage system that stores feature amounts of the authentication object in storage media; and an authentication execution system that performs authentication relating to the authentication object using the stored feature amounts. The information storage system includes storage control means that recognizes, from images capturing the authentication object, the feature amount of a common part provided in common to objects of the same kind and stores it in a first storage medium, and recognizes the feature amount of an individual part, which is different from the common part and is provided individually, and stores it in a second storage medium different from the first storage medium. The authentication execution system includes: first reception means for receiving the first storage medium; second reception means for receiving the second storage medium; first authentication means for performing authentication relating to the authentication object using the feature amount of the common part stored in the first storage medium received by the first reception means; and second authentication means for performing authentication relating to the authentication object using the feature amount of the individual part stored in the second storage medium received by the second reception means. The information storage system detects the individual part by comparing a large number of reference images, each capturing one of many objects of the same kind, with at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object, for each partial image generated by dividing each image according to the same rule, and includes: deviation calculation means for calculating, for each partial image of the at least one image, a deviation of a predetermined parameter characterizing that partial image from a reference value of the parameter obtained from the reference images; determination means for determining, when the deviation exceeds a predetermined criterion, that the partial image concerned is an individual-part-containing partial image capturing an individual part; region recognition means for recognizing, when a plurality of individual-part-containing partial images are adjacent to one another, the adjacent partial images as one individual part region corresponding to a single individual part; means for calculating, when more than a predetermined number of individual part regions are recognized by the region recognition means, for each individual part region, a factor deviation of a predetermined variation factor characterizing that region from a reference value of the variation factor obtained from the reference images; and means for selectively adopting the predetermined number of individual part regions in descending order of factor deviation while not adopting the remaining individual part regions.

Further, the invention of claim 20 is an authentication method in an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising: (i) a step of recognizing, from an image capturing the authentication object, the feature amount of a common part provided in common to many objects of the same kind and storing it in a first storage medium, and recognizing, from the image, the feature amount of an individual part, which is different from the common part and is provided individually, and storing it in a second storage medium different from the first storage medium; (ii) a step of performing authentication relating to the authentication object using the feature amount of the common part stored in the first storage medium; and (iii) a step of performing authentication relating to the authentication object using the feature amount of the individual part stored in the second storage medium. In step (i), the individual part is detected by comparing a large number of reference images, each capturing one of many objects of the same kind, with at least one of the authentication target image capturing the authentication object and a comparison target image capturing the comparison object, for each partial image generated by dividing each image according to the same rule, and step (i) includes: (i-1) a step of calculating, for each partial image of the at least one image, a deviation of a predetermined parameter characterizing that partial image from a reference value of the parameter obtained from the reference images; (i-2) a step of determining, when the deviation exceeds a predetermined criterion, that the partial image concerned is an individual-part-containing partial image capturing an individual part; (i-3) a step of recognizing, when a plurality of individual-part-containing partial images are adjacent to one another, the adjacent partial images as one individual part region corresponding to a single individual part; (i-4) a step of calculating, when more than a predetermined number of individual part regions are recognized in step (i-3), for each individual part region, a factor deviation of a predetermined variation factor characterizing that region from a reference value of the variation factor obtained from the reference images; and (i-5) a step of selectively adopting the predetermined number of individual part regions in descending order of factor deviation while not adopting the remaining individual part regions.
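
Claims 19 and 20 hinge on keeping the common-part and individual-part feature amounts on separate storage media so that the two authentication stages can run independently. A minimal sketch, modeling the two media as two directories and the feature amounts as JSON-serializable lists (both illustrative assumptions):

```python
import json
from pathlib import Path

def store_features(person_id, common_fv, individual_fv,
                   common_store=Path("common_db"),
                   individual_store=Path("individual_db")):
    """Common-part and individual-part feature amounts go to two distinct
    storage media (modeled here as two directories), so a verifier holding
    only one medium can run only the matching stage it supports."""
    common_store.mkdir(exist_ok=True)
    individual_store.mkdir(exist_ok=True)
    (common_store / f"{person_id}.json").write_text(json.dumps(common_fv))
    (individual_store / f"{person_id}.json").write_text(json.dumps(individual_fv))

def authenticate(person_id, probe_fv, store, match, threshold):
    """First authentication (common parts) and second authentication
    (individual parts) differ only in which store they read."""
    registered = json.loads((store / f"{person_id}.json").read_text())
    return match(probe_fv, registered) >= threshold
```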

According to the invention described in any one of claims 1 to 18, a configuration is adopted in which an individual part, provided individually and separate from the common part provided in common to objects of the same kind, is detected in at least one of the authentication object and the comparison object, and the feature amount of that individual part is recognized and used. Authentication can thus exploit the characteristic parts that differ from object to object, that is, individually, so it is possible to provide an authentication technique capable of highly accurate authentication when authenticating the identity of an object, including a person.

Further, according to the invention described in claim 6, by adopting a configuration in which authentication is performed using both the feature amounts of the individual parts and the feature amounts of the common parts, both rough features and fine features are used, so authentication can be performed with high accuracy.

Further, according to the invention described in claim 7, by adopting a configuration in which the authentication object and the comparison object are determined to be the same only when the features are determined to match in both the determination using the feature amounts of the individual parts and the determination using the feature amounts of the common parts, authentication according to this stricter criterion can be performed with higher accuracy.

Further, according to the invention described in any one of claims 1, 2, and 8, by adopting a configuration in which individual parts are detected by comparing a large number of reference images, each capturing one of many objects of the same kind, with an image capturing at least one of the authentication object and the comparison object, for each partial image divided according to the same rule, the computation time for detecting individual parts can be shortened. Moreover, individual parts, which are large differences beyond the subtle differences relating to common parts, can be detected more reliably.

Further, according to the invention described in any one of claims 1, 2, and 9, by adopting a configuration in which a partial image whose deviation from the reference value of the characteristic obtained from the reference images exceeds a predetermined criterion is determined to be an image capturing an individual part, an individual part, which is a relatively large difference, can be detected easily.

Moreover, according to the invention described in claim 1 or claim 10, by adopting a configuration in which, when a plurality of partial images capturing an individual part are adjacent to one another, the adjacent partial images are recognized as one partial image region corresponding to a single individual part, a single individual part can be detected more reliably.

Further, according to the invention described in claim 1, when more partial image regions are recognized than the predetermined number of individual parts, a predetermined number of partial image regions are selectively adopted in descending order of deviation from the reference values obtained from the reference images, while the remaining regions are not adopted. Because authentication can thereby be narrowed down to individual parts with relatively large features, the computation time required for authentication can be shortened.

According to the invention described in claim 2, by adopting a configuration in which authentication is performed by weighting the feature amount of each individual part according to its deviation from the reference value obtained from the reference images, authentication can emphasize individual parts with larger features, so the authentication accuracy can be further improved.

Further, according to the invention described in claim 3, by adopting a configuration in which individual parts are detected in at least one of the authentication object and the comparison object using the subspace method, individual parts can be detected more stably and reliably.

  According to the invention of claim 11, when more than a predetermined number of individual part regions are recognized, a predetermined number of individual part regions are selectively adopted in descending order of their deviation angle from the common subspace determined from a large number of reference images, each capturing one of many objects of the same kind, while the remaining regions are not adopted. Because authentication can thereby be narrowed down to individual parts with relatively large features, the computation time required for authentication can be shortened.

  According to the invention described in claim 12, by adopting a configuration in which authentication is performed by weighting the feature amount of each individual part according to its deviation angle from the common subspace, authentication can emphasize individual parts with larger features, so the authentication accuracy can be further improved.

  According to the invention of claim 13, by adopting a configuration in which the position of an individual part is expressed as a relative value based on the position of a common part, highly accurate authentication can be performed regardless of changes in the state of the authentication object, such as changes in facial expression.

  According to the invention described in claim 14, by adopting a configuration in which the position of an individual part is expressed as a three-dimensional relative position based on the position of a common part, stable authentication is possible regardless of the orientation of the authentication object, such as a face.

  Also, according to the invention described in claim 15 or claim 16, by adopting a configuration in which authentication is performed with the individual part feature amount including information indicating the relative position of the individual part with respect to the position of the common part, more reliable and accurate authentication is possible.

According to the invention described in claim 4, the authentication object and the comparison object are determined to be the same only when a predetermined criterion indicating that the features match is satisfied in both the one-direction determination, performed by recognizing and using the feature amounts of the individual parts relating to the authentication object, and the reverse-direction determination, performed by recognizing and using the feature amounts of the individual parts relating to the comparison object. This configuration enables higher-accuracy authentication.

Further, according to the invention described in claim 5, by adopting a configuration that switches between a mode in which the feature amounts of the individual parts relating to the authentication object are recognized and used for authentication and a mode in which the feature amounts of the individual parts relating to the comparison object are recognized and used for authentication, high-accuracy authentication suited to the number of comparison objects is possible.

Further, according to the invention described in claim 19 or claim 20, the feature amount of the common part provided in common to objects of the same kind and the feature amount of the individual part, which is different from the common part and is provided individually, are recognized from images capturing the authentication object and stored in separate storage media, and authentication using the common part feature amounts stored in one storage medium and authentication using the individual part feature amounts stored in the other storage medium are executed separately. This configuration makes it possible to execute authentication according to the required authentication accuracy.

  Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the following embodiment, face authentication will be described, but the present invention can also be applied to authentication of other objects.

<Embodiment>
<Outline of authentication system>
FIG. 1 is a configuration diagram showing an authentication system 1 according to an embodiment of the present invention. As shown in FIG. 1, the authentication system 1 includes a controller 10 and two image capturing cameras (hereinafter also simply referred to as “cameras”) CA1 and CA2.

  The camera CA1 and the camera CA2 are arranged so that the face of the person HM to be photographed can be photographed from different positions. When a face image of a person (a person to be authenticated or a person to be registered) is photographed by the cameras CA1 and CA2, the appearance information of the person obtained by the photographing, that is, two face images, is sent to the controller 10 via a communication line. The communication method for image data between each camera and the controller 10 is not limited to a wired method and may be wireless.

  FIG. 2 is a diagram showing the configuration outline of the controller 10. As shown in FIG. 2, the controller 10 is configured as a general computer such as a personal computer, including a CPU 2, a storage unit 3, a media drive 4, a display unit 5 such as a liquid crystal display, an operation unit 6 serving as an input unit (a keyboard 6a and a mouse 6b, which is a pointing device), and a communication unit 7 such as a network card.

  The storage unit 3 includes a plurality of storage media, specifically a hard disk drive (HDD) 3a and a RAM (semiconductor memory) 3b capable of faster processing than the HDD 3a. The media drive 4 can read information recorded on a portable recording medium 8 such as a CD-ROM, a DVD (Digital Versatile Disk), a flexible disk, or a memory card. In addition, the storage unit 3 temporarily stores, in the RAM 3b or the HDD 3a, various data generated in the course of data processing in the controller 10.

  Note that the information supplied to the controller 10 is not limited to being supplied via the recording medium 8, and may be supplied via a network such as a LAN or the Internet.

<Functional configuration of controller>
FIG. 3 is a block diagram showing the various functions provided in the controller 10. FIG. 4 is a block diagram illustrating the detailed functional configuration of the image normalization unit 14, FIG. 5 is a block diagram illustrating the detailed functional configuration of the feature recognition unit 15, FIG. 6 is a block diagram illustrating the detailed functional configuration of the registration unit 18, and FIG. 7 is a block diagram showing the detailed functional configuration of the authentication unit 19.

  The various functions of the controller 10 conceptually denote functions realized by executing predetermined software programs (hereinafter also simply referred to as "programs") using various hardware, such as the CPU, in the controller 10.

  As shown in FIG. 3, the controller 10 includes an image input unit 11, a face area search unit 12, a face part detection unit 13, an image normalization unit 14, a feature recognition unit 15, an operation input unit 16, a mode switching unit 17, A registration unit 18, an authentication unit 19, and an output unit 20 are provided.

  The image input unit 11 has a function of inputting two images taken by the cameras CA1 and CA2 to the controller 10.

  The face area search unit 12 has a function of specifying a face area from the input face image.

  The face part detection unit 13 has a function of detecting, from the specified face region, the positions of the characteristic parts of the face that are commonly provided to a large number of persons (for example, eyes, eyebrows, nose, and mouth; hereinafter collectively referred to as "common feature parts").

  The image normalization unit 14 has a function of normalizing information related to the authentication target person (or registration target person). Details of the image normalization unit 14 will be described later.

  The feature recognition unit 15 has a function of recognizing, from the information obtained by the image normalization unit 14, the three-dimensional information and the two-dimensional information relating to the parts constituting the face of the person to be authenticated (or the person to be registered). The feature recognition unit 15 recognizes not only information relating to the common feature parts provided in a general human face, but also the characteristic parts of the face provided for each individual person and different from the common feature parts (for example, moles, scars, and wrinkles; hereinafter collectively referred to as "individual feature parts"), together with various information such as position information relating to those individual feature parts. Details of the feature recognition unit 15 will be described later.

  The operation input unit 16 has a function of receiving a signal input in response to a user operation on the operation unit 6 and realizing various operations and controls.

  In response to a signal from the operation input unit 16, the mode switching unit 17 has a function of selectively switching the controller 10 to any one of four modes: a registration mode, a detailed verification mode, a high-speed verification mode, and a 1-to-n identification mode. The registration mode is a mode for registering the facial features of a person to be registered; the detailed verification mode is a mode for checking with extremely high accuracy whether or not the authentication target person and the comparison target person are the same person; the high-speed verification mode is a mode for checking whether or not the authentication target person and the comparison target person are the same person with relatively high accuracy in a short time; and the 1-to-n identification mode is a mode for identifying whether or not the authentication target person is included among a large number (n) of already registered persons. Hereinafter, the detailed verification mode, the high-speed verification mode, and the 1-to-n identification mode are collectively referred to as "authentication modes" as appropriate. A minimal sketch of this switching behavior follows.
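
  The four modes form a simple state that the operation input drives; the names and structure below are assumptions for illustration, not the patent's implementation.

```python
from enum import Enum, auto

class Mode(Enum):
    REGISTRATION = auto()             # register a person's facial features
    DETAILED_VERIFICATION = auto()    # very high-accuracy 1-to-1 check
    HIGH_SPEED_VERIFICATION = auto()  # faster, relatively accurate 1-to-1 check
    ONE_TO_N_IDENTIFICATION = auto()  # search among n registered persons

class ModeSwitcher:
    """Minimal sketch of the mode switching unit 17 driven by operation input."""
    def __init__(self) -> None:
        self.mode = Mode.REGISTRATION
    def switch(self, requested: Mode) -> None:
        self.mode = requested
```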

  The registration unit 18 has a function of registering information recognized by the feature recognition unit 15 in the HDD 3a in the storage unit 3 when the controller 10 is set to the registration mode. Details of the registration unit 18 will be described later.

  The authentication unit 19 is configured mainly for face authentication, and has a function of authenticating each individual with a face image. Details of the authentication unit 19 will also be described later.

  The output unit 20 has a function of outputting the authentication result obtained by the authentication unit 19.

  Next, a detailed configuration of the image normalization unit 14 will be described with reference to FIG.

  As illustrated in FIG. 4, the image normalization unit 14 includes a three-dimensional reconstruction unit 141, an optimization unit 142, and a correction unit 143.

  The three-dimensional reconstruction unit 141 has a function of calculating the three-dimensional coordinates of each common feature part constituting the face from its coordinates in the input images, with reference to camera parameters, stored in advance in the camera parameter storage unit 51, that indicate the positions and orientations of the cameras CA1 and CA2.

  The optimization unit 142 has a function of generating an individual model from the standard model of a face stored in the three-dimensional model database (DB) 52, using the three-dimensional coordinates of each part calculated by the three-dimensional reconstruction unit 141.

  The correction unit 143 has a function of correcting the individual model generated by the optimization unit 142.

  Through these processing units 141 to 143, the information relating to the person to be authenticated (or the person to be registered) is normalized and converted into a state that is easy to compare. Specifically, the face orientation and size (number of pixels) are normalized. The individual model created by these processing units includes both the three-dimensional information and the two-dimensional information relating to the authentication target person (or registration target person). "Three-dimensional information" is information relating to a three-dimensional structure, composed of three-dimensional coordinate values and the like, and "two-dimensional information" is information relating to a planar configuration, composed of surface information (texture information) and/or planar position information and the like.

  Next, a detailed configuration of the feature recognition unit 15 will be described with reference to FIG.

  As shown in FIG. 5, the feature recognition unit 15 includes a common part feature amount recognition unit 151, an information compression unit 152, an individual part region recognition unit 153, an individual part region selection adoption unit 154, and an individual part feature amount recognition unit 155. Have.

  The common part feature amount recognition unit 151 has a function of recognizing the three-dimensional information and the two-dimensional information from the three-dimensional face model obtained by the image normalization unit 14 as feature amounts relating to the common feature parts (hereinafter also referred to as "common part feature amounts").

  The information compression unit 152 has a function of converting the three-dimensional information and the two-dimensional information extracted by the common part feature amount recognition unit 151 into common part feature amounts appropriate for face authentication, thereby compressing the three-dimensional information and the two-dimensional information used for face authentication. This information compression function is realized using information stored in the part basis vector database (DB) 61.

  The individual part region recognition unit 153 has a function of analyzing an image obtained by differentiating the image of the three-dimensional face model obtained by the image normalization unit 14 (hereinafter also referred to as the "normalized image"), thereby recognizing, in the normalized image, each image region capturing an individual feature part (hereinafter also referred to as an "individual part region"). This individual part region recognition function is realized using information stored in the statistical differential image information database (DB) 62 and the like.

  The individual part region selection adoption unit 154 has a function of selectively adopting, when the number of individual part regions recognized by the individual part region recognition unit 153 exceeds a predetermined number (for example, 10), the predetermined number of individual part regions in descending order of how well they distinguish the person from others. When the number of individual part regions recognized by the individual part region recognition unit 153 does not exceed the predetermined number (for example, 10), the individual part region selection adoption unit 154 adopts all the individual part regions as they are.

  The individual part feature amount recognition unit 155 has a function of recognizing the feature amounts of the individual part regions adopted by the individual part region selection adoption unit 154, that is, feature amounts relating to the individual parts such as position and pixel values (hereinafter referred to as "individual part feature amounts").

  Next, a detailed configuration of the registration unit 18 will be described with reference to FIG.

  The registration unit 18 includes a storage control unit 181.

  The storage control unit 181 has a function of controlling construction of the feature amount database (DB) 71 by storing in the HDD 3a, for each registration target person, the common part feature amounts and the individual part feature amounts recognized by the feature recognition unit 15, distinguishing between the two.

  Next, a detailed configuration of the authentication unit 19 will be described with reference to FIG.

  The authentication unit 19 includes a common feature amount reading unit 191, a common part comparison unit 192, an individual part comparison position determination unit 193, an individual part comparison unit 194, and a comprehensive determination unit 195.

  The common feature amount reading unit 191 has a function of reading the common part feature amounts relating to the comparison target person from the feature amount DB 71.

  The common part comparison unit 192 has a function of calculating the similarity between the common part feature amounts relating to the authentication target person recognized by the feature recognition unit 15 and the common part feature amounts relating to the comparison target person read by the common feature amount reading unit 191.

  In the high-speed verification mode, the individual part comparison position determination unit 193 determines, in the image capturing the authentication target person, the position corresponding to the position of an individual feature part relating to the comparison target person as the comparison position for that individual feature part; in the 1-to-n identification mode, it determines, in the image capturing the comparison target person, the position corresponding to the position of an individual feature part relating to the authentication target person as the comparison position for that individual feature part.

  The individual part comparison unit 194 has a function of calculating the similarity between the individual part feature amounts relating to the authentication target person recognized by the feature recognition unit 15 at the comparison positions determined by the individual part comparison position determination unit 193 and the individual part feature amounts relating to the comparison target person registered in advance in the feature amount DB 71, and a function of calculating the similarity between the individual part feature amounts relating to the comparison target person registered in advance in the feature amount DB 71 and the individual part feature amounts relating to the authentication target person recognized by the feature recognition unit 15.

  The comprehensive determination unit 195 has a function of determining whether or not the authentication target person and the comparison target person are the same person, based on the similarity relating to the common feature parts calculated by the common part comparison unit 192 and the similarity relating to the individual feature parts calculated by the individual part comparison unit 194.
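
  One plausible reading of this comprehensive determination is a weighted fusion of the two kinds of similarity followed by a threshold test; the fusion rule and constants below are assumptions for illustration only, not taken from the patent.

```python
def comprehensive_decision(common_sim, individual_sims, weights=None,
                           alpha=0.5, threshold=0.8):
    """Fuse the common-part similarity with the (optionally weighted)
    individual-part similarities and compare against a decision threshold."""
    if weights is None:
        weights = [1.0] * len(individual_sims)
    total_w = sum(weights)
    individual_score = (sum(w * s for w, s in zip(weights, individual_sims)) / total_w
                        if total_w else 0.0)
    score = alpha * common_sim + (1.0 - alpha) * individual_score
    return score >= threshold
```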

<Operation in registration mode>
Hereinafter, each function related to the registration mode of the controller 10 described above will be described in more detail.

  FIG. 8 is a flowchart showing the operation of the controller 10 when the registration mode is set, and FIG. 9 is a detailed flowchart of the image normalization processing step (step SP4). FIG. 10 is a diagram illustrating the feature points of the common feature parts in a face image. FIG. 11 is a schematic diagram showing how three-dimensional coordinates are calculated from feature points in two-dimensional images using the principle of triangulation. In FIG. 11, reference G1 indicates an image captured by the camera CA1 and input to the controller 10, and reference G2 indicates an image captured by the camera CA2 and input to the controller 10. The point Q20 in the images G1 and G2 corresponds to the right end of the mouth in FIG. 10.

  In the following, a case where face authentication information is actually registered, using a person photographed by the cameras CA1 and CA2 as the registration target person, will be described. Here, a case is exemplified in which three-dimensional shape information, measured according to the principle of triangulation using the images obtained by the cameras CA1 and CA2, is used as the three-dimensional information, and texture information of those images is used as the two-dimensional information.

  As shown in FIG. 8, in the processing from step SP1 to step SP6, the controller 10 acquires the common part feature amounts relating to the face of the registration target person based on images capturing that face (hereinafter also referred to as "registration target images"); in the processing from step SP7 to step SP9, it acquires the individual part feature amounts relating to the face of the registration target person based on the registration target images; and through the processing of step SP10, it registers the facial features of the registration target person.

  First, in step SP1, face images (registration target images) of the person (registration target person) photographed by the cameras CA1 and CA2 are input to the controller 10 via the communication line. The cameras CA1 and CA2 that capture the face images are each configured as a general imaging device capable of capturing two-dimensional images. Further, the camera parameters Bi (i = 1, ..., N) indicating the position and orientation of each camera CAi are known and stored in advance in the camera parameter storage unit 51 (FIG. 4). Here, N indicates the number of cameras. In the present embodiment, the case of N = 2 is illustrated, but N may be 3 or more (three or more cameras may be used). The camera parameters Bi will be described later.

  Next, in step SP2, a region where a face exists is detected in each of the two registration target images input from the cameras CA1 and CA2. For example, the face region can be detected in each of the two images by template matching using a standard face image prepared in advance.
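
  Template matching against a prepared standard face image can be sketched as follows. OpenCV's matchTemplate is used here purely for illustration; the patent does not name a library, and the file names are hypothetical.

```python
import cv2

# Hypothetical input image and standard face template (grayscale).
image = cv2.imread("g1.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("standard_face.png", cv2.IMREAD_GRAYSCALE)

# Normalized correlation score at every placement of the template.
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, top_left = cv2.minMaxLoc(scores)

h, w = template.shape
face_box = (*top_left, w, h)  # (x, y, width, height) of the best-matching region
```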

  Next, in step SP3, the positions of the characteristic parts provided in common in the faces of general persons (common feature parts) are detected from the image of the face region detected in step SP2. For example, the eyes, eyebrows, nose, or mouth can be considered as common feature parts, and in step SP3 the coordinates of the feature points Q1 to Q23 of these parts, as shown in FIG. 10, are calculated.

  The common feature parts can be detected, for example, by template matching using standard templates of the common feature parts. The calculated feature point coordinates are expressed as coordinates on the images G1 and G2 input from the cameras. For example, for the feature point Q20 corresponding to the right end of the mouth in FIG. 10, coordinate values are obtained in each of the two images G1 and G2 as shown in FIG. 11. Specifically, the coordinates (x1, y1) of the feature point Q20 on the image G1 are calculated using the upper left end point of the image G1 as the origin O. Similarly, the coordinates (x2, y2) of the feature point Q20 on the image G2 are calculated.

  In addition, the luminance value of each pixel in the regions whose vertices are the feature points in the input images is acquired as information of those regions (hereinafter also referred to as "texture information"). The texture information of each region is pasted onto the individual model in step SP42 and the like, described later. In the present embodiment, since two images are input, the average luminance value of the pixels belonging to the corresponding region in each image is used as the texture information of that region.

  In the next step SP4 (image normalization step), the image information relating to the registration target person is normalized based on the coordinate values of the feature points detected in step SP3 and the texture information of each region. As shown in FIG. 9, the image normalization process (step SP4) includes a three-dimensional reconstruction process (step SP41), a model fitting process (step SP42), and a correction process (step SP43). Through these steps, the information relating to the registration target person is generated, in a normalized state, as an "individual model" including both the three-dimensional information and the two-dimensional information relating to the registration target person. Hereinafter, each process (steps SP41 to SP43) is described in detail.

First, in the three-dimensional reconstruction process (step SP41), the three-dimensional coordinates M(j) (j = 1, ..., m) of each feature point Qj are calculated based on the two-dimensional coordinates Ui(j) of each feature point Qj detected in step SP3 in each image Gi (i = 1, ..., N) and on the camera parameters Bi of the cameras that captured the images Gi. Note that m indicates the number of feature points.

Hereinafter, the calculation of the three-dimensional coordinate M (j) will be specifically described.

The relationship between the three-dimensional coordinates M (j) of each feature point Qj, the two-dimensional coordinates Ui (j) of each feature point Qj, and the camera parameter Bi is expressed as in Expression (1).
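
  The expression itself is not reproduced in this text. From the definitions given here and in the concrete examples below, a plausible reconstruction is the standard projective relationship in homogeneous coordinates:

    μi · (xi, yi, 1)^T = Bi · (x, y, z, 1)^T    (i = 1, ..., N)

  where (xi, yi) is the two-dimensional coordinate Ui(j) of the feature point Qj on the image Gi and (x, y, z) is its three-dimensional coordinate M(j).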

  Note that μi is a parameter that indicates a change in the scale. The camera parameter matrix Bi is a value unique to each camera obtained by photographing an object with known three-dimensional coordinates in advance, and is represented by a 3 × 4 projection matrix.

For example, as a specific example of calculating three-dimensional coordinates using the above Expression (1), consider the case where the three-dimensional coordinates M(20) of the feature point Q20 are calculated, with reference to FIG. Expression (2) shows the relationship between the coordinates (x1, y1) of the feature point Q20 on the image G1 and the three-dimensional coordinates (x, y, z) of the feature point Q20 represented in three-dimensional space. Similarly, Expression (3) shows the relationship between the coordinates (x2, y2) of the feature point Q20 on the image G2 and the three-dimensional coordinates (x, y, z).

The unknowns in the above Expressions (2) and (3) are five in total: the two parameters μ1 and μ2 and the three component values x, y, z of the three-dimensional coordinate M(20). On the other hand, the number of equations included in Expressions (2) and (3) is six, so each unknown, that is, the three-dimensional coordinates (x, y, z) of the feature point Q20, can be calculated. In the same way, the three-dimensional coordinates M(j) can be acquired for all feature points Qj.
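
  As an illustration of this computation, the sketch below eliminates the scale parameters μ1 and μ2 from the projective relationship and solves the resulting stacked linear system (a minimal sketch; the matrices B1, B2 and the pixel coordinates are placeholders, and a least-squares solution via SVD is used here in place of solving the five-unknown system directly):

    import numpy as np

    def triangulate(points_2d, projections):
        """Recover M = (x, y, z) from its projections in N >= 2 cameras.

        points_2d   -- list of (xi, yi) pixel coordinates, one per camera
        projections -- list of 3x4 camera parameter matrices Bi
        Eliminating mu_i from mu_i*(xi, yi, 1)^T = Bi*(x, y, z, 1)^T leaves
        two linear equations per camera in the homogeneous point (x, y, z, 1).
        """
        rows = []
        for (xi, yi), B in zip(points_2d, projections):
            B = np.asarray(B, dtype=float)
            rows.append(xi * B[2] - B[0])  # xi*(row3 . X) = row1 . X
            rows.append(yi * B[2] - B[1])  # yi*(row3 . X) = row2 . X
        A = np.asarray(rows)               # shape (2N, 4)
        _, _, vt = np.linalg.svd(A)        # null vector = smallest singular vector
        X = vt[-1]
        return X[:3] / X[3]                # de-homogenize to (x, y, z)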

In the next step SP42, model fitting is performed. This "model fitting" is a process of modifying a "standard model of a face", a general (standard) three-dimensional face model prepared in advance, using the information relating to the registration target person, thereby generating an "individual model" in which the input information relating to the face of that person is reflected. Specifically, a process of changing the three-dimensional information of the standard model using the calculated three-dimensional coordinates M(j), and a process of changing the two-dimensional information of the standard model using the texture information, are performed.

  FIG. 12 shows a standard model of a three-dimensional face.

  As shown in FIG. 12, the standard model of the face is composed of vertex data and polygon data, and is stored in the three-dimensional model database (DB) 52 (FIG. 4) in the storage unit 3. The vertex data is a set of coordinates of the vertices (hereinafter also referred to as "standard control points") COj of the feature parts in the standard model, and corresponds one-to-one with the three-dimensional coordinates of the feature points Qj calculated in step SP41. The polygon data is obtained by dividing the surface of the standard model into minute polygons (for example, triangles) and expressing those polygons as numerical data. FIG. 10 illustrates a case in which the vertices of each polygon also include intermediate points other than the standard control points COj; the coordinates of such intermediate points can be obtained by appropriate interpolation using the coordinate values of the standard control points COj.

  Here, the model fitting that constructs the individual model from the standard model will be described in detail.

First, the vertices (standard control points COj) of the feature parts of the standard model are moved to the corresponding feature points calculated in step SP41. Specifically, the three-dimensional coordinate value of each feature point Qj is substituted as the three-dimensional coordinate value of the corresponding standard control point COj to obtain the moved standard control points (hereinafter also referred to as "individual control points") Cj. As a result, the standard model can be transformed into an individual model represented by the three-dimensional coordinates M(j). Note that the coordinates of intermediate points other than the individual control points Cj in the individual model can be obtained by an appropriate interpolation method using the coordinate values of the individual control points Cj.

  Further, the scale, inclination, and position of the individual model relative to the standard model, used in step SP43 described later, can be obtained from the amount of movement of each vertex due to this deformation. Specifically, the positional change of the individual model with respect to the standard model can be obtained from the amount of deviation between a predetermined reference position in the standard model and the corresponding reference position in the individual model after deformation. Likewise, the changes in inclination and scale of the individual model with respect to the standard model can be determined from the deviation between a reference vector connecting two predetermined points in the standard model and the reference vector connecting the corresponding two points in the individual model after deformation. For example, the position of the individual model can be obtained by comparing the coordinates of the midpoint QM between the right-eye feature point Q1 and the left-eye feature point Q2 with the coordinates of the corresponding point in the standard model, and the scale and inclination of the individual model can be calculated by comparing the midpoint QM with other feature points.

  The following Expression (4) shows the conversion parameter (vector) vt representing the correspondence between the standard model and the individual model. As shown in Expression (4), the conversion parameter (vector) vt includes a scale conversion index sz for the two models, conversion parameters (tx, ty, tz) indicating translational displacement in the three orthogonal axis directions, and conversion parameters (φ, θ, ψ) representing rotational displacement (inclination).
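
  The expression itself is not reproduced in this text; given the components just listed, a plausible reconstruction is simply the vector collecting the seven parameters (the ordering of the components is an assumption):

    vt = (sz, tx, ty, tz, φ, θ, ψ)^T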

In the above manner, the process of changing the three-dimensional information of the standard model using the three-dimensional coordinates M(j) relating to the target person is performed.

  Thereafter, processing for changing the two-dimensional information of the standard model using the texture information is also performed. Specifically, the texture information of each area in the input images G1 and G2 is pasted (mapped) to the corresponding area (polygon) on the three-dimensional individual model. Each region (polygon) to which texture information is pasted on a three-dimensional model (individual model or the like) is also referred to as a “patch”.

  As described above, the model fitting process (step SP42) is performed.

  In the next step SP43, the individual model is corrected based on the standard model. In this step, alignment correction and shading correction are executed. The alignment correction is a correction process related to three-dimensional information, and the shading correction is a correction process related to two-dimensional information.

  The alignment (face orientation) correction is performed based on the scale, inclination, and position of the individual model relative to the standard model obtained in step SP42. More specifically, by converting the coordinates of the individual model using the conversion parameter vt (see Expression (4)), which indicates the relationship between the standard model and the individual model with the standard model as the reference, a three-dimensional face model having the same posture as the standard model can be created. That is, this alignment correction properly normalizes the three-dimensional information relating to the target person.

  The shading correction is a process of correcting the luminance values (texture information (see FIG. 13)) of the pixels in the patches mapped onto the individual model. This correction compensates for differences in texture information between the standard model and the individual model that arise when the positional relationship between the light source and the photographed person differs between the shooting of the person used to create the standard model and the shooting of the target person of the individual model (here, the registration target person). That is, the shading correction appropriately normalizes the texture information, which is part of the two-dimensional information relating to the target person.

  As described above, in the image normalization step (step SP4), the information relating to the registration target person is generated, in a normalized state, as an individual model including both the three-dimensional information and the two-dimensional information relating to the registration target person.

  In the next step SP5 (FIG. 8), three-dimensional shape information (three-dimensional information) and texture information (two-dimensional information) are extracted as information representing the features of the individual models, that is, the features of the common feature parts.

As the three-dimensional information, the three-dimensional coordinate vectors of the m individual control points Cj in the individual model are extracted. Specifically, as shown in Expression (5), a vector hS whose elements are the three-dimensional coordinates (Xj, Yj, Zj) of the m individual control points Cj (j = 1, ..., m) is extracted as the three-dimensional information (three-dimensional shape information).

  As the two-dimensional information, texture (luminance) information included in the characteristic parts of the face, which is important information for personal authentication, that is, in the patches or groups of patches (local regions) in the vicinity of the individual control points, is extracted (hereinafter also referred to as "local two-dimensional information").

The local two-dimensional information is, for example, the luminance information of each pixel included in a local region such as the region composed of the patches having the individual control points C20, C22, and C23 and the individual control points C21, C22, and C23 as vertices, respectively (the group GR in FIG. 14 including the patch R2), or a region composed of only one patch. The local two-dimensional information h(k) (k = 1, ..., L; L is the number of local regions) is expressed in vector format as shown in Expression (6), where n is the number of pixels in the local region and BR1, ..., BRn are the luminance values of those pixels. In addition, information collecting the local two-dimensional information h(k) over the L local regions is also referred to as comprehensive two-dimensional information.
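
  The expression itself is not reproduced in this text; from the description above, the reconstruction is the vector of the n pixel luminance values:

    h(k) = (BR1, BR2, ..., BRn)^T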

  As described above, in step SP5, three-dimensional shape information (three-dimensional information) and texture information (two-dimensional information) are recognized as information representing individual model features, that is, common part feature amounts.

  The extracted information is used in the authentication described later. In the authentication, the information given by Expression (6) could be used as it is, but in that case, when the number of pixels in a local region is large, the amount of calculation required for authentication becomes very large. Therefore, in this embodiment, in order to reduce the amount of calculation and perform authentication efficiently, the information given by Expression (6) is further compressed, and the authentication operation is performed using the compressed information.

  For this reason, in the next step SP6, the information compression processing described below for converting the information extracted in step SP5 into a state suitable for authentication is performed.

The information compression processing is performed by the same method for each of the three-dimensional shape information hS and the local two-dimensional information h(k). Here, the case where the information compression processing is performed on the local two-dimensional information h(k) will be described in detail.

The local two-dimensional information h(k) can be expressed in basis-decomposed form, as shown in Expression (7), using the average information (vector) have(k) of the local region acquired in advance from a plurality of sample face images and a matrix P(k) (described below) expressed by a set of eigenvectors calculated in advance by KL expansion of the plurality of sample face images. As a result, a local two-dimensional face information amount (vector) c(k) is acquired as compressed information of the local two-dimensional information h(k).
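
  The expression itself is not reproduced in this text; from the description here and the concrete example of Expression (8) below, a plausible reconstruction of the basis decomposition is:

    h(k) = have(k) + P(k) · c(k)

  so that, with orthonormal eigenvectors, the compressed information is obtained as c(k) = P(k)^T (h(k) − have(k)).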

As described above, the matrix P(k) in Expression (7) is calculated from a plurality of sample face images. Specifically, the matrix P(k) is obtained as a set of several eigenvectors (basis vectors) having large eigenvalues among the eigenvectors obtained by KL expansion of the plurality of sample face images. These basis vectors are stored in the partial basis vector DB 61. By expressing a face image using eigenvectors that capture its larger features as basis vectors, the features of the face image can be expressed efficiently.

For example, consider the case where the local two-dimensional information h(GR) of the local region consisting of the group GR shown in FIG. 14 is expressed in basis-decomposed form. Assuming that the set P of eigenvectors of this local region is expressed by three eigenvectors P1, P2, and P3 as P = (P1, P2, P3), the local two-dimensional information h(GR) can be expressed as Expression (8) using the average information have(GR) and the eigenvectors P1, P2, and P3. The average information have(GR) is a vector obtained by averaging, element by element, the local two-dimensional information (vectors) of various sample face images. As the plurality of sample face images, a number of standard face images having moderate variation may be used.

Further, the above Expression (8) indicates that the original local two-dimensional information can be reproduced from the face information amount c(GR) = (c1, c2, c3)^T. That is, the face information amount c(GR) can be said to be information obtained by compressing the local two-dimensional information h(GR) of the local region consisting of the group GR.
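
  The compression and reconstruction just described can be sketched as follows (a minimal sketch assuming orthonormal eigenvector columns; the variable names are illustrative only):

    import numpy as np

    def compress_local_info(h, h_ave, P):
        """Project local 2-D information onto the retained KL (eigen) basis.

        h     -- (n,)  luminance vector of the local region (Expression (6))
        h_ave -- (n,)  average vector over the sample face images
        P     -- (n, f) columns are the f retained eigenvectors
        Returns c such that h is approximately h_ave + P @ c.
        """
        return P.T @ (h - h_ave)  # valid when the eigenvectors are orthonormal

    def reconstruct_local_info(c, h_ave, P):
        """Reproduce the local 2-D information from the face information amount c."""
        return h_ave + P @ c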

The local two-dimensional face information amount c(GR) acquired as described above may be used for the authentication operation as it is, but in this embodiment, further information compression is performed. Specifically, a process of converting the feature space represented by the local two-dimensional face information amount c(GR) into a subspace that increases the separation between individuals is further performed. More specifically, consider a transformation matrix A that reduces the local two-dimensional face information amount c(GR) of vector size f to a local two-dimensional feature amount d(GR) of vector size g, as expressed in Expression (9). Thereby, the feature space represented by the local two-dimensional face information amount c(GR) is converted into the subspace represented by the local two-dimensional feature amount d(GR), and the differences in information between individuals become pronounced.

  Here, the transformation matrix A is a matrix of size f × g. Using a multiple discriminant analysis (MDA) method, the transformation matrix A can be determined by selecting, from the feature space, g principal components having a large ratio (F ratio) of inter-class variance to intra-class variance.
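
  As an illustration of this step, the sketch below computes a per-component F ratio over labelled sample vectors and builds A as an f × g selection matrix keeping the g most discriminative components (a simplification: a full multiple discriminant analysis would also rotate the space, and the embodiment does not fix these details):

    import numpy as np

    def build_transformation_A(C, labels, g):
        """Select the g components with the largest F ratio.

        C      -- (num_samples, f) array of face information amounts c
        labels -- (num_samples,)   identity of the person behind each sample
        Returns A (f x g) so that d = A.T @ c reduces c as in Expression (9).
        """
        labels = np.asarray(labels)
        grand_mean = C.mean(axis=0)
        between = np.zeros(C.shape[1])
        within = np.zeros(C.shape[1])
        for cl in np.unique(labels):
            Ccl = C[labels == cl]
            between += len(Ccl) * (Ccl.mean(axis=0) - grand_mean) ** 2
            within += ((Ccl - Ccl.mean(axis=0)) ** 2).sum(axis=0)
        f_ratio = between / np.maximum(within, 1e-12)  # inter/intra-class variance
        keep = np.argsort(f_ratio)[::-1][:g]           # top-g component indices
        A = np.zeros((C.shape[1], g))
        A[keep, np.arange(g)] = 1.0                    # selection matrix
        return A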

Further, by executing the same processing as the information compression processing performed on the local two-dimensional information h(GR) described above for all other local regions, the local two-dimensional common part feature amounts d(k) can be obtained. Likewise, by applying the same method to the three-dimensional shape information hS, a three-dimensional common part feature amount dS can be acquired.

The common part feature amount d, obtained by combining the three-dimensional common part feature amount dS and the local two-dimensional common part feature amounts d(k) acquired through the above step SP6, can be expressed in vector format as Expression (10).

  Through the steps SP1 to SP6 described above, the common part feature amount d of the registration target person is recognized from the input face image of the registration target person.

  Then, in the next steps SP7 to SP9, the individual part feature amount is recognized.

  In step SP7 (individual feature part detection step), individual feature parts are detected based on the image normalized in step SP4. In this individual feature part detection step (step SP7), steps SP71 to SP74 shown in FIG. 15 are performed. Hereinafter, each process will be described with reference to an example in which a mole located diagonally below the right eye, as shown in FIG. 13, is recognized as an individual feature part.

  In step SP71, candidates to be adopted as individual feature parts (hereinafter also referred to as "individual feature part candidates") are recognized. As a method for recognizing individual feature part candidates, for example, a method of analyzing an image (differential image) obtained by differentiating the texture information normalized in step SP4 (hereinafter also referred to as the "normalized image") can be employed.

  For example, as shown in FIG. 13, by differentiating a texture image capturing a face with a mole under the right eye, a differential image as shown in FIG. 16 is obtained, in which the presence of the mole HK under the right eye is emphasized. Next, as shown in FIG. 17, the differential image is divided into a grid so as to partition it into a plurality of square partial images CU of identical shape and size, and the total pixel value is calculated for each partial image CU. The total pixel value (pixel total value) of each partial image is then compared with information obtained by statistically processing differential images of a large number of persons divided in advance according to the same rule (hereinafter also referred to as "statistical differential image information"), whereby the individual feature part candidates are recognized.

  Here, a reference value relating to a predetermined parameter characterizing each partial image (here, the pixel total value of the differential image) is obtained from a large number of reference images capturing a large number of persons. Among the partial images CU relating to the registration target image, a partial image whose deviation from this reference value exceeds a predetermined criterion is determined to be a partial image capturing an individual feature part (hereinafter also referred to as an "individual part-containing image").

  For example, suppose that, for a certain partial image CU, statistical information relating to a large number of reference images as shown in FIG. 18(a) is obtained. If, as shown in FIG. 18(b), a pixel total value Vp exceeding a predetermined deviation value Vth with respect to the reference value Vave is calculated for a partial image constituting the differential image relating to the registration target image, that partial image is recognized as an individual part-containing image capturing an individual feature part, that is, as an individual feature part candidate. By discriminating partial images whose deviation from the reference value obtained from a large number of reference images exceeds a predetermined criterion as individual part-containing images in this way, individual feature part candidates, and hence individual feature parts, having reasonably large differences can be easily detected.

  Further, when a plurality of individual part-containing images are adjacent to each other, the adjacent individual part-containing images are recognized as one region corresponding to a single individual feature part (hereinafter also referred to as an "individual part region"). By recognizing individual part regions in this way, a group of individual feature part candidates can be detected more reliably.
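
  The candidate detection of step SP71 can be sketched as follows for one registration image (a minimal sketch; the gradient-magnitude derivative, the 16 × 16 cell size, and the array names for the statistical differential image information are assumptions consistent with the description above):

    import numpy as np

    def detect_individual_part_candidates(texture, ref_mean, ref_std,
                                          cell=16, k_sigma=3.0):
        """Flag grid cells of the differential image that deviate strongly
        from the statistics gathered over many reference faces.

        texture  -- (H, W) normalized texture image, H and W multiples of cell
        ref_mean -- (H/cell, W/cell) mean per-cell pixel-sum over references
        ref_std  -- (H/cell, W/cell) standard deviation of that pixel-sum
        Returns a boolean map of individual part-containing cells.
        """
        gy, gx = np.gradient(texture.astype(float))
        diff = np.hypot(gx, gy)            # differential image (moles stand out)
        H, W = diff.shape
        sums = diff.reshape(H // cell, cell, W // cell, cell).sum(axis=(1, 3))
        return (sums - ref_mean) > k_sigma * ref_std   # deviation test per cell

  Adjacent flagged cells would then be merged, for example by a connected-component labelling pass, into the individual part regions described above.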

  The statistical differential image information may be stored in advance in the statistical differential image information database (DB) 62. When authenticating objects other than persons, the statistical differential image information may be information obtained by statistically processing differential images relating to a large number of reference images capturing a large number of similar objects.

  The partial images obtained by the grid division may be, for example, partial images each consisting of a fixed pixel region such as 16 × 16 pixels. A method of recognizing pixels containing an individual feature part by comparing pixel values pixel by pixel is also conceivable, but adopting the method of comparing partial images of a certain size shortens the calculation time for detecting individual feature part candidates and hence individual feature parts.

  Moreover, if the comparison were made pixel by pixel, even subtle differences between individuals in the common feature parts would be recognized as pixels containing individual feature parts, and individual feature parts could not be detected separately from the common feature parts. In contrast, when the method of comparing partial images of a certain size is adopted as described above, individual feature parts, which represent more significant differences, can be detected more reliably while being distinguished from the common feature parts.

  In step SP72 of FIG. 15, it is determined whether more than a predetermined number (for example, 10) of individual part regions were recognized in step SP71. If the number of individual part regions exceeds the predetermined number, the process proceeds to step SP73; if not, it proceeds to step SP74.

  In step SP73, a predetermined number (for example, 10) of individual feature part candidates (that is, individual part regions) are selectively adopted from among the candidates recognized in step SP71, in descending order of feature saliency, and the remaining individual feature part candidates are not adopted. As a method of judging this order of saliency, for example, for each individual part region, the degree of deviation from a reference value of a predetermined variation factor characterizing each individual part region (here, the pixel total value of the differential image), obtained from a large number of reference images capturing a large number of persons, can be used. Here, the reference value relating to the predetermined variation factor can be obtained from the statistical differential image information described above, the degree of deviation for each individual part region can be calculated from the pixel total value of the differential image, and candidates can be judged more salient in descending order of this deviation.

  In step SP74, if the process has proceeded from step SP72, the individual feature part candidates recognized in step SP71, which number at most the predetermined number, are determined as individual feature parts as they are; if the process has proceeded from step SP73, the predetermined number of individual feature part candidates selectively adopted in step SP73 are determined as individual feature parts.

  Note that the number of individual feature parts is narrowed down to the predetermined number or fewer because restricting attention to individual feature parts with reasonably large features shortens the calculation time required for the authentication and other processing described later.

  In this way, in step SP7 (individual feature part detection step), individual feature part candidates, and hence individual feature parts, can be recognized by comparing the partial images CU obtained by differentiating a large number of reference images, each capturing one of a large number of persons, and the registration target image, and dividing them according to the same rule.

  In step SP8 (individual feature part position calculating step) in FIG. 8, the three-dimensional position of the individual feature part detected in step SP7 is calculated. In the position calculation step (step SP8) of the individual feature part, the steps SP81 and SP82 shown in FIG. 19 are performed. Hereinafter, each step will be described.

  In step SP81, a three-dimensional absolute position is calculated for each individual feature portion detected in step SP7. Here, the position of the center of gravity of each individual characteristic part is calculated in a coordinate system similar to the coordinate system in the three-dimensional reconstruction process in step SP41.

  In step SP82, for each individual feature part detected in step SP7, a three-dimensional position (three-dimensional relative position) expressed by relative values based on the common feature parts is calculated. Here, for each individual feature part, the relative position is calculated based on three or more common feature parts, chosen from among the normalized common feature parts in order of proximity to the individual feature part. For example, as shown in FIG. 20, for the individual feature part (here, a mole) PP, relative positions are calculated with respect to the feature points Q3, Q17, and Q20 belonging respectively to the three common feature parts closest to the individual feature part PP (here, the right eye, nose, and mouth).

More specifically, for example, as represented in Expression (11), the three-dimensional relative position Pi^p of the i-th individual feature part is calculated as the sum (that is, a linear sum) of the three-dimensional positions Pj^c of the common feature parts, each multiplied by the coefficient relating the j-th common feature part to the i-th individual feature part.
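
  The expression itself is not reproduced in this text; from the description, a plausible reconstruction is the linear sum

    Pi^p = Σj aij · Pj^c

  where aij (a symbol introduced here for illustration) is the coefficient relating the j-th common feature part to the i-th individual feature part.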

  Here, when expressing the position of the individual feature part as a relative position with respect to three or more common feature parts, it may be expressed based on feature points of common feature parts whose three-dimensional positions are indicated by vectors in at least three mutually different directions.

  The position of the individual feature part is expressed by relative numerical values based on the positions of the common feature parts because, even when the individual feature part moves with respect to the face as a whole as the facial expression changes slightly, the relative positional relationship of the individual feature part to the plurality of common feature parts is comparatively unlikely to shift. Therefore, by expressing the position of the individual feature part as relative values based on three or more common feature parts as described above, the authentication described later can be performed with high accuracy regardless of the influence of such changes. In particular, by expressing the position of the individual feature part as a three-dimensional relative position with respect to the positions of the common feature parts, stable authentication can be performed regardless of the orientation of the authentication object, such as a face.

  In step SP9 (individual feature part feature amount recognition step) in FIG. 8, for each individual feature part, the sum of the luminance values (pixel values) of the individual part region in the registration target image, the sum of the pixel values of the individual part region in the differential image relating to the registration target image, the size of the individual part region, the three-dimensional relative position from the common feature parts, and the like are recognized (hereinafter also referred to as the "individual part feature amount"). Here, for example, the individual part feature amount is recognized in the form of a vector whose components are these values.

  In step SP10 (storage processing step), the common part feature amount recognized in step SP5 and subjected to the information compression process in step SP6, and the individual part feature amount recognized in step SP9, are stored in the feature amount DB 71, completing the registration process for the feature amounts of one person's face. In this step SP10, for the individual part region corresponding to each individual feature part, the degree of deviation of the pixel total value of the differential image from the reference value obtained from the statistical differential image information is also stored in the feature amount DB 71 together with the feature amounts relating to the registration target person.

  In the above, the registration process related to one person to be registered has been described. However, by performing the above registration process a plurality of times, it is possible to construct a feature value DB 71 that accumulates feature values of many human faces. The facial feature amount of the person to be registered accumulated in the feature amount DB 71 in this manner is a target to be compared with the facial feature amount of the authentication subject person in an authentication process described later. That is, the person who was the registration target person in the registration process becomes the comparison target person in the authentication process.

<Operation in authentication mode>
Hereinafter, each function related to the authentication mode of the controller 10 described above will be described in more detail.

  FIG. 21 is a flowchart showing the operation of the controller 10 when the authentication mode is set, and FIG. 22 is a detailed flowchart of the detailed matching process (step ST8). FIG. 23 is a detailed flowchart of the one-way similarity calculation step (step ST82) related to the individual feature part. FIG. 24 is a detailed flowchart of the backward similarity calculation step (step ST83) related to the individual feature part. FIG. 25 is a detailed flowchart of the high-speed verification processing step (step ST9). FIG. 26 is a detailed flowchart of the one-to-n identification processing step (step ST10).

  In the following, a case will be described in which face authentication is actually performed, using a person photographed by the cameras CA1 and CA2 as the authentication target person. Here, as in the registration mode, a case is exemplified in which three-dimensional shape information, measured according to the principle of triangulation using the images obtained by the cameras CA1 and CA2, is mainly used as the three-dimensional information, together with texture information as the two-dimensional information.

  As shown in FIG. 21, in the processes from step ST1 to step ST6, the controller 10 acquires the common part feature amount relating to the face of the authentication target person based on an image obtained by photographing that face (hereinafter also referred to as the "authentication target image"); in the processes from step ST7 to step ST10, it performs the actual authentication corresponding to the set authentication mode; and through the process of step ST11, the authentication process relating to the face of the authentication target person is realized.

  First, in step ST1, as in step SP1 of FIG. 8, a face image (authentication target image) of the person photographed by the cameras CA1 and CA2 (the authentication target person) is input to the controller 10 via a communication line.

  In step ST2, a region where a face exists is detected in each of the two authentication target images input from the cameras CA1 and CA2 by the same processing as in step SP2 of FIG. 8.

  In step ST3, the positions of the common feature parts common to the faces of people in general are detected from the face area images detected in step ST2 by the same processing as step SP3 of FIG. 8.

  In step ST4 (image normalization step), the same processing as step SP4 of FIG. 8, that is, the image normalization process of FIG. 9, is performed. That is, the information relating to the authentication target person is generated, in a normalized state, as an individual model including both the three-dimensional information and the two-dimensional information relating to the authentication target person.

  In step ST5, three-dimensional shape information (three-dimensional information) and texture information (two-dimensional information) are recognized as information representing the features of the individual model, that is, the features of the common feature parts, by the same processing as step SP5 of FIG. 8. That is, the common part feature amount d of the authentication target person is recognized.

  In step ST6, information compression processing for converting the information recognized in step ST5 into a state suitable for authentication is performed by the same processing as step SP6 of FIG. 8.

  As described above, through the steps ST1 to ST6, the common part feature amount d of the authentication target person is recognized from the input face image of the authentication target person.

  Then, in the next steps ST7 to ST10, the actual authentication corresponding to each authentication mode is performed.

  In step ST7, it is determined which of the three modes of the detailed verification mode, the high-speed verification mode, and the one-to-n identification mode is set as the currently set authentication mode. Here, when the controller 10 is set to the detailed collation mode by the user's operation on the operation unit 6, the process proceeds to step ST8, and when the controller 10 is set to the high speed collation mode, the process proceeds to step ST9. If the controller 10 is set to the 1: n identification mode, the process proceeds to step ST10.

  First, the details of the detailed matching process (step ST8) performed from step ST7 to step ST8 will be described.

  In step ST8 (detailed collation processing step), [A] a step of calculating a similarity using the feature amounts of the common feature parts (hereinafter also referred to as the "common part similarity") (common part similarity calculation step), [B] a step of calculating a one-way similarity that serves as a measure for determining whether the features of the authentication target person match those of the comparison target person, on the basis of the individual part feature amounts of the authentication target person (hereinafter also referred to as the "one-way similarity relating to the individual feature parts") (one-way similarity calculation step), and [C] a step of calculating a reverse direction similarity that serves as a measure for the same determination, on the basis of the individual part feature amounts of the comparison target person already registered in the feature amount DB 71 (hereinafter also referred to as the "reverse direction similarity relating to the individual feature parts") (reverse direction similarity calculation step) are performed sequentially. Then, whether the authentication target person and the comparison target person are the same person is verified by a comprehensive determination using the three similarities. In the detailed collation process (step ST8), steps ST81 to ST84 shown in FIG. 22 are performed. Here, the description assumes that one comparison target person has been selectively designated from among the many registered persons by the user's operation of the operation unit 6.

  In step ST81, the similarity between the common part feature amount (comparison feature amount) d(Ad) relating to the comparison target person registered in advance in the feature amount DB 71 and the common part feature amount d(Bd) relating to the authentication target person calculated through steps ST1 to ST6 is evaluated (similarity calculation).

Specifically, a similarity Rec relating to the common feature parts between the authentication target person (authentication object) and the comparison target person (comparison object) (also referred to as the "common part similarity") is calculated, and this is used in the comprehensive determination process (step ST84) described later. The common part similarity Rec is calculated by combining the three-dimensional similarity ReS, computed from the three-dimensional face feature amount dS, and the local two-dimensional similarities Re(k), computed from the local two-dimensional face feature amounts d(k), using appropriate weighting factors WT and WS (hereinafter also simply referred to as "weighting factors") that define the weights of the three-dimensional similarity ReS and the local two-dimensional similarities Re(k) (see Expression (12)).

More specifically, the similarity calculation is executed between the common part feature amounts (comparison feature amounts) (dSM and d(k)M) relating to the registered comparison target person and the common part feature amounts (dSI and d(k)I) relating to the authentication target person, whereby the three-dimensional similarity ReS and the local two-dimensional similarities Re(k) are calculated.

The three-dimensional similarity ReS between the authentication target person and the comparison target person is acquired by obtaining the Euclidean distance ReS between the corresponding vectors, as shown in Expression (13).

Further, the local two-dimensional similarities Re(k) are obtained by computing the Euclidean distance Re(k) between the feature amount vectors of the corresponding local regions, as shown in Expression (14).

Then, as shown in Expression (15), the three-dimensional similarity ReS and the local two-dimensional similarities Re(k) are combined using the predetermined weighting factors WT and WS, whereby the common part similarity Rec of the features of the authentication target person (authentication object) and the comparison target person (comparison object) can be calculated.
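
  The computations of Expressions (13) to (15) can be sketched as follows (a minimal sketch; the exact form of the weighted combination in Expression (15) is an assumption, since only the roles of the weighting factors WT and WS are stated here):

    import numpy as np

    def common_part_similarity(dS_M, dS_I, d2_M, d2_I, WS, WT):
        """Combine 3-D and local 2-D feature distances into Rec.

        dS_M, dS_I -- 3-D common part feature vectors (comparison / authentication)
        d2_M, d2_I -- lists of local 2-D feature vectors, one pair per region k
        Smaller values mean greater similarity, as the distances are Euclidean.
        """
        re_S = np.linalg.norm(dS_M - dS_I)                          # Expression (13)
        re_k = [np.linalg.norm(m - i) for m, i in zip(d2_M, d2_I)]  # Expression (14)
        return WS * re_S + WT * sum(re_k)            # Expression (15), assumed form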

Next, in step ST82 (one-way similarity calculation step), a measure (one-way similarity) for determining whether the features of the authentication target person and those of the comparison target person match is calculated on the basis of the individual part feature amounts relating to the authentication target person. In this one-way similarity calculation process (step ST82), the one-way similarity Repa relating to the individual feature parts is calculated by performing steps ST821 to ST825 shown in FIG. 23. Hereinafter, each process (steps ST821 to ST825) will be described in detail.

  In step ST821, individual feature parts are detected from the authentication target image normalized in step ST4 (the normalized authentication target image). Here, for example, the individual feature parts are detected by processing similar to that in step SP7 of FIG. 8.

  In step ST822, the positions of the individual feature parts detected in step ST821 are calculated. Here, for example, the three-dimensional relative position of each individual feature part with respect to the common feature parts is calculated by processing similar to that in step SP8 of FIG. 8.

  In step ST823, the feature amounts (individual part feature amounts) relating to the individual feature parts detected in step ST821 are recognized. Here, by processing similar to that in step SP9 of FIG. 8, for each individual feature part, the sum of the luminance values (pixel values) of the individual part region in the authentication target image, the sum of the pixel values of the individual part region in the differential image relating to the authentication target image, the size of the individual part region, the three-dimensional relative position from the common feature parts, and the like are recognized as the individual part feature amounts.

  In step ST824, the position of each individual feature part in the authentication target image (the three-dimensional relative position calculated in step ST822) is determined as the position (comparison position) at which the feature amounts of the individual feature parts are compared with the comparison target image. Each comparison position is determined in units of the partial images (for example, 16 × 16 pixels) described above. Because the comparison position thus has a certain area, even if the position of an individual feature part shifts slightly between the comparison target image and an authentication target image capturing the same person, due to differences in facial expression or the like, the individual feature part does not easily deviate from the comparison position.

  In step ST825, for the comparison positions determined in step ST824, the one-way similarity is calculated using the individual part feature amounts relating to the comparison target person registered in advance in the feature amount DB 71 and the individual part feature amounts relating to the authentication target person recognized in step ST823.

Here, for example, as in step ST81 of FIG. 22, the Euclidean distance between the vectors representing the feature amounts of the individual feature parts at each comparison position is taken as the similarity (the similarity of the j-th individual part is denoted Rej^pa); then each comparison position is weighted according to the saliency of its features, and the one-way similarity Repa over all the comparison positions is calculated, as shown in Expression (16).
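
  The expression itself is not reproduced in this text; from the description, a plausible reconstruction is the weighted sum

    Repa = Σj Wj^pa · Rej^pa

  taken over the comparison positions j.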

Wj^pa in Expression (16) is a weighting coefficient corresponding to the saliency of the features of the j-th individual feature part. For example, for the individual part region corresponding to the j-th individual feature part, the degree of deviation from the reference value of a predetermined variation factor characterizing each individual part region (here, the pixel total value of the differential image), obtained from a large number of reference images capturing a large number of persons, can be adopted as the weighting coefficient Wj^pa. Here, the reference value relating to the predetermined variation factor can be obtained from the statistical differential image information described above, and the degree of deviation for each individual part region can be calculated from the pixel total value of the differential image.

  In this way, by weighting the feature amount of each individual part according to its deviation from the reference value obtained from reference images capturing a large number of faces, authentication that emphasizes individual parts with salient features becomes possible. As a result, the authentication accuracy can be further improved.

In the next step ST83 (reverse direction similarity calculation step) of FIG. 22, a measure (reverse direction similarity) for determining whether the features of the authentication target person and those of the comparison target person match is calculated on the basis of the individual part feature amounts relating to the comparison target person. In this reverse direction similarity calculation process (step ST83), the reverse direction similarity Repb relating to the individual feature parts is calculated by performing steps ST831 to ST834 shown in FIG. 24. Hereinafter, each process (steps ST831 to ST834) will be described in detail.

  In step ST831, information indicating the three-dimensional relative position of the individual feature portion related to the comparison subject (that is, the comparison target image) is acquired from the feature amount DB 71.

  In step ST832, the position of each individual feature part in the comparison target image (the three-dimensional relative position acquired from the feature amount DB 71 in step ST831) is determined as the position (comparison position) at which the feature amounts of the individual feature parts are compared in the authentication target image. The comparison position is likewise determined in units of the partial images (for example, 16 × 16 pixels) described above.

  In step ST833, the feature amounts at the comparison positions determined in step ST832 are recognized for the authentication target image. Here, for each comparison position, the sum of the luminance values (pixel values) of the individual part region in the authentication target image, the sum of the pixel values of the individual part region in the differential image relating to the authentication target image, and the like are recognized as individual part feature amounts.

  In step ST834, for the comparison positions determined in step ST832, the similarity is calculated using the individual part feature amounts relating to the comparison target person registered in advance in the feature amount DB 71 and the feature amounts at the comparison positions of the authentication target person recognized in step ST833.

Here, for example, in substantially the same manner as in step ST825 of FIG. 23, the Euclidean distance between the vectors representing the feature amounts of the individual feature parts at each comparison position is taken as the similarity (the similarity of the j-th individual part is denoted Rej^pb); then each comparison position is weighted according to the saliency of its features, and the reverse direction similarity Repb over all the comparison positions is calculated, as shown in Expression (17).

Wj^pb in Expression (17) is a weighting coefficient corresponding to the saliency of the features of the j-th individual feature part. For example, for the individual part region corresponding to the j-th individual feature part of the comparison target person, the degree of deviation from the reference value of a predetermined variation factor characterizing each individual part region (here, the pixel total value of the differential image), obtained from a large number of reference images capturing a large number of persons, can be used as the weighting coefficient Wj^pb. Here, the reference value relating to the predetermined variation factor can be obtained from the statistical differential image information described above, and the degree of deviation for each individual part region can be calculated from the pixel total value of the differential image. As described above, this deviation is registered in the feature amount DB 71 when the feature amounts relating to the comparison target person are registered.

In step ST84, the total similarity Re1 is calculated from the three similarities determined in steps ST81 to ST83 (the common part similarity Rec, the one-way similarity Repa, and the reverse direction similarity Repb), and the authentication determination is performed based on this total similarity Re1. For example, as shown in Expression (18), the total similarity Re1 can be a value calculated from the three similarities Rec, Repa, and Repb using a weighting coefficient Wc relating to the common feature parts and a weighting coefficient Wp relating to the individual feature parts.
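
  The expression itself is not reproduced in this text; from the description, a plausible reconstruction, assuming the single coefficient Wp weights both individual-part similarities, is:

    Re1 = Wc · Rec + Wp · (Repa + Repb)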

  The identity between the authentication target person and the comparison target person is determined by comparing the total similarity Re1 with a certain threshold THa; specifically, when the total similarity Re1 is smaller than the threshold THa, the authentication target person is determined to be the same person as the comparison target person.

  Next, the content of the high-speed collation process (step ST9) performed from step ST7 to step ST9 will be described.

  In step ST9 (high-speed collation processing step), [A] the step of calculating the common part similarity and [C] the step of calculating the reverse direction similarity relating to the individual feature parts (reverse direction similarity calculation step) are performed sequentially. Then, whether the authentication target person and the comparison target person are the same person is verified by a comprehensive determination using the two similarities. In this high-speed collation processing step (step ST9), steps ST91 to ST93 shown in FIG. 25 are performed. Here, the description assumes that one comparison target person has been selectively designated from among the many registered persons by the user's operation of the operation unit 6.

  In step ST91, the common part similarity is calculated as in step ST81 of FIG.

  In step ST92, similar to step ST83 in FIG. 22, the reverse direction similarity related to the individual feature part is calculated.

In step ST93, the total similarity Re2 is calculated from the two similarities obtained in steps ST91 and ST92 (the common part similarity Rec and the reverse direction similarity Repb), and the authentication determination is made based on this total similarity Re2. For example, as shown in Expression (19), the total similarity Re2 can be a value calculated from the two similarities Rec and Repb using the weighting coefficient Wc relating to the common feature parts and the weighting coefficient Wp relating to the individual feature parts.

  Specifically, the identity between the authentication subject and the comparison subject is determined by comparing the total similarity Re2 with a certain threshold value THb. More specifically, when the similarity Re2 is smaller than a certain threshold value THb, it is determined that the person to be authenticated is the same person as the person to be compared.

  Next, the contents of the one-to-n identification process (step ST10) performed from step ST7 to step ST10 will be described.

  In step ST10 (one-to-n identification processing step), it is determined which, if any, of the n registered persons is the same person as the one authentication target person. In this one-to-n identification step (step ST10), all n registered persons are designated in turn as the comparison target person, and [A] the step of calculating the common part similarity and [B] the step of calculating the one-way similarity relating to the individual feature parts (one-way similarity calculation step) are performed sequentially. Then, by a comprehensive determination of the two similarities for the n persons, it is determined which of the n registered persons is the same person as the one authentication target person. In this one-to-n identification step (step ST10), steps ST101 to ST108 shown in FIG. 26 are performed.

  In step ST101, a count N for determining which registered user is designated as a comparison target is set to one.

  In step ST102, according to the setting result in step ST101, the Nth registered person among n registered persons is designated as a comparison target person.

  In step ST103, the common part similarity is calculated for the comparison target person and the authentication target person specified in step ST102, as in step ST81 of FIG.

  In step ST104, the one-way similarity related to the individual feature part is calculated for the comparison target person and the authentication target person specified in step ST102, as in step ST82 of FIG.

In step ST105, the total similarity Re3 is calculated using the common part similarity calculated in step ST103 and the one-way similarity calculated in step ST104. For example, as shown in Expression (20), the total similarity Re3 can be a value calculated from the two similarities Rec and Repa using the weighting coefficient Wc relating to the common feature parts and the weighting coefficient Wp relating to the individual feature parts.

  In step ST106, it is determined whether or not all n registered users have been designated as comparison subjects (specifically, whether or not the count N has reached n). If the count N has not reached n, the count N is incremented by 1 in step ST107, the process returns to step ST102, and the processes of steps ST102 to ST105 are performed for the next person to be compared. On the other hand, if the count N has reached n, the process proceeds to step ST108.

  In step ST108, it is determined which, if any, of the n registered persons is the same person as the one authentication target person. Specifically, for example, among the total similarities Re3 calculated in step ST105 for the n comparison target persons, a combination of the authentication target person and a comparison target person for which a total similarity Re3 smaller than a certain threshold THc was calculated is determined to be a combination of the same person.
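
  The control flow of steps ST101 to ST108 can be sketched as follows (a minimal sketch; the two similarity computations are passed in as callables standing for the processing described above, and the form of Expression (20) is assumed):

    def one_to_n_identify(auth, registered, sim_common, sim_one_way, Wc, Wp, THc):
        """Steps ST101-ST108: decide which registered persons match.

        auth        -- feature amounts of the authentication target person
        registered  -- list of per-person feature records (feature amount DB 71)
        sim_common  -- callable returning Rec for a pair (step ST103)
        sim_one_way -- callable returning Repa for a pair (step ST104)
        Returns the indices of registered persons judged to be the same person.
        """
        matches = []
        for N, reg in enumerate(registered):   # ST101/ST102 with the ST106/ST107 loop
            re_c = sim_common(auth, reg)       # ST103
            re_pa = sim_one_way(auth, reg)     # ST104
            re3 = Wc * re_c + Wp * re_pa       # ST105, Expression (20) (assumed form)
            if re3 < THc:                      # ST108: smaller distance = same person
                matches.append(N)
        return matches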

  Then, in step ST11 of FIG. 21, the determination result of whichever of steps ST8 to ST10 was performed is output as appropriate to the desired function.

  As described above, according to the authentication system 1 of the above embodiment, individual feature parts, which differ from the common feature parts commonly possessed by objects of the same type (here, persons) and are possessed individually, are detected in at least one of the authentication object (here, the authentication target person) and the comparison object (here, the comparison target person). Then, the feature amount of each individual feature part is recognized and used for authentication. With such a configuration, authentication can use feature parts that differ from individual to individual, so high-accuracy authentication can be performed when verifying the identity of objects including persons; that is, the recognition rate improves. Furthermore, by performing authentication using partial images relating to the common feature parts and the individual feature parts, even minute features that are not reflected in recognition using the entire face image can be used for authentication, and as a result the recognition rate improves.

  Also, authentication is performed using the feature amount of the individual feature portion and the feature amount of the common feature portion. By adopting such a configuration, it is possible to perform authentication using both a rough feature and a fine feature, thereby enabling highly accurate authentication.

  Further, the numerical value indicating the relative position of the individual feature part with respect to the position of the common feature part is included in the feature amount of the individual feature part for authentication. By adopting such a configuration, authentication including position information, which is a large element that characterizes an individual characteristic part, can be performed more reliably and accurately.

In addition, in response to operation of the operation unit 6 by the user, the controller switches between a high-speed collation mode, in which the authentication object (here, the authentication target person) is authenticated by recognizing the feature amounts of the individual feature parts of an image capturing the authentication object, and a one-to-n identification mode, in which the authentication object is authenticated against the feature amounts of the individual feature parts of images capturing the registration objects (here, registration target persons and/or comparison target persons) included in a plurality of registered images. With this configuration, high-accuracy authentication can be performed according to the number of comparison objects, whether there are many comparison objects or only one.

<Modification>
Although an embodiment of the present invention has been described above, the present invention is not limited to the content described above.

For example, in the recognition of the feature amount of an individual feature part in the above embodiment, both the sum of the luminance values (pixel values) of the individual part region corresponding to each individual feature part and the sum of the pixel values of that region in the differential image were included in the individual part feature amount. However, the present invention is not limited to this; for example, the individual part feature amount may be calculated using at least one of the luminance values of the individual part region and the pixel values of the differential image related to the partial image. Similar effects can be obtained.
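As a rough illustration of this variant, the sketch below computes an individual part feature amount from the luminance values, the differential-image values, or both; the simple summation is an assumption, since the patent does not reproduce the formula here:

```python
import numpy as np

def individual_part_feature(region, diff_region=None):
    """Feature amount of one individual part region.

    region: luminance (pixel) values of the individual part region.
    diff_region: the same region taken from the differential image (optional).
    """
    feature = [np.sum(region)]               # sum of luminance values
    if diff_region is not None:
        feature.append(np.sum(diff_region))  # sum of differential pixel values
    return np.array(feature)
```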

In the one-to-n identification process of the above embodiment, the total similarity Re3 was calculated for all n comparison target persons, and a combination of the authentication target person and a comparison target person for which a total similarity Re3 smaller than the threshold THc was calculated was determined to be a combination of the same person. However, the present invention is not limited to this. For example, each time the total similarity Re3 for a comparison target person is calculated, it may be determined whether that Re3 is smaller than the threshold THc; once a total similarity Re3 smaller than THc is found, the combination of the authentication target person and that comparison target person is determined to be a combination of the same person, and the similarity need not be calculated for the remaining registered persons. With such a configuration, the computation time of the one-to-n identification process can be shortened.

Further, the total similarity Re3 may be calculated for all n comparison target persons, and among the combinations of the authentication target person and comparison target persons for which a total similarity Re3 smaller than the threshold THc was calculated, the combination with the smallest total similarity Re3 may be determined to be a combination of the same person. With such a configuration, the authentication accuracy of the one-to-n identification is further improved.
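The two variants just described (stopping at the first comparison target below the threshold, and selecting the smallest total similarity among all candidates) can be sketched as follows, again with illustrative names only:

```python
# Variant 1: stop as soon as some Re3 falls below THc; the remaining
# registered persons are skipped, shortening the computation time.
def identify_first_match(auth, registered, overall_similarity, th_c=0.35):
    for n, reg in enumerate(registered):
        if overall_similarity(auth, reg) < th_c:
            return n
    return None

# Variant 2: compute Re3 for all candidates and, among those below THc,
# adopt the one with the smallest Re3 (higher identification accuracy).
def identify_best_match(auth, registered, overall_similarity, th_c=0.35):
    scored = [(overall_similarity(auth, reg), n) for n, reg in enumerate(registered)]
    below = [s for s in scored if s[0] < th_c]
    return min(below)[1] if below else None
```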

Further, in the detailed collation mode of the above embodiment, the comparison target person was determined to be the same person as the authentication target person when the total similarity Re1 obtained from the three similarities Re_c, Re_pa, and Re_pb was smaller than a predetermined threshold THa. However, the present invention is not limited to this. For example, it may be determined individually whether each of the similarities Re_c, Re_pa, and Re_pb is smaller than a predetermined threshold, and the authentication target person may be determined to be the same person as the comparison target person when Re_c, Re_pa, and Re_pb are smaller than the respective thresholds TH_c, TH_pa, and TH_pb. Specifically, condition (21) may be required to hold.

In the high-speed collation mode, the authentication target person was determined to be the same person as the comparison target person when the total similarity Re2 obtained from the two similarities Re_c and Re_pb was smaller than a certain threshold THb. However, the present invention is not limited to this. For example, it may be determined individually whether each of the similarities Re_c and Re_pb is smaller than a predetermined threshold, and the authentication target person may be determined to be the same person as the comparison target person when Re_c and Re_pb are smaller than the respective thresholds TH_c and TH_pb. Specifically, condition (22) may be required to hold.

Further, in the one-to-n identification mode, a combination of the authentication target person and a comparison target person was determined to be a combination of the same person when the total similarity Re3 obtained from the two similarities Re_c and Re_pa was smaller than a certain threshold THc. However, the present invention is not limited to this. For example, it may be determined individually whether each of the similarities Re_c and Re_pa is smaller than a predetermined threshold, and the authentication target person may be determined to be the same person as the comparison target person when Re_c and Re_pa are smaller than the respective thresholds TH_c and TH_pa. Specifically, condition (23) may be required to hold.
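Equations (21) to (23) themselves are not reproduced in this text. Read as per-similarity threshold conditions, they would take the following form, offered as an assumption consistent with the three descriptions above:

$$
\begin{aligned}
(21)\quad & Re_c < TH_c \;\wedge\; Re_{pa} < TH_{pa} \;\wedge\; Re_{pb} < TH_{pb} \\
(22)\quad & Re_c < TH_c \;\wedge\; Re_{pb} < TH_{pb} \\
(23)\quad & Re_c < TH_c \;\wedge\; Re_{pa} < TH_{pa}
\end{aligned}
$$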

As described above, the authentication object and the comparison object may be determined to be the same when both the determination using the common part feature amount and the determination using the individual part feature amount satisfy a predetermined criterion (being smaller than a threshold) indicating that the features of the objects, including persons, match. With such a configuration, authentication follows a stricter standard, enabling higher accuracy.

Further, in the detailed collation mode according to the modification, the authentication object and the comparison object are determined to be the same person when a predetermined criterion (the thresholds TH_pa and TH_pb) indicating that the features of the objects match is satisfied in both the determination using the one-way similarity Re_pa obtained from the feature amounts of the individual feature parts related to the authentication target person (one-way determination) and the determination using the reverse-direction similarity Re_pb obtained from the feature amounts of the individual feature parts related to the comparison target person (reverse-direction determination). For this reason, authentication with higher accuracy is possible.

Note that the mode switching unit 17 is configured to selectively switch the controller 10 to any one of four modes, namely a registration mode, a detailed collation mode, a high-speed collation mode, and a one-to-n identification mode, in response to various operations of the operation unit 6 by the user. The mode switching function thus switches between a plurality of modes including the one-to-n identification mode, which performs the one-way determination, and the high-speed collation mode, which performs the reverse-direction determination.

In the above embodiment, the individual feature parts were detected simply using the pixel values of the differential image. However, the present invention is not limited to this; the individual feature parts may be detected from the authentication target image by another method such as the so-called subspace method. As this subspace method, the technique in the well-known literature (Masashi Nishiyama, Osamu Yamaguchi, Kazuhiro Fukui, "Gesture recognition using constrained mutual subspace method," Proceedings of the 10th Symposium on Image Sensing SSII04, pp. 439-444, 2004, etc.) can be employed.

Hereinafter, the process of detecting the individual feature parts from the registration target image and/or the authentication target image using the subspace method is briefly described, taking the process of detecting them from the authentication target image as an example.

First, a large number of differential images are generated by differentiating a large number of images, each capturing one of a large number of similar objects (for example, persons). A partial space (that is, a common partial space) is then determined from the pixels constituting these differential images (specifically, from parameters based on position information, pixel values, and the like). Next, a differential image of the authentication target image is generated by normalizing and differentiating the authentication target image, and the pixels constituting this differential image (again, parameters based on position information, pixel values, and the like) are projected onto the partial space. An image region formed by the pixel group whose projection region (hereinafter also referred to as the "pixel projection region") is not included in the common partial space is recognized as an individual part region corresponding to an individual feature part; that is, it is recognized and registered as an individual feature part candidate.
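The patent gives no implementation of this step. As a rough sketch of the idea of estimating a common subspace from many reference differential images and flagging the pixels it explains poorly, the code below uses plain PCA in place of the constrained mutual subspace method of the cited literature; every name and value is illustrative:

```python
import numpy as np

def fit_common_subspace(diff_images, n_components=20):
    """Estimate the common partial space from reference differential images."""
    X = np.stack([im.ravel() for im in diff_images])   # (n_images, n_pixels)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]                      # subspace basis

def individual_part_mask(diff_image, mean, basis, residual_threshold=0.5):
    """Mark pixels whose response is not well explained by the common subspace."""
    x = diff_image.ravel() - mean
    proj = basis.T @ (basis @ x)                        # projection onto subspace
    residual = np.abs(x - proj)                         # distance from subspace
    return (residual > residual_threshold).reshape(diff_image.shape)
```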

At this time, when more than a predetermined number (for example, 10) of individual part regions are recognized, the deviation angle between the space onto which each individual part region is projected and the common partial space is calculated. The individual part regions related to the predetermined number of pixel projection regions with the largest deviation angles are then selectively adopted, while the remaining individual part regions are not adopted. When the number of recognized individual part regions does not exceed the predetermined number, all recognized individual part regions are adopted. In this way, at most the predetermined number of individual part regions are detected as image regions corresponding to the individual feature parts.
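The selection rule in the preceding paragraph amounts to a top-K cut on the deviation angle; a sketch, with illustrative names:

```python
def select_individual_regions(regions, k=10):
    """regions: list of (region, deviation_angle) pairs; keep at most k."""
    if len(regions) <= k:
        return [r for r, _ in regions]        # adopt all recognized regions
    ranked = sorted(regions, key=lambda p: p[1], reverse=True)
    return [r for r, _ in ranked[:k]]         # adopt the k largest angles
```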

In this way, when the individual feature parts are detected for at least one of the authentication object and the comparison object using the subspace method, which detects them by comparing entire images, the individual feature parts can be detected more stably and reliably. In particular, with the method of the above embodiment, which divides the differential image into a grid and evaluates each cell, an individual feature part lying on a boundary line of the grid is difficult to detect correctly; the subspace method, by contrast, evaluates the entire image pixel by pixel, so the individual feature parts can be detected more reliably and stably.

In addition, as described above, when pixel groups corresponding to more than the predetermined number of individual feature parts are found, selectively adopting only the predetermined number of pixel groups by the above method allows authentication to be narrowed down to the individual feature parts having reasonably large features. As a result, the computation time required for authentication can be shortened.

Further, in the above embodiment, when calculating the similarity for each individual feature part, the degree of deviation from a predetermined reference value obtained from statistical differential-image information for the individual part region corresponding to that feature part was adopted as a weighting coefficient according to the saliency of the feature. In this modification, the deviation angle between the partial space related to the individual part region and the common partial space, which can be computed easily in the course of detecting the individual feature parts by the subspace method, may instead be adopted as the weighting coefficient. That is, for each individual feature part, the feature amount is weighted according to the deviation angle of its pixel projection region from the common partial space, and authentication of the authentication object is performed accordingly. Even with such a configuration, as in the above embodiment, authentication emphasizes the individual feature parts that capture larger features, so the authentication accuracy can be further improved.
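Assuming the individual-part similarity is a weighted combination over the detected parts (the patent does not reproduce the formula), the substitution described above amounts to using the deviation angle as the weight:

$$ Re_{p} = \frac{\sum_i \theta_i \, d_i}{\sum_i \theta_i} $$

where d_i is the dissimilarity measured for the i-th individual feature part and \theta_i the deviation angle of its pixel projection region from the common partial space.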

In the above embodiment, a single authentication was performed using both the feature amounts of the common feature parts and those of the individual feature parts. However, the present invention is not limited to this; authentication using the feature amounts of the individual feature parts may be performed as appropriate, for example according to the required authentication accuracy. This modification is described in more detail below.

  FIG. 27 is a configuration diagram illustrating an authentication system 1A according to a modification.

The mechanical configuration of the authentication system 1A differs from that of the authentication system 1 in that the single media drive 4 is replaced by two media drives 4a and 4b. Memory cards 8a and 8b, which are separate storage media, can be attached to the media drives 4a and 4b, respectively; that is, the memory cards 8a and 8b can be received separately.

As for the functional configuration, whereas the authentication system 1 photographs the authentication object (for example, the authentication target person) and recognizes the feature amount of each part at the time of authentication, the controller 10A recognizes the feature amounts of the authentication object from the memory cards 8a and 8b, in which those feature amounts are stored in advance, when the cards are attached to the media drives 4a and 4b.

Hereinafter, the operation of the authentication system 1A according to the modification is described for the registration mode and for the authentication mode in which authentication is performed. When the registration mode is set, the controller 10A functions as an information storage system that stores the feature amounts of the authentication object in the storage media; when the authentication mode is set, it functions as an authentication execution system that performs authentication of the authentication object using the feature amounts stored in the memory cards 8a and 8b.

The flowchart showing the operation of the controller 10A in the registration mode is the same as the flowchart shown in FIG. 8. In this modification, however, in the storage process of step SP10, the feature amounts of the common feature parts are stored in the memory card 8a attached to the media drive 4a, and the feature amounts of the individual feature parts are stored in the memory card 8b attached to the media drive 4b.

  FIG. 28 is a flowchart illustrating the operation of the controller 10A when setting the authentication mode.

In step Step1, it is determined whether the feature amounts of the common feature parts have been input. The determination in step Step1 is repeated until the memory card 8a storing the feature amounts of the common feature parts of the authentication object is loaded into the media drive 4a and read into the controller 10A; when those feature amounts have been read in, the process proceeds to step Step2.

In step Step2, the similarity for the common feature parts is calculated using the feature amounts of the common feature parts of the comparison object registered in advance in the HDD 3a and the like, and the feature amounts of the common feature parts of the authentication object input in step Step1. The similarity may be calculated by the same method as in the above embodiment.

In step Step3, it is determined whether there is a request for authentication using the individual feature parts. A mode that requires authentication using the individual feature parts is conceivable when high-accuracy authentication is needed, for example when information with a high security level is used. If there is such a request, the process proceeds to step Step4; if not, the process proceeds to step Step6.

In step Step4, it is determined whether the feature amounts of the individual feature parts have been input. The determination in step Step4 is repeated until the memory card 8b storing the feature amounts of the individual feature parts of the authentication object is loaded into the media drive 4b and read into the controller 10A; when those feature amounts have been read in, the process proceeds to step Step5.

In step Step5, the similarity for the individual feature parts is calculated using the feature amounts of the individual feature parts of the comparison object registered in advance in the HDD 3a and the like, and the feature amounts of the individual feature parts of the authentication object input in step Step4. The similarity may be calculated by the same method as in the above embodiment.

In step Step6, it is determined whether the authentication target person and the comparison target person are the same person. More specifically, when the process has come from step Step3, the two are determined to be the same person if the similarity for the common feature parts calculated in step Step2 is smaller than a predetermined threshold. When the process has come from step Step5, the two are determined to be the same person if, for example, the similarity for the common feature parts calculated in step Step2 and the similarity for the individual feature parts calculated in step Step5 are each smaller than their respective predetermined thresholds. Alternatively, the two may be determined to be the same person if a total similarity calculated from the similarity for the common feature parts of step Step2 and the similarity for the individual feature parts of step Step5 is smaller than a predetermined threshold.
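A minimal sketch of the staged decision in steps Step3 to Step6, assuming each similarity is a distance-like score (smaller means more alike); the threshold names and values are illustrative:

```python
TH_COMMON, TH_INDIVIDUAL = 0.3, 0.4  # illustrative thresholds

def authenticate(common_sim, need_high_accuracy, read_individual_sim):
    """Decide identity; read_individual_sim() reads the card 8b features
    and returns the individual-part similarity (steps Step4-Step5)."""
    if not need_high_accuracy:                 # Step3 -> Step6: common parts only
        return common_sim < TH_COMMON
    individual_sim = read_individual_sim()
    return (common_sim < TH_COMMON and         # both criteria must hold
            individual_sim < TH_INDIVIDUAL)
```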

Then, in step Step7 of FIG. 28, the determination result of step Step6 is output as appropriate for the desired function.

By adopting such a configuration, authentication using the feature amounts of the individual feature parts, which yields high accuracy, is performed only to the degree that the required authentication accuracy demands. When the required accuracy is low, the time needed for authentication can be shortened; when it is high, the time increases slightly, but the authentication accuracy can be raised.

In the above modification, the function of the information storage system and the function of the authentication execution system are provided in the single controller 10A. However, the present invention is not limited to this; the two functions may be provided separately in separate devices.

In addition, the function of recognizing the feature amounts of the common feature parts of the authentication target person and storing them in the memory card 8a, and the function of recognizing the feature amounts of the individual feature parts and storing them in the memory card 8b, may be provided separately in separate devices. With such a configuration, a system can be built in which a highly versatile device recognizes the feature amounts of the common feature parts and stores them in a storage medium, while a special-purpose device recognizes the feature amounts of the individual feature parts and stores them in a storage medium.

Further, in the above modification, the feature amounts of the common feature parts and those of the individual feature parts of the authentication target person are stored in separate storage media, but a configuration in which only the feature amounts of the individual feature parts are stored in a storage medium is also conceivable.

Furthermore, in the above modification, authentication using the feature amounts of the common feature parts and authentication using those of the individual feature parts are performed by one apparatus, but the present invention is not limited to this. A device that performs authentication using the feature amounts of the common feature parts and a device that performs authentication using those of the individual feature parts may be provided separately. With such a configuration, when, for example, two gates with different security levels are provided, various modes become possible, such as performing authentication using the common feature parts at the first gate and authentication using the individual feature parts at the second gate.

As described above, in the authentication system 1A according to the modification, the feature amounts of the common feature parts and those of the individual feature parts of the authentication object are recognized and stored in different storage media. Authentication using the feature amounts stored in one storage medium and authentication using those stored in the other can then be executed separately. By adopting such a configuration, authentication matching the required authentication accuracy can be executed.

In the above embodiment, the registration object and the authentication object are photographed with two cameras, and the three-dimensional positions of the parts constituting each photographed object are detected and calculated. However, the registration object and the authentication object may instead be photographed with one camera, and the two-dimensional positions of their constituent parts detected and calculated; it is still possible to detect the individual feature parts, recognize their feature amounts, and determine whether the authentication object and the comparison object are the same.

However, the method of photographing the registration object and the authentication object from two or more directions and detecting and calculating the three-dimensional positions of their constituent parts is robust to the orientation of the authentication object, such as the direction a face is pointing, and can therefore realize highly accurate authentication stably.

In the above embodiment, an example of face authentication has been described, but the present invention is not limited to this. For example, the authentication target may be a part of a person other than the face, such as a palm. Animals other than humans, such as dogs and cats, may also be targeted, and any object other than a living organism, such as a building, may be subject to authentication.

In the above embodiment, the three-dimensional shape information of an object such as a face is acquired using a plurality of images input from a plurality of cameras, but the present invention is not limited to this. Specifically, using a three-dimensional shape measuring device composed of a laser beam emitting unit L1 and a camera LCA as shown in FIG. 29, the reflected light of the laser emitted by the laser beam emitting unit L1 may be measured by the camera LCA to acquire the three-dimensional shape information of the face of the authentication target person and/or the registration target person. However, with the method of acquiring three-dimensional shape information using an input device having two cameras as in the above embodiment, the information can be acquired with a relatively simple configuration compared to an input device using laser light.

FIG. 1 is a block diagram showing an authentication system according to an embodiment of the present invention.
FIG. 2 shows the structural outline of the controller.
FIG. 3 is a block diagram showing the various functions provided in the controller.
FIG. 4 is a block diagram showing the detailed functional structure of the image normalization unit.
FIG. 5 is a block diagram showing the detailed functional structure of the feature recognition unit.
FIG. 6 is a block diagram showing the detailed functional structure of the registration unit.
FIG. 7 is a block diagram showing the detailed functional structure of the authentication unit.
FIG. 8 is a flowchart showing the operation of the controller in the registration mode.
FIG. 9 is a detailed flowchart of the image normalization process.
FIG. 10 shows the feature points of characteristic parts in a face image.
FIG. 11 shows how three-dimensional coordinates are calculated from feature points in two-dimensional images.
FIG. 12 shows a standard model of a three-dimensional face.
FIG. 13 illustrates texture information.
FIG. 14 shows the individual control points of characteristic parts after normalization.
FIG. 15 is a detailed flowchart of the individual feature part detection process.
FIG. 16 illustrates the differential image of the normalized texture image.
FIG. 17 illustrates the method of comparing differential images for each partial image.
FIG. 18 illustrates statistical information on the total pixel value for each differential image.
FIG. 19 is a detailed flowchart of the individual feature part position calculation process.
FIG. 20 illustrates the method of calculating the relative position of an individual feature part.
FIG. 21 is a flowchart showing the operation of the controller in the authentication mode.
FIG. 22 is a detailed flowchart of the detailed collation process.
FIG. 23 is a detailed flowchart of the one-way similarity calculation for individual feature parts.
FIG. 24 is a detailed flowchart of the reverse-direction similarity calculation for individual feature parts.
FIG. 25 is a detailed flowchart of the high-speed collation process.
FIG. 26 is a detailed flowchart of the one-to-n identification process.
FIG. 27 is a block diagram showing an authentication system according to a modification.
FIG. 28 is a flowchart showing the authentication operation according to the modification.
FIG. 29 shows a three-dimensional shape measuring device having a laser beam emitting unit and a camera.

Explanation of symbols

1, 1A Authentication system
3 Storage unit
4, 4a, 4b Media drive
6 Operation unit
8, 8a, 8b Memory card
10, 10A Controller
11 Image input unit
12 Face area search unit
13 Face part detection unit
14 Image normalization unit
15 Feature recognition unit
17 Mode switching unit
18 Registration unit
19 Authentication unit
71 Feature quantity database
151 Common part feature quantity recognition unit
153 Individual part region recognition unit
154 Individual part region selection and adoption unit
155 Individual part feature quantity recognition unit
181 Storage control unit
191 Common feature quantity reading unit
192 Common part comparison unit
193 Individual part comparison position determination unit
194 Individual part comparison unit
195 Comprehensive judgment unit
HK Mole
PP Individual feature part

Claims (20)

  1. An authentication system for authenticating whether an authentication object is the same as a comparison object,
    Individual part detecting means for detecting an individual part, which is provided individually and differs from a common part commonly provided to a large number of similar objects, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object;
    Individual feature amount recognizing means for recognizing the individual portion feature amount relating to the individual portion;
    Authentication means for performing authentication related to the authentication object using the individual part feature amount;
    wherein:
    the individual part detecting means detects the individual parts by comparing a large number of reference images, each capturing one of a large number of similar objects, with at least one of the authentication target image and the comparison target image, for each partial image generated by dividing each image according to the same rule; and
    the individual part detecting means has:
    means for calculating, with respect to at least one of the authentication target image and the comparison target image, for each partial image, a deviation of a predetermined parameter characterizing the partial image from a reference value of the predetermined parameter obtained from the multiple reference images;
    determination means for determining, when the deviation exceeds a predetermined reference, the partial image related to the deviation as an individual part-containing partial image capturing the individual part;
    area recognition means for recognizing, when a plurality of the individual part-containing partial images are adjacent to each other, the plurality of adjacent individual part-containing partial images as an individual part region corresponding to one individual part;
    means for calculating, when more than a predetermined number of individual part regions are recognized by the area recognition means, for each individual part region, a factor deviation of a predetermined variation factor characterizing the individual part region from a reference value of the predetermined variation factor obtained from the multiple reference images; and
    means for selectively adopting, from among the plurality of individual part regions, the predetermined number of individual part regions in descending order of the factor deviation, while not adopting the remaining individual part regions,
    the authentication system being characterized by having the above means.
  2. An authentication system for authenticating whether an authentication object is the same as a comparison object, comprising:
    Individual part detecting means for detecting an individual part, which is provided individually and differs from a common part commonly provided to a large number of similar objects, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object;
    Individual feature amount recognizing means for recognizing the individual portion feature amount relating to the individual portion;
    Authentication means for performing authentication related to the authentication object using the individual part feature amount;
    wherein:
    the individual part detecting means detects the individual parts by comparing a large number of reference images, each capturing one of a large number of similar objects, with at least one of the authentication target image and the comparison target image, for each partial image generated by dividing each image according to the same rule;
    the individual part detecting means has:
    means for calculating, with respect to at least one of the authentication target image and the comparison target image, for each partial image, a deviation of a predetermined parameter characterizing the partial image from a reference value of the predetermined parameter obtained from the multiple reference images;
    determination means for determining, when the deviation exceeds a predetermined reference, the partial image related to the deviation as an individual part-containing partial image capturing the individual part; and
    deviation calculating means for calculating, for each individual part, a factor deviation of a predetermined variation factor characterizing a region corresponding to the individual part from a reference value of the predetermined variation factor obtained from the multiple reference images; and
    the authentication means performs authentication related to the authentication object by weighting each individual part feature amount in accordance with the factor deviation related to that individual part.
  3. An authentication system for authenticating whether an authentication object is the same as a comparison object, comprising:
    Individual part detecting means for detecting an individual part, which is provided individually and differs from a common part commonly provided to a large number of similar objects, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object;
    Individual feature amount recognizing means for recognizing the individual portion feature amount relating to the individual portion;
    Authentication means for performing authentication related to the authentication object using the individual part feature amount;
    wherein:
    the individual part detecting means detects the individual parts by using a subspace method; and
    the individual part detecting means has:
    area recognition means for determining a common partial space from a large number of reference images, each capturing one of a large number of similar objects, and for recognizing, when a parameter related to each pixel constituting at least one of the authentication target image and the comparison target image is projected onto the partial space, an image region formed by the pixel group whose pixel projection region is not included in the common partial space as an individual part region corresponding to the individual part,
    the authentication system being characterized by having the above means.
  4. An authentication system for authenticating whether an authentication object is the same as a comparison object, comprising:
    Individual part detecting means for detecting an individual part, which is provided individually and differs from a common part commonly provided to a large number of similar objects, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object;
    Individual feature amount recognizing means for recognizing the individual portion feature amount relating to the individual portion;
    Authentication means for performing authentication related to the authentication object using the individual part feature amount;
    wherein:
    the individual part detecting means has:
    first detection means for detecting the individual part for the authentication target image; and
    second detection means for detecting the individual part for the comparison target image;
    the individual feature amount recognition means has:
    first recognition means for recognizing a first feature amount for each individual part detected by the first detection means; and
    second recognition means for recognizing a second feature amount for each individual part detected by the second detection means; and
    the authentication means determines that the authentication object and the comparison object are the same when a predetermined criterion indicating that the features of the objects match is determined to be satisfied in both a one-direction determination using the first feature amount and a reverse-direction determination using the second feature amount.
  5. An authentication system for authenticating whether an authentication object is the same as a comparison object, comprising:
    Individual part detecting means for detecting an individual part, which is provided individually and differs from a common part commonly provided to a large number of similar objects, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object;
    Individual feature amount recognizing means for recognizing the individual portion feature amount relating to the individual portion;
    Authentication means for performing authentication related to the authentication object using the individual part feature amount;
    wherein:
    the individual part detecting means has:
    first detection means for detecting the individual part for the authentication target image; and
    second detection means for detecting the individual part for the comparison target image;
    the individual feature amount recognition means has:
    first recognition means for recognizing a first feature amount for each individual part detected by the first detection means; and
    second recognition means for recognizing a second feature amount for each individual part detected by the second detection means;
    the authentication means has:
    one-direction determination means for performing a one-direction determination using the first feature amount; and
    reverse-direction determination means for performing a reverse-direction determination using the second feature amount; and
    the authentication system further comprises mode switching means for switching between a first mode for performing the one-direction determination and a second mode for performing the reverse-direction determination.
  6. The authentication system according to any one of claims 1 to 5, further comprising:
    common feature amount recognition means for recognizing, for the authentication target image and the comparison target image, a common part feature amount related to the common part,
    wherein the authentication means performs authentication related to the authentication object using the individual part feature amount and the common part feature amount.
  7. The authentication system according to claim 4 or claim 5, wherein
    the authentication means determines that the authentication object and the comparison object are the same when it is determined that a predetermined criterion indicating that the features of the objects match is satisfied in both a first determination using the individual part feature amount and a second determination using the common part feature amount.
  8. The authentication system according to any one of claims 4 to 6, wherein
    the individual part detecting means detects the individual part by comparing a large number of reference images, each capturing one of a large number of similar objects, with at least one of the authentication target image and the comparison target image, for each partial image generated by dividing each image according to the same rule.
  9. The authentication system according to claim 8, wherein
    the individual part detecting means has:
    means for calculating, with respect to at least one of the authentication target image and the comparison target image, for each partial image, a deviation of a predetermined parameter characterizing the partial image from a reference value of the predetermined parameter obtained from the multiple reference images; and
    determination means for determining, when the deviation exceeds a predetermined reference, the partial image related to the deviation as an individual part-containing partial image capturing the individual part.
  10. The authentication system according to claim 9, wherein
    the individual part detecting means further has:
    area recognition means for recognizing, when a plurality of the individual part-containing partial images are adjacent to each other, the plurality of adjacent individual part-containing partial images as an individual part region corresponding to one individual part.
  11. The authentication system according to claim 3, wherein
    the individual part detecting means further has:
    means for selectively adopting, when more than a predetermined number of individual part regions are recognized by the area recognition means, the individual part regions related to the predetermined number of pixel projection regions in descending order of deviation angle from the common partial space, while not adopting the remaining individual part regions.
  12. The authentication system according to claim 3 or claim 11, wherein
    the authentication means weights, for each individual part, the individual part feature amount according to the deviation angle of the pixel projection region related to the individual part from the common partial space, and performs authentication related to the authentication object.
  13. The authentication system according to any one of claims 1 to 12, further comprising:
    position calculating means for calculating a relative position in which the position of the individual part is expressed by a relative value based on the position of the common part.
  14. The authentication system according to claim 13, wherein
    the relative position is a three-dimensional relative position based on the position of the common part.
  15. The authentication system according to claim 13 or claim 14, wherein
    the individual part feature amount includes information indicating the relative position.
  16. The authentication system according to any one of claims 1 to 15, wherein
    the individual feature amount recognition means includes:
    feature amount calculation means for calculating the individual part feature amount using at least one of a luminance value of the image region corresponding to the individual part and a differential value of the pixel values constituting the image region.
  17. An authentication method in an authentication system for authenticating whether or not an authentication object is the same as a comparison object, comprising the steps of:
    (a) detecting an individual part, which is provided individually and differs from a common part commonly provided to a large number of similar objects, from at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object;
    (b) recognizing the individual part feature amount related to the individual part;
    (c) performing authentication related to the authentication object using the individual part feature amount;
    wherein:
    in step (a), the individual parts are detected by comparing a large number of reference images, each capturing one of a large number of similar objects, with at least one of the authentication target image and the comparison target image, for each partial image generated by dividing each image according to the same rule; and
    step (a) has the steps of:
    (a-1) calculating, with respect to at least one of the authentication target image and the comparison target image, for each partial image, a deviation of a predetermined parameter characterizing the partial image from a reference value of the predetermined parameter obtained from the multiple reference images;
    (a-2) determining, when the deviation exceeds a predetermined reference, the partial image related to the deviation as an individual part-containing partial image capturing the individual part;
    (a-3) recognizing, when a plurality of the individual part-containing partial images are adjacent to each other, the plurality of adjacent individual part-containing partial images as an individual part region corresponding to one individual part;
    (a-4) calculating, when more than a predetermined number of individual part regions are recognized in step (a-3), for each individual part region, a factor deviation of a predetermined variation factor characterizing the individual part region from a reference value of the predetermined variation factor obtained from the multiple reference images; and
    (a-5) selectively adopting, from among the plurality of individual part regions, the predetermined number of individual part regions in descending order of the factor deviation, while not adopting the remaining individual part regions,
    the authentication method being characterized by having the above steps.
  18. A program which, when executed by a computer included in an authentication system, causes the authentication system to function as the authentication system according to any one of claims 1 to 16.
  19. An authentication system for authenticating whether an authentication object is the same as a comparison object, comprising:
    An information storage system for storing the feature quantity of the authentication object in a storage medium;
    An authentication execution system for performing authentication related to the authentication object using the feature quantity of the authentication object stored in the storage medium;
    wherein:
    the information storage system has:
    storage control means for recognizing, from the image capturing the authentication object, the feature amount of the common part provided in common to a large number of similar objects and storing it in a first storage medium, and for recognizing the feature amount of the individual part, which differs from the common part and is provided individually, and storing it in a second storage medium different from the first storage medium;
    the authentication execution system has:
    first receiving means for receiving the first storage medium;
    second receiving means for receiving the second storage medium;
    first authentication means for performing authentication related to the authentication object using the feature amount of the common part stored in the first storage medium received by the first receiving means; and
    second authentication means for performing authentication related to the authentication object using the feature amount of the individual part stored in the second storage medium received by the second receiving means;
    the information storage system detects the individual part by comparing a large number of reference images, each capturing one of a large number of similar objects, with at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object, for each partial image generated by dividing each image according to the same rule; and
    the information storage system has:
    means for calculating, with respect to at least one of the authentication target image and the comparison target image, for each partial image, a deviation of a predetermined parameter characterizing the partial image from a reference value of the predetermined parameter obtained from the multiple reference images;
    determination means for determining, when the deviation exceeds a predetermined reference, the partial image related to the deviation as an individual part-containing partial image capturing the individual part;
    area recognition means for recognizing, when a plurality of the individual part-containing partial images are adjacent to each other, the plurality of adjacent individual part-containing partial images as an individual part region corresponding to one individual part;
    means for calculating, when more than a predetermined number of individual part regions are recognized by the area recognition means, for each individual part region, a factor deviation of a predetermined variation factor characterizing the individual part region from a reference value of the predetermined variation factor obtained from the multiple reference images; and
    means for selectively adopting, from among the plurality of individual part regions, the predetermined number of individual part regions in descending order of the factor deviation, while not adopting the remaining individual part regions,
    the authentication system being characterized by having the above means.
  20. An authentication method in an authentication system for authenticating whether or not an authentication object is the same as a comparison object,
    (i) recognizing, from an image capturing the authentication object, a feature amount of a common part commonly provided to a large number of similar objects and storing it in a first storage medium, and recognizing a feature amount of an individual part, which differs from the common part and is provided individually, and storing it in a second storage medium different from the first storage medium;
    (ii) performing authentication related to the authentication object using the characteristic amount of the common part stored in the first storage medium;
    (iii) performing authentication related to the authentication object using the feature amount of the individual part stored in the second storage medium;
    wherein:
    in step (i), the individual part is detected by comparing a large number of reference images, each capturing one of a large number of similar objects, with at least one of an authentication target image capturing the authentication object and a comparison target image capturing the comparison object, for each partial image generated by dividing each image according to the same rule; and
    step (i) has the steps of:
    (i-1) calculating, with respect to at least one of the authentication target image and the comparison target image, for each partial image, a deviation of a predetermined parameter characterizing the partial image from a reference value of the predetermined parameter obtained from the multiple reference images;
    (i-2) determining, when the deviation exceeds a predetermined reference, the partial image related to the deviation as an individual part-containing partial image capturing the individual part;
    (i-3) recognizing, when a plurality of the individual part-containing partial images are adjacent to each other, the plurality of adjacent individual part-containing partial images as an individual part region corresponding to one individual part;
    (i-4) calculating, when more than a predetermined number of individual part regions are recognized in step (i-3), for each individual part region, a factor deviation of a predetermined variation factor characterizing the individual part region from a reference value of the predetermined variation factor obtained from the multiple reference images; and
    (i-5) selectively adopting, from among the plurality of individual part regions, the predetermined number of individual part regions in descending order of the factor deviation, while not adopting the remaining individual part regions,
    the authentication method being characterized by having the above steps.
JP2006132580A 2006-05-11 2006-05-11 Authentication system, authentication method, and program Expired - Fee Related JP4992289B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006132580A JP4992289B2 (en) 2006-05-11 2006-05-11 Authentication system, authentication method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006132580A JP4992289B2 (en) 2006-05-11 2006-05-11 Authentication system, authentication method, and program

Publications (2)

Publication Number Publication Date
JP2007304857A JP2007304857A (en) 2007-11-22
JP4992289B2 true JP4992289B2 (en) 2012-08-08

Family

ID=38838731

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006132580A Expired - Fee Related JP4992289B2 (en) 2006-05-11 2006-05-11 Authentication system, authentication method, and program

Country Status (1)

Country Link
JP (1) JP4992289B2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4919118B2 (en) * 2008-01-21 2012-04-18 日本電気株式会社 Pattern matching system, pattern matching method, and program for pattern matching
JP5121506B2 (en) * 2008-02-29 2013-01-16 キヤノン株式会社 Image processing apparatus, image processing method, program, and storage medium
JP5228872B2 (en) * 2008-12-16 2013-07-03 富士通株式会社 Biometric authentication apparatus, biometric authentication method, biometric authentication computer program, and computer system
JP2010146502A (en) * 2008-12-22 2010-07-01 Toshiba Corp Authentication processor and authentication processing method
JP2015158848A (en) * 2014-02-25 2015-09-03 株式会社日立製作所 Image retrieval method, server, and image retrieval system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07332951A (en) * 1994-06-13 1995-12-22 Hiyuu Burein:Kk Apparatus and method for inspecting image
JP2005242432A (en) * 2004-02-24 2005-09-08 Nec Soft Ltd Face authentication system and processing method for the system and program therefor
JP4351982B2 (en) * 2004-10-07 2009-10-28 株式会社東芝 Personal authentication method, apparatus and program

Also Published As

Publication number Publication date
JP2007304857A (en) 2007-11-22

Similar Documents

Publication Publication Date Title
Mian et al. An efficient multimodal 2D-3D hybrid approach to automatic face recognition
Breitenstein et al. Real-time face pose estimation from single range images
Kollreider et al. Real-time face detection and motion analysis with application in “liveness” assessment
US7881524B2 (en) Information processing apparatus and information processing method
ES2385041T3 (en) 3D object recognition
US7127087B2 (en) Pose-invariant face recognition system and process
JP4501937B2 (en) Face feature point detection device, feature point detection device
De Marsico et al. Robust face recognition for uncontrolled pose and illumination changes
US9235902B2 (en) Image-based crack quantification
US8811726B2 (en) Method and system for localizing parts of an object in an image for computer vision applications
JP2008310796A (en) Computer implemented method for constructing classifier from training data detecting moving object in test data using classifier
US7853085B2 (en) Viewpoint-invariant detection and identification of a three-dimensional object from two-dimensional imagery
US20070258627A1 (en) Face recognition system and method
Dornaika et al. Fast and reliable active appearance model search for 3-D face tracking
US20110286628A1 (en) Systems and methods for object recognition using a large database
Erdogmus et al. Spoofing face recognition with 3D masks
US20090310828A1 (en) An automated method for human face modeling and relighting with application to face recognition
JP2005149506A (en) Method and apparatus for automatic object recognition/collation
US20050105779A1 (en) Face meta-data creation
JP5845365B2 (en) Improvements in or related to 3D proximity interaction
JP5174045B2 (en) Illumination detection using a classifier chain
US8391590B2 (en) System and method for three-dimensional biometric data feature detection and recognition
JP4946730B2 (en) Face image processing apparatus, face image processing method, and computer program
Wang et al. 3D facial expression recognition based on primitive surface feature distribution
Zhu et al. Multimodal biometric identification system based on finger geometry, knuckle print and palm print

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20090428

RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20090615

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20111226

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120117

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120315

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120410

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120423

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150518

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

LAPS Cancellation because of no payment of annual fees