WO2008056777A1 - Authentication system and authentication method - Google Patents

Authentication system and authentication method (Système d'authentification et procédé d'authentification)

Info

Publication number
WO2008056777A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
face
local
shape
feature
Prior art date
Application number
PCT/JP2007/071807
Other languages
English (en)
Japanese (ja)
Inventor
Hiroshi Yamato
Yuichi Kawakami
Original Assignee
Konica Minolta Holdings, Inc.
Priority date
Filing date
Publication date
Application filed by Konica Minolta Holdings, Inc. filed Critical Konica Minolta Holdings, Inc.
Priority to JP2008543143A priority Critical patent/JP4780198B2/ja
Publication of WO2008056777A1 publication Critical patent/WO2008056777A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present invention relates to an authentication system and an authentication method for performing face authentication.
  • As a biometric authentication (biometrics) technology, face authentication technology, which performs authentication based on the face, is known. Face authentication is a non-contact authentication method, and demand for it is very high in offices because of its convenience.
  • Three-dimensional (3D) face authentication, which performs authentication using 3D shape information of the face, is also known. However, conventional 3D face authentication uses the entire face, which gives rise to the partial occlusion problem: the loss of data at concealed locations during the authentication process lowers the authentication accuracy, and this problem cannot be solved. In addition, because dense 3D information is used, there is the problem that the authentication process takes time.
  • Patent Document 1 discloses the following technique.
  • the reference point of the face is extracted by examining the change in curvature of the face surface.
  • These face reference points include points where the absolute value of the curvature is maximal (for example, the tip of the nose) and points where the absolute value of the curvature is maximal near the center of the side of the face (for example, the ear hole points).
  • Next, the face orientation (inclination) is corrected by calculating a reference posture based on these face reference points.
  • Then, the corrected 3D shape data of the face is approximated by planes of arbitrary size, and the unit normal vector and area of each plane are obtained.
  • Finally, the normal distribution in which each unit normal vector is weighted by its plane area is used as a feature quantity, and authentication is performed.
  • However, the technique disclosed in Patent Document 1 is premised on using the entire three-dimensional shape, so-called global patch information. Because the reference direction of the face must be determined, this reference direction cannot be determined when the face is partially hidden due to the above-mentioned changes in posture or the like, and the correction process cannot be executed.
  • Patent Document 2 discloses the following technique. First, color information is used to extract 3D shape information and color information of only the face portion of a person, and face data is obtained by combining the 3D shape information and color information. Next, the center of the entire three-dimensional shape of this face data (collation face data) and dictionary face data prepared in advance is obtained, and translated so that the positions of these centroids coincide. At the same time, rotated face data is obtained by slightly rotating around the matched center of gravity. Then, a minimum error is obtained by calculating an error between the rotated face data and the dictionary face data, and determination (authentication) is performed based on the minimum error.
  • Patent Document 1: Japanese Patent Laid-Open No. 5-215531
  • Patent Document 2: Japanese Patent Laid-Open No. 9-259271
  • The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an authentication system and an authentication method capable of reducing the decrease in authentication accuracy and improving the authentication speed.
  • In the present invention, a plurality of local three-dimensional regions on the face of the person to be authenticated are determined, a three-dimensional face feature value is calculated from the three-dimensional features of the face in each of these three-dimensional local regions, and this value is compared with a comparison face feature value prepared in advance to perform the authentication operation for the person to be authenticated.
  • FIG. 1 is a schematic configuration diagram showing an example of an authentication system according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing an example of the overall configuration of a controller in the authentication system.
  • FIG. 3 is a functional block diagram for explaining a face authentication function provided in the controller.
  • FIG. 4 is a schematic diagram showing an example of coordinates of feature points in each feature part of a face.
  • FIG. 5 is a schematic diagram for explaining calculation of three-dimensional coordinates of each characteristic part.
  • FIG. 6 is a schematic diagram showing an example of a standard model.
  • FIG. 7 is a three-dimensional graph for conceptually explaining the Gabor filter.
  • FIG. 8 is a schematic diagram for explaining a method of setting a rectangular area from each feature point of 3D face part shape data.
  • FIG. 9 is a schematic diagram for explaining a method of extracting (determining) a local patch region from 3D face part shape data using the rectangular region information set in FIG. 8.
  • FIG. 10 is a schematic diagram for explaining a method of setting a rectangular area from each feature point of 3D face part shape data.
  • FIG. 11 is a schematic diagram showing an example of each 3D point and each local patch region in 3D face part shape data.
  • FIG. 12 (A), (B) and (C) are diagrams for explaining the intersection determination.
  • FIG. 13 is a schematic diagram showing an example of a Bezier curved surface in extracting a three-dimensional face feature quantity.
  • FIG. 14 is a flowchart showing an example of face authentication operation according to the present embodiment.
  • FIG. 15 is a flowchart showing an example of the operation in step S9 of FIG.
  • FIG. 16 is a functional block diagram for explaining the face authentication function provided in another controller.
  • FIG. 17 is a flowchart showing an example of the operation of the authentication system shown in FIG.
  • FIG. 1 is a schematic configuration diagram showing an example of an authentication system 1 according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram illustrating an example of the overall configuration of the controller 10.
  • FIG. 3 is a functional block diagram for explaining the face authentication function provided in the controller 10.
  • FIG. 4 is a schematic diagram showing an example of the coordinates of feature points in each feature part of the face.
  • The authentication system 1 performs personal authentication based on the face (hereinafter referred to as face authentication), and includes a controller 10 and two photographing cameras (two-dimensional cameras; 2D cameras, hereinafter simply called "cameras") CA1 and CA2.
  • the cameras CA1 and CA2 are arranged so that the face of the authentication target person HM can be photographed from different positions (angles) with respect to the face position of the authentication target person HM.
  • The appearance information of the person to be authenticated HM obtained by this photographing, that is, two face images, is transmitted to the controller 10 via a communication line.
  • the image data communication method between the cameras CA1 and CA2 and the controller 10 is not limited to the wired method, and may be a wireless method.
  • The face image may be an image that includes a background as well as the face.
  • As shown in FIG. 2, the controller 10 is embodied by an information processing device such as a personal computer (PC), for example, and includes a CPU 2, a storage unit 3, a media drive 4, a display unit 5 such as a liquid crystal display, an input unit 6 including a keyboard 6a and a mouse 6b as a pointing device, and a communication unit 7 such as a network card.
  • the storage unit 3 includes a plurality of storage media such as a hard disk drive (HDD) 3a and a RAM (semiconductor memory) 3b.
  • The media drive 4 reads information recorded on a portable storage medium 8 such as a CD-ROM (Compact Disc Read Only Memory), a DVD (Digital Versatile Disk), a flexible disk, or a memory card.
  • That is, the controller 10 is equipped with drive devices, such as a CD-ROM drive, a DVD drive, a flexible disk drive, or a memory card drive, that can read these media.
  • The information supplied to the controller 10 is not limited to being supplied via the recording medium 8, and may be supplied via a network such as a LAN (Local Area Network) or the Internet.
  • The controller 10 may also be a dedicated controller (main unit control device) manufactured for this system, as long as it has the functions described below.
  • The controller 10 includes an image input unit 11, a face region detection unit 12, a face part detection unit 13, a face part 3D calculation unit 14, a posture / light source correction unit 15, a standard model storage unit 16, a two-dimensional authentication unit 17, a face area 3D calculation unit 18, a three-dimensional authentication unit 19, a similarity calculation unit 20, a registered data storage unit 21, and a determination unit 22.
  • the image input unit 11 inputs a face image of the person HM to be authenticated obtained by photographing with the cameras CA1 and CA2 from the cameras CA1 and CA2 to the controller 10.
  • The image input unit 11 includes a first image input unit 11a and a second image input unit 11b corresponding to the cameras CA1 and CA2, to which the face images transmitted from the cameras CA1 and CA2 are respectively input. Therefore, a total of two face images are input from the cameras CA1 and CA2.
  • The authentication system 1 of this embodiment performs two-dimensional authentication (2D authentication) and three-dimensional authentication (3D authentication) using the input face images (referred to here as multiple authentication), and makes a determination based on these results.
  • For this purpose, a 2D image and 3D shape data are required.
  • As an input device for acquiring the 2D image and the 3D shape data, one approach is to use multiple (2 to N) general 2D cameras (a stereo camera arrangement). In this case, the 3D shape of the face is calculated from two or more 2D images.
  • Alternatively, the 3D shape data can be acquired by using a 3D measuring device (3D camera), such as a non-contact 3D digitizer based on the light-section method.
  • In that case, a 3D measuring device such as the non-contact 3D digitizer described above is required, but the 3D shape data can be acquired directly by the 3D measurement device and need not be calculated from 2D images.
  • If a 3D measurement device of the type that combines a camera for acquiring 3D shape data with a camera for acquiring 2D images is used, it is not necessary to prepare a separate camera for acquiring 2D images as described above.
  • the face area detection unit 12 detects (identifies and extracts) a face area from the face image input to the image input unit 11.
  • The face area detection unit 12 includes a first face area detection unit 12a and a second face area detection unit 12b corresponding to the first image input unit 11a and the second image input unit 11b of the image input unit 11, and detects a face area (face area image) from the face images transmitted from the first image input unit 11a and the second image input unit 11b, respectively. More specifically, the face area detection unit 12 extracts (cuts out) the area in which the face exists from the face image, for example by performing template matching using a standard face image prepared in advance.
  • Alternatively, face area detection may be performed with a so-called neural network, which is trained on face region images of a plurality of people, stores the result as a learning dictionary, and compares newly input face images against it.
  • Another option is a Viola-Jones type detector, which stores various face area classifiers and applies them in stages, determining the face region while narrowing down the candidates as the cascade progresses (for example, P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, Kauai, HI, December 2001). This method is configured by combining a plurality of simple discriminant functions that use simple image features.
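  • As an illustration of this kind of staged (cascade) detection, the sketch below uses OpenCV's bundled Haar cascade classifier, one publicly available Viola-Jones style implementation; the patent does not prescribe any particular library, and the cascade file and parameters shown are illustrative assumptions.

```python
# Minimal sketch of Viola-Jones style face area detection (assumes OpenCV is installed).
import cv2

def detect_face_areas(image_path: str):
    # Load the bundled frontal-face Haar cascade (an example classifier file, not the patent's own).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Each stage of the cascade applies simple feature-based classifiers and
    # discards non-face windows early, as described above.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(60, 60))
    return faces  # list of (x, y, w, h) rectangles

if __name__ == "__main__":
    print(detect_face_areas("face.jpg"))
```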
  • Note that the first face area detection unit 12a and the second face area detection unit 12b may each detect the face area individually, or only one of them may detect the face area.
  • When both units detect face areas individually, the more accurate detection result may be adopted; when only one of them detects the face area, the face area in the other image can be detected with high accuracy by a corresponding-area search process. The same applies to the face part detection unit 13.
  • The face part detection unit 13 detects (extracts or calculates) characteristic parts (referred to as feature parts) of the face from the image of the face area detected by the face area detection unit 12. Detecting the characteristic parts of the face is called "face part detection".
  • The face part detection unit 13 includes a first face part detection unit 13a and a second face part detection unit 13b, which detect the positions of the feature parts (their coordinates on the image) from the face area images transmitted from the first face area detection unit 12a and the second face area detection unit 12b, respectively.
  • The feature parts of the face include the eyes (for example, the centers of the pupils, the outer and inner eye corners, and the upper and lower edges of the pupils), the eyebrows (for example, both ends and the middle of each eyebrow), the nose (for example, the edges of the nose, the lower center of the nose, and the nostrils), the mouth (for example, the left and right mouth corners and the upper and lower centers of the lips), and the tip of the lower jaw.
  • For example, the face part detection unit 13 calculates the coordinates of the feature points Q1 to Q23 of each feature part, as shown in FIG. 4.
  • Feature points Q1, Q3 and Q2, Q4 are both ends (corners) of the left and right eyes; feature points Q7, Q5 and Q8, Q6 are the upper and lower edges of the left and right pupils; feature points Q9, Q13 and Q10, Q14 are both ends of the left and right eyebrows; feature points Q11 and Q12 are the approximate centers of the left and right eyebrows; feature points Q15, Q16 and Q17, Q18 are the edges of the nose; feature point Q19 is the lower center of the nose; feature points Q20 and Q21 are both ends of the mouth; and feature points Q22 and Q23 are the upper and lower portions of the center of the lips. The set of feature points to be extracted can be increased or decreased as necessary. Each feature part can be detected by various methods, such as template matching using a standard template of the feature part.
  • the coordinates of the calculated feature points Q1 to Q23 are expressed as two-dimensional coordinates on each image input from the cameras CA1 and CA2.
  • For example, the coordinate value of the feature point Q20 is obtained in each of the two images G1 and G2 (see FIG. 5 described later). More specifically, the coordinates (x1, y1) of the feature point Q20 on the image G1 and the coordinates (x2, y2) of the feature point Q20 on the image G2 are calculated, with a corner of each of the images G1 and G2 taken as the origin O.
  • the face part detection unit 13 calculates the coordinates of each feature point from the image of the face region, and calculates the luminance value of each pixel in the region having the feature point as a vertex (referred to as a feature region). This is acquired as information (called texture information) in this area.
  • Since two images are input, the face part detection unit 13, for example, averages the luminance values of corresponding pixels in the corresponding feature regions of the two images (images G1 and G2), and uses the average luminance value of each pixel as the texture information of that feature region.
  • The method of detecting the facial parts is not limited to the above.
  • As a face part detection method, the method proposed in Japanese Patent Laid-Open No. 9-102043, "Detection of element positions in an image", may also be employed.
  • Alternatively, a method that detects a facial part from its shape by using auxiliary light, a method that uses learning by a neural network, or a method that uses frequency analysis by the Gabor wavelet transform or by an ordinary (non-Gabor) wavelet transform may be adopted.
  • The three-dimensional face data that combines the 3D coordinates M(j) of each feature point Qj is called "3D face part shape data".
  • The symbol "w" in equation (1-1) is a constant that is not 0 (zero) (w ≠ 0), and the symbol "P" represents the perspective projection matrix (the camera parameters) of each camera.
  • Equation (1-1) can be rewritten as the following equation (1-3).
  • The perspective projection matrix P is a 3 × 4 matrix; if each of its components is written out as in the following equation (1-4), the relationship of equation (1-1) between the space coordinates and the image coordinates becomes as shown in the following equations (1-5) and (1-6).
  • FIG. 5 is a schematic diagram for explaining the calculation of the three-dimensional coordinates of each feature part.
  • Here, P_ij represents the (i, j) component of P.
  • Equation (1-9) is a set of simultaneous linear equations in X, Y, and Z.
  • By solving these simultaneous equations, the coordinates (X, Y, Z) of the feature point in three-dimensional space can be obtained.
  • In equation (1-9), four equations are given for the three unknowns X, Y, and Z; this means that the four components x1, y1, x2, and y2 are not independent. The spatial coordinates of the other feature points are calculated in the same way.
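  • The triangulation described by equations (1-5) through (1-9) can be written compactly as a linear least-squares problem. The following is a minimal sketch that assumes the two 3 × 4 projection matrices P1 and P2 are already known from camera calibration; it is the standard direct linear triangulation corresponding to solving the four equations for the three unknowns X, Y, Z.

```python
# Minimal sketch: linear triangulation of a feature point from two views
# (assumes the 3x4 projection matrices P1, P2 are known from calibration).
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, pt1, pt2) -> np.ndarray:
    """pt1 = (x1, y1) in image G1, pt2 = (x2, y2) in image G2."""
    x1, y1 = pt1
    x2, y2 = pt2
    # Four linear equations in the homogeneous coordinates (X, Y, Z, 1),
    # obtained by eliminating the unknown scale factor w from w*m = P*M.
    A = np.vstack([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    # Least-squares solution via SVD; the 3D point is the right singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # (X, Y, Z)
```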
  • the posture / light source correction unit 15 performs posture variation correction and light source variation correction on the texture information calculated by the face part detection unit 13.
  • Posture fluctuation correction corrects the effect on the texture due to the difference in face posture, that is, orientation (tilt).
  • Light source fluctuation correction corrects the effect on the texture due to the difference in the direction (tilt) of the light source relative to the face.
  • For the posture variation correction and the light source variation correction of this texture information, the posture / light source correction unit 15 uses a standard model (standard stereo model; see FIG. 6), which is a general (standard) face model prepared in advance. <Posture variation correction>
  • First, the position and posture of the 3D face part shape data (the 3D coordinates M(j) of each feature point Qj) are corrected.
  • That is, the posture / light source correction unit 15 corrects the three-dimensional position so that the 3D face part shape data, that is, the 3D shape, most closely matches the 3D shape of the standard model (the shape of the 3D face part shape data itself does not change).
  • For example, when the face represented by the 3D face part shape data is facing sideways, the posture / light source correction unit 15 performs so-called model fitting based on the standard model and corrects the position so that the sideways-facing face matches the orientation (reference position) of the standard model face, for example so that it faces the front. This position is corrected based on the posture parameter t (pose parameter) shown in the following equation (2).
  • In equation (2), the symbol "s" represents the scale conversion index, the symbols "φ, θ, ψ" represent the transformation parameters indicating rotational displacement (tilt), and the symbols "tx, ty, tz" represent the transformation parameters indicating translational displacement along the three orthogonal axes; the superscript "T" denotes transposition, so that t = (s, φ, θ, ψ, tx, ty, tz)^T.
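  • As a concrete illustration of this posture parameter, the sketch below applies the corresponding scale, rotation, and translation to a set of 3D points. The Euler-angle convention (rotations about the z, y, and x axes in that order) is an assumption made for illustration; the patent does not fix a particular parameterization of the rotation.

```python
# Minimal sketch: applying a pose parameter t = (s, phi, theta, psi, tx, ty, tz)
# to a set of 3D feature points (N x 3 array). Euler-angle convention is assumed.
import numpy as np

def apply_pose(points: np.ndarray, s, phi, theta, psi, tx, ty, tz) -> np.ndarray:
    cz, sz = np.cos(phi), np.sin(phi)      # rotation about z
    cy, sy = np.cos(theta), np.sin(theta)  # rotation about y
    cx, sx = np.cos(psi), np.sin(psi)      # rotation about x
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    R = Rz @ Ry @ Rx
    # Scale, rotate, then translate every point.
    return s * (points @ R.T) + np.array([tx, ty, tz])
```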
  • Next, the two-dimensional texture (2D texture) of each feature region acquired by the face part detection unit 13 is obtained, and the texture information is corrected so that the texture corresponds to the front direction (reference direction).
  • In other words, the texture information corresponding to the case where the face is photographed from the front is reconstructed, and a properly normalized texture image is created.
  • The texture information correction is not limited to the above method.
  • For example, the texture (texture image) of each feature region acquired by the face part detection unit 13 may be pasted (mapped) onto the corresponding region (a polygon, described later) of the standard model and, as above, corrected so that a frontal textured face image is obtained. This makes it possible to handle the texture information without being affected by differences in posture.
  • The frontal textured face image obtained by the correction may also be projected onto cylindrical coordinates (a cylindrical surface) arranged around the standard model so that images can easily be compared with each other.
  • The texture information of the projection image obtained by this projection is not affected by posture variations and, because it is pure facial texture information that is also unaffected by changes in facial shape due to facial expressions, it is very useful as information used for authentication.
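  • The projection onto a cylindrical surface mentioned above can be sketched as follows; it assumes that the face model is centered on the y axis of the cylinder, and the image resolution and the nearest-vertex mapping of luminance values are illustrative choices, not values taken from the patent.

```python
# Minimal sketch: projecting textured 3D vertices onto a cylindrical map image.
# vertices: (N, 3) array of (x, y, z); luminances: (N,) array of texture values.
import numpy as np

def project_to_cylinder(vertices, luminances, width=256, height=256):
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    theta = np.arctan2(x, z)                      # azimuth around the cylinder axis
    u = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    y_norm = (y - y.min()) / (y.max() - y.min() + 1e-9)
    v = ((1.0 - y_norm) * (height - 1)).astype(int)  # height along the axis
    image = np.zeros((height, width))
    image[v, u] = luminances                      # nearest-vertex splat (no interpolation)
    return image
```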
  • In the light source variation correction, the luminance is corrected for each feature region. More specifically, the luminance of each pixel (node) in each feature region acquired by the face part detection unit 13 is corrected so as to be equal to the luminance of the corresponding pixel of the standard model; in other words, the luminance value is controlled using the tilt angle (orientation) of the surface inside the feature region as a parameter.
  • the standard model storage unit 16 stores information on the standard model of the face in advance.
  • FIG. 6 is a schematic diagram showing an example of a standard model.
  • this standard model is composed of vertex data and polygon data.
  • the vertex data is a set of coordinates of the vertex U of the feature part in the standard model, and has a one-to-one correspondence with the 3D coordinate of each feature point Qj.
  • Polygon data is obtained by dividing the surface of a standard model into small polygons, for example, polygons such as triangles and quadrangles, and expressing these polygons as numerical data. Each polygon includes pixel luminance information and the like used in the light source fluctuation correction.
  • the standard model may be average face data obtained by averaging the data of a plurality of people's faces.
  • the vertex of each polygon of the standard model may be configured by using an intermediate point other than the feature point Qj together with the feature point Qj. This midpoint is calculated by interpolation.
  • The two-dimensional authentication unit (2D authentication unit) 17 calculates a 2D face feature value (local 2D face feature value) from the texture information of each feature region that has been subjected to the posture variation correction and the light source variation correction by the posture / light source correction unit 15.
  • the 2D authentication unit 17 includes a corrected image acquisition unit 17a and a 2D feature quantity extraction unit 17b.
  • The corrected image acquisition unit 17a acquires the corrected image (referred to as a corrected texture image) obtained by the posture / light source correction unit 15, in which the texture image has been subjected to the posture variation correction and the light source variation correction. That is, the corrected image from the posture / light source correction unit 15 is input to the corrected image acquisition unit 17a.
  • the 2D feature amount extraction unit 17b extracts a 2D face feature amount from the corrected texture image acquired by the corrected image acquisition unit 17a.
  • This 2D face feature extraction is performed by a method that uses Gabor wavelet transform, which is a technique that extracts local grayscale information (contour lines in a specific direction, etc.) as a feature.
  • This Gabor wavelet transform can be used to detect the above-mentioned facial part, and can also be used to extract the grayscale information here. More specifically, the grayscale information obtained by applying the Gabor filter to the corrected texture image with the 2D coordinate point of the corrected texture image as a reference is extracted as a 2D face feature amount.
  • FIG. 7 is a three-dimensional graph for conceptually explaining the Gabor filter.
  • Here, the Gabor filter is a spatial filter using a kernel in which a sine function (imaginary part) and a cosine function (real part) are localized by a Gaussian function, and it performs a transformation (the Gabor wavelet transform) that can extract local information, such as grayscale contrast, from an image.
  • The Gabor wavelet transform fixes the shape of the kernel and expands or contracts it to create kernels with various periods, and is a transformation that extracts the features corresponding to each spatial period (Gabor feature values; here, grayscale information).
  • the feature vector (two-dimensional feature vector; 2D feature vector) representing the feature quantity of the spatial period is an array of Gabor wavelet coefficients having different size and direction characteristics.
  • The Gabor wavelet transform is based on a function that minimizes the uncertainty of position and frequency, and is expressed by the following equation (3).
  • the k vector in the above equation (3) is a constant that determines the wavelength and direction of the wave.
  • The second term in the brackets [ ] is added so that the DC component of the function becomes 0 (zero) in order to satisfy the wavelet reconstruction condition, that is, so that the following equation (4) holds for the Fourier transform.
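  • A widely used form of such a kernel (a complex plane wave localized by a Gaussian envelope, with a subtracted term that removes the DC component) can be sketched as follows; the constant σ = 2π and the kernel size are illustrative assumptions rather than values taken from equation (3) itself.

```python
# Minimal sketch: a complex Gabor wavelet kernel with DC compensation, and
# extraction of amplitude/phase responses at a point of a grayscale image.
import numpy as np

def gabor_kernel(k_vec, size=31, sigma=2 * np.pi):
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    k2 = k_vec[0] ** 2 + k_vec[1] ** 2
    gauss = (k2 / sigma ** 2) * np.exp(-k2 * (xs ** 2 + ys ** 2) / (2 * sigma ** 2))
    wave = np.exp(1j * (k_vec[0] * xs + k_vec[1] * ys))
    dc = np.exp(-sigma ** 2 / 2)          # term that removes the DC component
    return gauss * (wave - dc)

def gabor_response(image, point, k_vec):
    """Amplitude and phase of the filter response centered at `point` = (row, col).
    The point must be at least half a kernel away from the image border."""
    kern = gabor_kernel(k_vec)
    half = kern.shape[0] // 2
    y, x = point
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    resp = np.sum(patch * kern)
    return np.abs(resp), np.angle(resp)
```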
  • The face area 3D calculation unit 18 calculates a high-density (dense) 3D shape of the face (referred to as 3D face dense shape data) from the image of the face area detected by the face area detection unit 12, that is, in this embodiment, from the stereo image captured by the stereo cameras.
  • The "high-density data" referred to here is dense data covering the face, not only data for the eye and nose feature parts detected by the face part detection unit 13.
  • Each of the dense data acquisition points that make up the 3D face dense shape data is called a "3D point" (or 3D measurement point).
  • That is, the 3D face dense shape data is face shape data composed of a plurality of 3D points.
  • The phase-only correlation (POC) method is one of the correlation calculation methods using the Fourier transform: two Fourier-transformed images are normalized with respect to their amplitude spectra and then combined. In other words, given two images, the two-dimensional discrete Fourier transform of each image is normalized by its amplitude component, a composite phase spectrum is obtained by computing their product, and the inverse Fourier transform is applied to the result. If the two images are similar, the POC function has a very sharp peak, and the height of the correlation peak is useful as a measure of image similarity.
  • Furthermore, the coordinates of the peak correspond to the relative displacement between the two images. Because of these characteristics, the phase-only correlation method can obtain corresponding points between images with high precision without being strongly affected by luminance variations or noise. In other words, the phase-only correlation method is a process of searching for matching points between different images with high accuracy, that is, a matching process. Highly accurate 3D face dense shape data is then obtained by performing 3D reconstruction processing on the acquired corresponding points. As described above, the present embodiment assumes that a plurality of 2D cameras is used, so the high-density 3D shape is calculated from a plurality of images, for example by the phase-only correlation method; when a 3D measurement device that directly measures a high-density 3D shape is used instead, the shape need not be calculated from a plurality of images, and such a method is unnecessary.
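  • The phase-only correlation described above can be sketched in a few lines, assuming two equally sized grayscale image patches; the peak height measures their similarity and the peak location gives the relative displacement.

```python
# Minimal sketch of the phase-only correlation (POC) between two image patches
# of identical size. Returns the peak height and the displacement at the peak.
import numpy as np

def phase_only_correlation(f: np.ndarray, g: np.ndarray):
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    # Normalize the cross spectrum by its amplitude: only phase information remains.
    cross = F * np.conj(G)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    r = np.fft.fftshift(r)
    peak_idx = np.unravel_index(np.argmax(r), r.shape)
    center = np.array(r.shape) // 2
    shift = np.array(peak_idx) - center     # (dy, dx) relative displacement
    return r[peak_idx], shift
```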
  • In the corresponding point search, first, reduced images are created as multi-resolution images.
  • Next, a corresponding point search is executed on the reduced images at the pixel level.
  • Then, while narrowing down the corresponding point candidates, the reduced image is enlarged by a predetermined factor.
  • A corresponding point search is executed at the pixel level around the candidates in the enlarged image.
  • The enlargement and the search are repeated until the image reaches the same size as the original image before reduction.
  • Finally, a corresponding point search at the sub-pixel level is performed at the same size as the original image.
  • The three-dimensional authentication unit (3D authentication unit) 19 calculates 3D face feature values (local 3D face feature values) based on the 3D face dense shape data calculated by the face region 3D calculation unit 18 and the 3D face part shape data calculated by the face part 3D calculation unit 14.
  • the 3D authentication unit 19 includes a 3D local patch extraction unit (3D local patch extraction unit) 19a and a 3D feature quantity extraction unit (3D feature quantity extraction unit) 19b.
  • the 3D local patch extraction unit 19a extracts (calculates) a 3D local patch region from the 3D face dense shape data and the 3D face part shape data (feature part).
  • the three-dimensional local patch region is simply referred to as “local patch region”.
  • FIG. 8 is a schematic diagram for explaining a method of setting a rectangular area from each feature point of 3D face part shape data.
  • FIG. 9 is a schematic diagram for explaining a method of extracting (determining) a local patch region from 3D face part shape data using the rectangular region information set in FIG.
  • FIG. 10 is a schematic diagram for explaining a method of setting a rectangular area from each feature point of the 3D face part shape data.
  • FIG. 11 is a schematic diagram showing an example of each 3D point and each local patch region in the 3D face part shape data.
  • The 3D coordinates M(j) (referred to as feature point coordinates) of each feature point Qj of each feature part of the 3D face part shape data lie on the high-density 3D shape (the 3D face dense shape data).
  • The local patch area is an area defined on the 3D face dense shape data by its relative relationship to the feature point coordinates of the 3D face part shape data. More specifically, for example, as shown in FIG. 8, on a plane T (local patch extraction plane) defined by three feature points Qj, such as two points a and b on the right eye and a point c at the right side of the nose, a rectangular region S whose four corner points are defined as linear sums of the vectors ca and cb is set as, for example, the right cheek region.
  • FIG. 9 conceptually shows the 3D points when the 3D face dense shape data is viewed from above the face. The local patch extraction plane may also be determined from four or more feature points.
  • That is, as shown in FIG. 10, each local patch extraction plane is set from the feature point coordinates 201 of the 3D face part shape data, and a predetermined number of rectangular areas, for example a rectangular area 211 (cheek part) and a rectangular area 212 (forehead part), are set on this local patch extraction plane.
  • the rectangular area may be arbitrarily set to an area including facial features such as eyes, nose, mouth, and eyebrows as shown in rectangular areas 213, 214, and 215, for example. It is preferable that the facial feature part to be set is a part where the facial feature appears more prominently.
  • Each rectangular area is set in this way, and, as shown in FIG. 11, the local patch areas 301, 302, 303, ... corresponding to these rectangular areas are determined. In FIG. 11, the plurality of points (plot points) 311 arranged over the entire face indicate the 3D points of the 3D face dense shape data, and in particular the points indicated by reference numeral 312 (the dark dots in the figure) are the 3D points that make up the local patch areas.
  • The local patch region 302 corresponds to the local patch region P of the cheek described above.
  • each extracted local patch region is preferably arranged at a symmetric position on the face.
  • In particular, the eye area may be hidden by sunglasses or the like, and the mouth area may not be 3D-measurable due to the influence of wrinkles or the like, so it is desirable that the extracted local patch areas include at least parts that are not easily hidden and are easy to measure in 3D, such as the nose and the cheeks (the forehead is likely to be hidden by the hair).
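  • The plane-and-rectangle construction described above, combined with the perpendicular-projection test detailed later in steps S21 to S23, can be sketched as follows; the coefficient ranges that define the rectangle as a linear sum of the vectors ca and cb are illustrative assumptions.

```python
# Minimal sketch: extract the 3D points of a local patch region.
# a, b, c are feature point coordinates (e.g. two right-eye points and a right-nose point);
# dense_points is the (N, 3) array of 3D points of the dense face shape data.
import numpy as np

def extract_local_patch(a, b, c, dense_points,
                        u_range=(0.0, 1.0), v_range=(0.0, 1.0)):
    a, b, c = map(np.asarray, (a, b, c))
    ca, cb = a - c, b - c                      # basis vectors of the extraction plane T
    normal = np.cross(ca, cb)
    normal /= np.linalg.norm(normal)
    rel = dense_points - c
    # Foot of the perpendicular dropped from each 3D point onto the plane T,
    # expressed in the (ca, cb) basis.
    rel_in_plane = rel - np.outer(rel @ normal, normal)
    basis = np.vstack([ca, cb]).T              # 3 x 2
    uv, *_ = np.linalg.lstsq(basis, rel_in_plane.T, rcond=None)
    u, v = uv[0], uv[1]
    # Keep the points whose perpendicular foot lies inside the rectangular region S.
    mask = (u >= u_range[0]) & (u <= u_range[1]) & \
           (v >= v_range[0]) & (v <= v_range[1])
    return dense_points[mask]
```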
  • the method of extracting the local patch region is not limited to this.
  • For example, the local patch region can be extracted by preparing in advance a reference partial model shape representing, for example, what shape the cheek portion should have, searching for the position where this partial model shape best fits the 3D face dense shape data, and using the fitted position as the local patch region of the cheek portion.
  • That is, a reference three-dimensional (3D) patch shape (reference patch shape; reference partial model shape) corresponding to the local patch region to be extracted, for example a patch model obtained from average face (standard face) data of the local patch itself, is stored and saved in advance; this patch model is compared with the 3D face dense shape data, for example by comparing the similarity of their shapes, and the region whose shape is most similar (closest) to the patch model shape is determined as the local patch region.
  • Further, the local patch region extraction method may be a method of determining, as the local patch region, the region of the 3D face dense shape data included in a region defined in advance on the two-dimensional image. More specifically, as shown in FIG. 10, a region that can be defined based on the feature points Qj detected by the face part detection unit 13 is defined as a selection region on the two-dimensional image, and the region of the 3D face dense shape data corresponding to this selection region is determined as the local patch region.
  • In this case, if the region on the 2D image is defined in advance, the corresponding point search and 3D reconstruction need only be performed within that region instead of measuring the entire 3D face dense shape data, so that only the shape of the local patch region is measured and the processing time can be shortened.
  • Further, the local patch region extraction method may be a method of determining the local patch region by performing an intersection determination with the shape of a standard model calculated from an average face. More specifically, first, a standard model is prepared in advance, the local patch areas to be extracted are defined on this standard model, and the standard model and the local patch areas on it are stored and saved. Next, the 3D position is corrected so that the 3D face part shape data best matches the 3D shape of the standard model. Then, after this position correction, a triangular patch, which is a triangular area on the standard model, is projected onto the triangular patch of the 3D face part shape data, with the projection center point of the standard model as the center of projection.
  • The triangular patch of the 3D face part shape data is given as a patch composed of a reference measurement point and its adjacent measurement points. It is then determined whether or not the projected triangular patch of the standard model and the triangular patch of the 3D face part shape data intersect; if they intersect, the triangular patch of the 3D face part shape data is determined to belong to the local patch region. There are three cases of this intersection, and if any one of them is satisfied, it is determined that the triangular patches intersect.
  • FIG. 12 is a diagram for explaining the intersection determination.
  • FIG. 12 (A) shows the first case determined to be an intersection, FIG. 12 (B) shows the second case, and FIG. 12 (C) shows the third case.
  • In FIG. 12, the net pattern represents the standard model and the hatched pattern represents the measurement data.
  • In the map image projection used for this determination, for a point A (r, θ, φ) in spherical coordinates, θ is the angle between OA and the z axis, B is the intersection with the xy plane of the perpendicular dropped from the point A onto the xy plane, and φ is the angle between OB and the x axis; width and height denote the width and height of the projected map image.
  • The point cloud of the 3D face part shape data is processed in the same manner and projected onto the map image, and the region of the 3D face part shape data included in the labeled region is determined as the local patch region.
  • The 3D feature quantity extraction unit 19b extracts a 3D face feature quantity from the information of the local patch regions extracted by the 3D local patch extraction unit 19a. More specifically, a curved surface is calculated for each local patch based on the information of the plurality of 3D points in each local patch region. The calculation of the curved surface is executed, for example, by a method using a curvature map. In this case, the local patch area is first normalized. For example, in the case of a rectangular local patch area, the normalization is performed by applying a three-dimensional affine transformation so that the vertices of the rectangular area become the vertices of a predetermined standard rectangular area.
  • By this transformation (the three-dimensional affine transformation), the coordinate values of the 3D points of the local patch region are aligned with standard coordinate values.
  • Then, the normalized local patch region is sampled uniformly, and the curvature at each sampling point is used as the shape feature (3D face feature value) of the local patch region.
  • this method also uses the curvature map to compare the curvature of the local patch area and the standard rectangular area.
  • The curvature can be calculated, for example, by using the method disclosed in "Face Identification Using 3D Shape Feature Extraction Based on Curvature", IEICE Transactions, Vol. J76-D2, No. 8 (August 1993), pp. 1595-1603.
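  • One common way to obtain such curvature values is to fit a local quadratic surface around each sampling point and evaluate its Gaussian and mean curvature. The sketch below treats the normalized patch as a height field z(x, y); this is an illustrative simplification under that assumption, not the exact procedure of the cited paper.

```python
# Minimal sketch: curvature at a sampling point of a local patch treated as a
# height field. `neighbors` is an (N, 3) array of nearby 3D points (x, y, z).
import numpy as np

def quadric_curvatures(neighbors: np.ndarray, point: np.ndarray):
    x, y, z = (neighbors - point).T
    # Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f by least squares.
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # Curvatures of the fitted surface evaluated at the sampling point (x = y = 0).
    denom = 1.0 + d * d + e * e
    K = (4 * a * c - b * b) / denom ** 2                                  # Gaussian curvature
    H = (a * (1 + e * e) - b * d * e + c * (1 + d * d)) / denom ** 1.5   # mean curvature
    return K, H
```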
  • the extraction of the 3D face feature value is not limited to the above method.
  • a method of extracting the 3D face feature value by surface approximation may be used.
  • This curved surface approximation can use various curved surfaces such as a Bezier curved surface, a bicubic curved surface, a rational Bezier curved surface, a B-spline curved surface, and a NURBS (Non Uniform Rational B-Spline) curved surface.
  • Here, for example, a Bezier curved surface is used.
  • FIG. 13 is a schematic diagram showing an example of a Bezier curved surface in the extraction of the three-dimensional face feature amount.
  • As shown in FIG. 13, the Bezier curved surface is a curved surface F defined by control points P arranged in a grid: P00, P01, and so on.
  • the control point P defines the four corner points and the rough shape of the curved surface F.
  • A Bezier surface is a polynomial surface defined on the parameter region u ∈ [0, 1], v ∈ [0, 1].
  • A surface that is of degree n in u and of degree m in v is called an n × m degree surface and is expressed by (n + 1) × (m + 1) control points.
  • Such a Bezier surface is given by the following equation (5):
F(u, v) = Σ_{i=0..n} Σ_{j=0..m} B_i^n(u) B_j^m(v) P_ij   (5)
where B_i^n(u) = C(n, i) u^i (1 - u)^(n-i) is the Bernstein basis function.
  • the shape information (curved surface information) of the approximated Bezier curved surface F is obtained as the patch shape information of the local patch region.
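  • Equation (5) can be evaluated directly from the control point grid. The sketch below computes a point on an n × m degree Bezier surface using the Bernstein basis; fitting the control points to the measured 3D points of a patch (for example by least squares) is a separate step that is not shown.

```python
# Minimal sketch: evaluating an n x m degree Bezier surface F(u, v) from its
# (n+1) x (m+1) grid of 3D control points, as in equation (5).
import numpy as np
from math import comb

def bernstein(n: int, i: int, t: float) -> float:
    return comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))

def bezier_surface_point(control: np.ndarray, u: float, v: float) -> np.ndarray:
    """control has shape (n+1, m+1, 3); u and v are in [0, 1]."""
    n, m = control.shape[0] - 1, control.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += bernstein(n, i, u) * bernstein(m, j, v) * control[i, j]
    return point
```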
  • The collection of the patch shape information for each local patch area of the face, that is, a 3D feature vector, constitutes the 3D face feature value.
  • the present invention is not limited to this, and the information on the relative positional relationship between each local patch region (or each patch shape information), that is, the mutual distance, inclination, etc., is further added to the total 3D facial feature information. Information may be added. In this case, since it becomes possible to obtain “global shape information” indicating the overall characteristics of the face, the 3D face feature amount is more suitable for personal authentication.
  • The local patch regions from which the 3D face feature value is extracted preferably include at least a 3D local patch region covering a part other than the facial feature parts (the eyes, eyebrows, nose, mouth, and so on).
  • This is because the 3D face feature value can be extracted from parts that have few or no features in the 2D feature quantity (feature parts in the 2D image), that is, from local patch regions that include flat parts with little change in surface irregularities, such as the forehead or the cheek.
  • Since the 3D face feature quantity can be handled as a 3D feature vector (vector quantity) in this way, the calculated 3D face feature quantity (3D feature vector), or the comparison feature quantity prepared in advance (described later), that is, the comparison 3D feature vector corresponding to the 3D feature vector of the 3D face feature quantity (comparison vector quantity), can be registered (stored) in the storage unit 3 of the controller 10. Compared with registering, for example, the above-mentioned 3D face dense shape data itself, registering 3D feature vectors requires less registration data; in other words, data handling is improved and, for example, the memory capacity can be reduced.
  • The similarity calculation unit 20 evaluates the similarity between the face feature values of the comparison target person registered in advance (referred to as comparison feature values) and the face feature values of the authentication target person HM calculated above, that is, the 2D face feature value (2D feature vector) and the 3D face feature value (3D feature vector). More specifically, the similarity calculation unit 20 performs similarity calculations based on the comparison feature values, the 2D face feature value, and the 3D face feature value, calculating a two-dimensional similarity (2D similarity) and a three-dimensional similarity (3D similarity), and then calculates a multiple similarity using these 2D and 3D similarities. First, the calculation of the 2D similarity is explained.
  • The 2D similarity L between the authentication target person HM and the comparison target person is given as the average of the sum of the similarities S(Ji, Ji') of the feature vectors Ji extracted (generated) by the 2D feature quantity extraction unit 17b.
  • Assuming that the feature quantity of a calculated feature vector is G and the registered feature quantity is G', the similarity S(G, G') is expressed by the following equation (8).
  • This equation (8) has a form in which the correlation of amplitude is weighted by the similarity of the phase angle.
  • In equation (8), N is the number of complex Gabor filters, the symbol "a" represents the amplitude, and the symbol "φ" represents the phase; the k vector of the j-th filter is a vector having the direction of the two-dimensional wave and the magnitude of the frequency, and is given by the following equation (9).
  • the calculation of the 2D similarity can be performed by the Euclidean distance as in the 3D similarity calculation described later.
  • The multiple similarity, which is the overall similarity between the authentication target person HM and the comparison target person, is calculated as a weighted sum of the 2D similarity and the 3D similarity.
  • the multiple similarity is indicated by Re.
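  • The 2D similarity (the average of the per-feature-vector similarities) and the weighted-sum fusion can be sketched as follows. The weight values are illustrative assumptions, and both terms are treated here as distance-like values (smaller meaning more similar) so that the result is consistent with the decision rule Re < TH1 described below.

```python
# Minimal sketch: the 2D similarity as an average of per-feature-vector
# similarities, combined with the 3D similarity into the multiple similarity Re
# by a weighted sum. Weights are illustrative assumptions; both inputs are
# treated as distance-like values (smaller = more similar).
from typing import Sequence

def similarity_2d(per_vector_sims: Sequence[float]) -> float:
    # Average of the similarities S(Ji, Ji') over all extracted feature vectors.
    return sum(per_vector_sims) / len(per_vector_sims)

def multiple_similarity(sim_2d: float, sim_3d: float,
                        w2d: float = 0.5, w3d: float = 0.5) -> float:
    # Weighted sum of the per-modality similarities.
    return w2d * sim_2d + w3d * sim_3d
```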
  • the registration data storage unit 21 stores information on face feature amounts (comparison feature amounts and comparison face feature amounts) of a comparison target prepared in advance.
  • the determination unit 22 performs authentication determination based on the multiple similarity Re.
  • the method differs between the case of face verification (Verification) and the case of face identification (Identification) as shown in (a) and (b) below.
  • (a) Face verification determines whether the input face (the face of the person to be authenticated HM) is a specific registrant.
  • In this face verification, the similarity between the face feature value of the specific registrant, that is, the comparison target person (the comparison feature value), and the face feature value of the authentication target person HM is compared with a predetermined threshold value, and the identity of the authentication target person HM with the comparison target person is determined. More specifically, when the multiple similarity Re is smaller than a predetermined threshold TH1, the authentication target person HM is determined to be the same person as the comparison target person.
  • Information on the threshold TH1 is stored in the determination unit 22; alternatively, the information on the threshold TH1 may be stored in the registration data storage unit 21.
  • (b) Face identification determines who the input face belongs to. In this face identification, the similarities between the face feature values of all registered persons (comparison target persons) and the face feature value of the person HM to be authenticated are calculated, and the identity of the person HM to be authenticated with each comparison target person is determined. The comparison target person with the highest identity among the plurality of comparison target persons is then determined to be the same person as the authentication target person HM. More specifically, the comparison target person corresponding to the minimum multiple similarity Re (Remin) among the multiple similarities Re between the authentication target person HM and the plurality of comparison target persons is determined to be the same person as the authentication target person HM.
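  • Both decision modes can be written directly in terms of the multiple similarity Re (smaller values meaning a better match): verification compares Re against the threshold TH1, and identification selects the registrant with the minimum Re. The data structures used here are illustrative.

```python
# Minimal sketch: face verification and face identification decisions based on
# the multiple similarity Re (smaller = more similar).
from typing import Dict, Optional

def verify(re_value: float, th1: float) -> bool:
    # (a) Verification: accepted as the claimed registrant if Re < TH1.
    return re_value < th1

def identify(re_per_registrant: Dict[str, float]) -> Optional[str]:
    # (b) Identification: choose the registrant with the minimum multiple similarity Re.
    if not re_per_registrant:
        return None
    return min(re_per_registrant, key=re_per_registrant.get)
```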
  • FIG. 14 is a flowchart showing an example of the face authentication operation according to the present embodiment.
  • the face image of the person HM to be authenticated is acquired by photographing with the cameras CA1 and CA2 (step Sl).
  • the two face images obtained by the photographing are input to the controller 10 (image input unit 11) (step S2).
  • the face area detection unit 12 detects a face area image from each face image input to the image input unit 11 (step S3).
  • the face part detection unit 13 detects the facial feature part, that is, the coordinates of the feature point and the texture information of the feature area (step S4).
  • the face part 3D calculation unit 14 calculates the three-dimensional coordinates (3D face part shape data) of each feature part from the coordinates (feature point coordinates) of the feature part of the face detected by the face part detection unit 13. Calculated (step S5). Further, the posture / light source correction unit 15 performs posture variation correction and light source variation correction on the texture information detected by the face part detection unit 13 (step S6). The 2D authentication unit 17 calculates a 2D face feature amount from the corrected texture image of each feature region that has been corrected for the posture variation correction and the light source variation correction (step S7).
  • On the other hand, the face area 3D calculation unit 18 calculates 3D face dense shape data composed of a plurality of 3D points from the face area images (stereo images) detected by the face area detection unit 12 (step S8).
  • Next, the 3D local patch extraction unit 19a calculates three-dimensional local patch regions from the 3D face dense shape data calculated by the face region 3D calculation unit 18 and the 3D face part shape data calculated by the face part 3D calculation unit 14 in step S5 (step S9).
  • Then, the 3D feature quantity extraction unit 19b calculates a 3D face feature value from the information of the local patch regions calculated by the 3D local patch extraction unit 19a (step S10).
  • The similarity calculation unit 20 then compares the face feature values of the comparison target person registered in advance (comparison feature values) with the local 2D face feature value and 3D face feature value calculated in steps S7 and S10; that is, similarities are calculated based on the comparison feature values, the 2D face feature value, and the 3D face feature value, and the multiple similarity is calculated from the resulting 2D similarity and 3D similarity (step S11). Based on the multiple similarity, the determination unit 22 performs the authentication determination for face verification or face identification (step S12).
  • FIG. 15 is a flowchart showing an example of the operation in step S9 in FIG.
  • In step S9, the 3D local patch extraction unit 19a first sets (calculates) the local patch extraction plane T from the feature points (3D coordinates) of each feature part calculated by the face part 3D calculation unit 14, that is, from the 3D face part shape data (step S21).
  • a rectangular area S (a partial area described later) is set on the set local patch extraction plane T (step S22).
  • Then, the local patch area P corresponding to the rectangular area S is set. That is, among the plurality of 3D points constituting the 3D face dense shape data, the 3D points whose perpendiculars dropped onto the local patch extraction plane T fall within the rectangular area S are identified, and the region composed of the identified 3D points is set as the local patch region P (step S23).
  • As described above, in the authentication system 1 of the present embodiment, the three-dimensional shape acquisition unit (the face region detection unit 12 and the face region 3D calculation unit 18) acquires the entire 3D shape information (overall 3D shape; 3D face dense shape data) of the face of the authentication target person, and the local region determination unit (3D local patch extraction unit 19a) determines, from the 3D shape information acquired by the three-dimensional shape acquisition unit, a plurality of 3D local regions (local patch regions) that are local regions within the overall 3D shape.
  • The 3D feature quantity calculation unit (3D feature quantity extraction unit 19b) calculates, from the local 3D shape information in the 3D local regions determined by the local region determination unit, the 3D face feature value, which is local region shape information relating to the shape of the 3D local regions and is a three-dimensional feature value of the face. Then, in order to perform the authentication operation for the authentication target person HM, the feature quantity comparison unit (the similarity calculation unit 20 and the determination unit 22) compares the 3D face feature value calculated by the 3D feature quantity calculation unit with the comparison face feature value prepared in advance.
  • Similarly, in the authentication method of the present embodiment, in the first step, information on the entire 3D shape, which is the overall three-dimensional shape of the face of the person to be authenticated, is acquired, and in the second step, a plurality of 3D local regions that are local regions within the entire 3D shape are determined from the entire 3D shape information.
  • Then, the 3D face feature value, which is local region shape information relating to the shape of the 3D local regions and is a three-dimensional feature value of the face, is calculated.
  • Finally, the 3D face feature value is compared with the comparison face feature value prepared in advance to perform the authentication operation for the person to be authenticated HM.
  • In this way, a plurality of 3D local regions is determined from the entire 3D shape of the face of the person HM to be authenticated, the 3D face feature value is calculated from the local 3D shape information in the 3D local regions, and the authentication operation for the person to be authenticated is performed by comparing the 3D face feature value with the comparison face feature value. Therefore, instead of using the information on the entire 3D shape of the face as it is, a plurality of local areas (3D local regions) is extracted from the entire 3D shape, and authentication is performed based on these extracted 3D local regions, so that the decrease in authentication accuracy caused by partial hiding can be reduced and the authentication speed can be improved.
  • Further, the 3D shape acquisition unit includes a two-dimensional image acquisition unit (the cameras CA1 and CA2) that acquires a 2D image of the face, a feature part extraction unit (the face part detection unit 13) that extracts a characteristic part of the face from the 2D image acquired by the two-dimensional image acquisition unit, and a 3D coordinate calculation unit (the face part 3D calculation unit 14) that calculates the 3D coordinates M(j) of the feature parts extracted by the feature part extraction unit; the local region determination unit determines the 3D local regions based on the 3D coordinates of the feature parts calculated by the 3D coordinate calculation unit.
  • The first step includes a fifth step of acquiring a 2D image of the face, and in the sixth step, a feature part that is a characteristic part of the face is extracted from the 2D image.
  • In the seventh step, the 3D coordinates of the feature part are calculated.
  • In the second step, the 3D local regions are determined based on the 3D coordinates of the feature part.
  • In this way, the characteristic part of the face is extracted from the 2D image, its 3D coordinates are calculated, and the 3D local regions are determined on the basis of those coordinates. When determining a 3D local region, it can therefore be associated with the information of the two-dimensional feature part, and highly accurate authentication can be performed by using the feature part information together with the 3D local region information.
  • The local region determination unit sets a partial region of a predetermined shape (for example, a rectangular region S) in a plane (local patch extraction plane T) determined from the 3D coordinates, and determines the region of the overall 3D shape corresponding to that partial region as the 3D local region.
  • That is, a partial region of a predetermined shape is set in the plane determined from the 3D coordinates of the feature part, and the region of the overall 3D shape corresponding to this partial region is determined as the 3D local region.
  • the 3D local region can be easily determined from the 3D coordinates of the feature part.
  • The overall 3D shape information is face shape data composed of a plurality of 3D points, and the local region determination unit determines, as the 3D local region (local patch region), the region composed of the 3D points whose perpendiculars, dropped virtually from each 3D point onto the plane, fall within the partial region.
  • In other words, a perpendicular is dropped from each 3D point onto the plane, and the 3D points whose perpendicular feet are included in the partial region form the 3D local region.
  • A 3D local region corresponding to the partial region can thus be determined easily.
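  • As a rough illustration of this projection-based patch extraction, the following Python sketch selects the 3D points whose orthogonal projections onto a local patch extraction plane fall inside a rectangular region; the function name, the way the plane is specified (a center point and a normal), and the rectangle half-sizes are assumptions made only for this sketch.

    import numpy as np

    def extract_local_patch(points, center, normal, half_w, half_h):
        """Select the 3D points whose feet of perpendicular onto the plane
        (center, normal) fall inside a half_w x half_h rectangle.
        points: (N, 3) array of dense face shape data."""
        normal = normal / np.linalg.norm(normal)
        # Build two orthonormal axes (u, v) spanning the extraction plane.
        u = np.cross(normal, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-8:           # normal is parallel to the z-axis
            u = np.cross(normal, [0.0, 1.0, 0.0])
        u = u / np.linalg.norm(u)
        v = np.cross(normal, u)
        # Express the foot of each perpendicular in (u, v) plane coordinates.
        rel = points - center
        pu = rel @ u
        pv = rel @ v
        inside = (np.abs(pu) <= half_w) & (np.abs(pv) <= half_h)
        return points[inside]                  # the local patch region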
  • the local region determination unit compares the entire 3D shape with a reference three-dimensional partial model shape (reference 3D partial model shape; reference patch) prepared in advance.
  • the part that is the most similar to the reference 3D partial model shape in the 3D shape is determined as the 3D local region.
  • In other words, the overall 3D shape and the reference 3D partial model shape are compared, and the portion of the overall 3D shape whose shape is most similar to the reference 3D partial model shape is determined as the 3D local region. This makes it possible to determine a 3D local region in the overall 3D shape easily, without a configuration and operation for acquiring a 2D image and extracting a feature part (2D face feature amount) from it.
  • The 3D feature amount calculation unit converts the local 3D shape information in the 3D local region into predetermined curved surface information (for example, by a method using a Bezier surface) and calculates it as the local region shape information.
  • Because the local region shape information is obtained by converting the local 3D shape information in the 3D local region into curved surface information (for example, curvature) rather than using the raw 3D shape information as it is, dimensional compression is possible and the processing speed is increased.
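  • One way to picture this dimensional compression is to fit a simple parametric surface to the patch and keep only its coefficients and curvatures; the quadratic surface used below is an assumption standing in for the Bezier surface mentioned above.

    import numpy as np

    def quadric_patch_features(patch_points):
        """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a local patch
        (expressed in a patch-local frame) and return the six coefficients
        plus the Gaussian and mean curvature at the patch origin."""
        x, y, z = patch_points[:, 0], patch_points[:, 1], patch_points[:, 2]
        A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        a, b, c, d, e, _ = coeffs
        denom = 1.0 + d**2 + e**2
        K = (4*a*c - b**2) / denom**2                            # Gaussian curvature
        H = (a*(1 + e**2) - b*d*e + c*(1 + d**2)) / denom**1.5   # mean curvature
        return np.concatenate([coeffs, [K, H]])   # compact local shape descriptor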
  • the 3D feature amount calculation unit calculates a 3D face feature amount including information on the relative positional relationship of each 3D local region as the 3D face feature amount.
  • Since the 3D face feature amount includes information on the relative positional relationship of the 3D local regions, it can represent characteristics over the entire face that cannot be captured by the individual features of each 3D local region alone (that is, the global shape information of the face is obtained), and more accurate authentication can be performed.
  • the 3D local area in the overall 3D shape is determined by the local area determining unit so that a plurality of 3D local areas are arranged at positions symmetrical with respect to the face.
  • Since the 3D local regions are thus placed at positions that are symmetrical with respect to the face, the 3D local regions (positions) can be determined efficiently in the overall 3D shape, the processing time is shortened, and data handling is improved.
  • a plurality of 3D local regions are created by the local region determining unit.
  • The 3D local regions in the overall 3D shape are determined so that at least the nose and cheek regions of the face are included.
  • Since the 3D local regions are determined so as to include at least the nose and cheek regions of the face, they can be set while avoiding parts that are hidden by hair (for example, the forehead) or parts that are difficult to measure (for example, the mouth when the person has a mustache). The 3D face feature amounts can therefore be calculated accurately from these 3D local regions, and highly accurate authentication can be performed.
  • The two-dimensional feature amount calculation unit (2D feature amount extraction unit 17b) calculates a 2D face feature amount, which is a two-dimensional feature amount of the face, from the feature part information extracted by the feature part extraction unit.
  • The feature amount comparison unit combines the 2D face feature amount calculated by the 2D feature amount calculation unit and the 3D face feature amount calculated by the 3D feature amount calculation unit, for example by a weighted sum, and compares the combined feature amount (multiple similarity) with the comparison face feature amount.
  • In other words, a total face feature amount, which is a combination of the 2D face feature amount and the 3D face feature amount, is compared with the comparison face feature amount.
  • Since the 2D face feature amount of the face is calculated and the total face feature amount combining the 2D and 3D face feature amounts is compared with the comparison face feature amount, more accurate authentication can be performed using both the 2D and the 3D face feature amounts.
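  • The following sketch illustrates the weighted-sum combination described above; the particular weights, the use of Euclidean distance as the per-modality similarity, and the function names are assumptions for illustration only.

    import numpy as np

    def multiple_similarity(feat2d, feat3d, reg2d, reg3d, w2d=0.4, w3d=0.6):
        """Combine 2D and 3D face similarities by a weighted sum.
        Each similarity is taken here as the negative Euclidean distance
        between the probe feature vector and the registered one."""
        sim2d = -np.linalg.norm(feat2d - reg2d)
        sim3d = -np.linalg.norm(feat3d - reg3d)
        return w2d * sim2d + w3d * sim3d

    # Hypothetical verification rule: accept if the combined similarity
    # exceeds a threshold chosen on validation data.
    # accept = multiple_similarity(f2, f3, r2, r3) > threshold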
  • the 3D feature quantity calculation unit calculates the 3D face feature quantity from the local 3D shape information in the 3D local region including at least a part other than the facial feature part.
  • Since the 3D face feature amounts are calculated from the local 3D shape information in 3D local regions that include at least parts other than the facial feature parts, in authentication that uses both 2D face feature amounts and 3D face feature amounts (multiple authentication), the features of parts that are difficult to extract as 2D face feature amounts can be included as 3D face feature amounts. In other words, what cannot be covered by the 2D face feature amounts can be covered by the 3D face feature amounts, and as a result more accurate authentication can be performed.
  • The information on the feature parts used to calculate the 2D face feature amounts is texture information.
  • The correction unit performs posture variation correction, which is a correction related to the posture of the face, and light source variation correction, which is a correction related to the direction of the light source relative to the face.
  • That is, the posture variation correction and the light source variation correction are applied to the texture information of the feature parts used to calculate the 2D face feature amounts.
  • In the 3D shape acquisition unit, 2D images of the face are captured by at least two imaging devices (cameras CA1 and CA2), and the 3D shape calculation unit performs a corresponding-point search on the 2D images obtained from the imaging devices by computation using the phase-only correlation method and then performs 3D reconstruction to calculate the overall 3D shape.
  • Since the overall 3D shape is calculated from the two 2D images obtained from at least two imaging devices by computation using the phase-only correlation method, the overall 3D shape can be calculated with high accuracy at low cost, without using an expensive 3D imaging device or the like.
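  • A minimal sketch of the phase-only correlation step is given below; windowing, block selection along epipolar lines, and sub-pixel peak estimation, which a practical stereo correspondence search would add, are omitted here.

    import numpy as np

    def phase_only_correlation(block_a, block_b):
        """Phase-only correlation (POC) of two same-sized image blocks.
        The location of the peak of the returned surface gives the
        translational displacement between the blocks."""
        Fa = np.fft.fft2(block_a)
        Fb = np.fft.fft2(block_b)
        cross = Fa * np.conj(Fb)
        cross = cross / (np.abs(cross) + 1e-12)   # keep only the phase
        return np.fft.fftshift(np.real(np.fft.ifft2(cross)))

    def displacement(block_a, block_b):
        poc = phase_only_correlation(block_a, block_b)
        peak = np.unravel_index(np.argmax(poc), poc.shape)
        center = np.array(poc.shape) // 2
        return np.array(peak) - center            # (dy, dx) between the blocks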
  • The 3D face feature amount calculated by the three-dimensional feature amount calculation unit is a vector quantity (3D feature vector), and the storage unit (storage unit 3) stores a comparison vector quantity (comparison 3D feature vector) as the corresponding comparison face feature amount.
  • That is, because the data stored as the comparison face feature amount is a feature vector rather than the measured, so-called dense 3D shape data (3D face shape data), the amount of data to be stored can be reduced (less memory capacity is required) and the data becomes easier to handle.
  • In the embodiment described above, the multiple similarity is calculated based on the 2D face feature amount and the 3D face feature amount, and face verification or face identification is performed based on that multiple similarity.
  • Alternatively, the system may be configured so that the similarity is calculated based on the local region shape information and the global region shape information, and the face verification or face identification decision is made based on that similarity.
  • FIG. 16 is a functional block diagram for explaining the face authentication function provided in another controller.
  • FIG. 17 is a flowchart showing an example of the operation of the authentication system shown in FIG.
  • The authentication system of this embodiment differs from the authentication system 1 shown in FIGS. 1 to 3 in that it includes the controller 30 shown in FIG. 16 instead of the controller 10. The description of the schematic configuration of the authentication system shown in FIG. 1 and of the overall configuration of the controller shown in FIG. 2 is therefore omitted, and the functional blocks of the controller 30 are described below.
  • The controller 30 functionally includes an image input unit 31, a face region detection unit 32, a face part detection unit 33, a face part 3D calculation unit 34, and a face region 3D calculation unit 35.
  • The image input unit 31 (first and second image input units 31a and 31b), the face region detection unit 32 (first and second face region detection units 32a and 32b), the face part detection unit 33 (first and second face part detection units 33a and 33b), the face part 3D calculation unit 34, and the face region 3D calculation unit 35 are the same as the image input unit 11 (first and second image input units 11a and 11b), the face region detection unit 12 (first and second face region detection units 12a and 12b), the face part detection unit 13 (first and second face part detection units 13a and 13b), the face part 3D calculation unit 14, and the face region 3D calculation unit 18, respectively, and their description is therefore omitted.
  • The 3D local region extraction unit 36 extracts (calculates) three-dimensional local regions from the dense 3D face shape data calculated by the face region 3D calculation unit 35 and the 3D face part shape data (feature parts) calculated by the face part 3D calculation unit 34.
  • That is, the 3D local region extraction unit 36 is the same as the 3D local patch extraction unit 19a of the three-dimensional authentication unit 19 shown in FIG. 3, and extracts (calculates) three-dimensional local patch regions from the 3D face shape data and the 3D face part shape data (feature parts).
  • This extraction of a three-dimensional local patch region can be performed, for example, by a method that extracts, as a local patch region, the region of the dense 3D face shape data whose perpendiculars dropped onto a plane fall within a partial region of a predetermined shape set in that plane; a method that extracts, as a local patch region, the region of the dense 3D face shape data most similar to a reference model shape; a method that determines, as a local patch region, the region of the 3D face shape data included in a region defined in advance on the two-dimensional image; or a method that uses the shape of a standard model calculated from an average face.
  • The local region information calculation unit 37 extracts (calculates) local region shape information from the information of each individual 3D local region (local patch region) extracted by the 3D local region extraction unit 36.
  • That is, the local region information calculation unit 37 uses the information of each 3D local region (local patch region) alone, extracted by the 3D local region extraction unit 36, to extract the three-dimensional feature amount specific to the facial feature part (local 3D face feature amount).
  • As the local 3D face feature amount extraction method, for example, an extraction method similar to that of the 3D feature amount extraction unit 19b of the three-dimensional authentication unit 19 shown in FIG. 3 can be applied.
  • For example, a method of extracting the curvatures at a plurality of points on the curved surface of the local patch region as the local 3D face feature amount, or a method of extracting the shape information (curved surface information) of a curved surface approximating the shape of the local patch region as the local 3D face feature amount, can be used.
  • Alternatively, a method may be used in which, using the standard model, registration is performed for each local patch region and the distances between the standard model and the local patch region are then extracted as the local 3D face feature amount. More specifically, a plurality (N) of definition points h defined in advance on the standard local region model used in the 3D local region extraction unit 36 are used.
  • The global region information calculation unit 38 extracts (calculates) global region shape information from the information of the three-dimensional local regions (local patch regions) extracted by the 3D local region extraction unit 36.
  • That is, the global region information calculation unit 38 extracts (calculates), from the information of the 3D local regions (local patch regions) extracted by the 3D local region extraction unit 36, the three-dimensional feature amounts characteristic of the face as a whole (global 3D face feature amounts).
  • The global region shape information is the counterpart of the local region shape information and is a feature amount of the three-dimensional shape of the entire face of the person to be authenticated.
  • The global region shape information is calculated based on, for example, <1> the three-dimensional local regions of the face, <2> the shape of the entire face, or <3> the three-dimensional feature points of the face.
  • <1> to <3> will now be described more specifically.
  • Examples of calculation methods for calculating the global 3D face feature amount based on the local patch regions of the face include the following <1-1> and <1-2>.
  • The global 3D face feature amount can be calculated based on the normals of the local patch region. More specifically, the standard model and the local patch region are first aligned by SRT fitting; RT registration then aligns the local patch regions to be compared more accurately, and the corresponding points are recalculated. A normal is obtained for each of the N corresponding points thus obtained.
  • SRT fitting is a process that aligns the feature points in the measurement data with the feature points of the standard model.
  • SRT fitting is a process of applying an affine transformation to the standard model data, using Equations 14-1 and 14-2, so that the distance energy between the feature points of the standard model and the feature points of the measurement data is minimized.
  • Here, M denotes the feature points of the standard model, C denotes the feature points of the measurement data, K is the number of feature points, and f(M, C) denotes the distance energy between the feature points of the standard model and those of the measurement data.
  • a transformation matrix that minimizes the distance energy is obtained by, for example, the least square method, and the position of the standard model data after the transformation is obtained. In addition, the projection center point of the standard model is also relocated.
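  • As an illustrative stand-in for this least-squares fitting (the patent's Equations 14-1 and 14-2 are not reproduced here), the following sketch estimates a scale, rotation and translation aligning the standard-model feature points M to the measured feature points C by a closed-form least-squares (Umeyama-style) solution.

    import numpy as np

    def srt_fit(M, C):
        """Least-squares scale s, rotation R and translation t aligning the
        standard-model feature points M (K, 3) to the measured feature
        points C (K, 3), i.e. minimizing sum_k ||s*R@M[k] + t - C[k]||^2."""
        mu_m, mu_c = M.mean(axis=0), C.mean(axis=0)
        Mc, Cc = M - mu_m, C - mu_c
        cov = Cc.T @ Mc / len(M)                   # cross-covariance
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                         # keep a proper rotation
        R = U @ S @ Vt
        s = np.trace(np.diag(D) @ S) * len(M) / (Mc ** 2).sum()
        t = mu_c - s * R @ mu_m
        return s, R, t

    def apply_srt(points, s, R, t):
        return s * points @ R.T + t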
  • A covariance matrix B is calculated using the corresponding point groups S′ and T′. The covariance matrix B is given by Equation 15-1, and the matrix A is given by Equation 15-2. Here, s′ = (s′x, s′y, s′z) represents the three-dimensional coordinates of a measurement point, and t′ is defined in the same way for the corresponding registered point.
  • Eigenvalue decomposition of the matrix A is then performed, for example by the Jacobi method, and the eigenvalues and eigenvectors are calculated.
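  • The patent's Equations 15-1 and 15-2 are not reproduced here; as an illustrative assumption, the sketch below follows the widely used quaternion formulation, in which the 3x3 cross-covariance of the corresponding point groups is folded into a symmetric 4x4 matrix whose dominant eigenvector is the unit quaternion of the aligning rotation.

    import numpy as np

    def rotation_from_correspondences(S, T):
        """Rotation aligning measurement points S (N, 3) to registered
        points T (N, 3), estimated from their cross-covariance via a
        quaternion eigen-decomposition (an assumption; the patent's own
        formulation may differ in detail)."""
        Sc, Tc = S - S.mean(axis=0), T - T.mean(axis=0)
        B = Sc.T @ Tc / len(S)                       # 3x3 cross-covariance
        delta = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
        A = np.zeros((4, 4))
        A[0, 0] = np.trace(B)
        A[0, 1:] = delta
        A[1:, 0] = delta
        A[1:, 1:] = B + B.T - np.trace(B) * np.eye(3)
        w, V = np.linalg.eigh(A)                     # eigenvalues / eigenvectors
        q = V[:, np.argmax(w)]                       # quaternion (w, x, y, z)
        w0, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - z*w0),     2*(x*z + y*w0)],
            [2*(x*y + z*w0),     1 - 2*(x*x + z*z), 2*(y*z - x*w0)],
            [2*(x*z - y*w0),     2*(y*z + x*w0),    1 - 2*(x*x + y*y)]])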
  • The global 3D face feature amount is obtained by subjecting the extracted local patch regions to SRT fitting against the standard model and using the deformation parameter S among the deformation (S), translation (T) and rotation (R) parameters of the SRT fitting.
  • The deformation parameter S is the parameter that deforms the shape of the standard model so that the definition points on the standard model are fitted to the feature points on the local patch region, that is, to the shape of the local patch region. If the feature points used for the SRT fitting match the feature points of the face almost exactly, the deformation parameter S is considered to represent the individual, because the size of the same person's face (its width, height, depth and so on) does not change.
  • Moreover, the same value of the deformation parameter S is calculated even if imaging conditions such as magnification and exposure change.
  • The deformation parameter S need not use all the local patch regions; for example, a plurality of local patch regions including the nose, which is obtained stably, may be collected and SRT fitting may be performed on them.
  • Another global 3D face feature amount is given by computing, for each of a plurality of points defined in advance on the standard model, the distance to the corresponding measurement point on the local patch region, and taking the average of those distances.
  • More specifically, the standard model and the local patch region are first aligned by SRT fitting.
  • Then, the plurality of measurement points on the local patch region corresponding to the plurality (N) of points H = (h_1, h_2, ..., h_N) defined in advance on the standard model are obtained.
  • The distances d(h, s) between the point group H and the corresponding point group S on the local patch region are obtained, and the average of these distances d(h, s) is taken as the global 3D face feature amount dist_b (see Equation 12). As long as the SRT fitting makes the alignment approximately correct, the corresponding points S′ are obtained consistently for each subject.
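  • A sketch of this average-distance feature is shown below; the nearest-neighbor correspondence used here is an assumption standing in for the correspondence obtained after SRT fitting.

    import numpy as np

    def average_model_to_patch_distance(model_points, patch_points):
        """Global 3D face feature dist_b: average distance between the points
        h_1 ... h_N defined on the (already fitted) standard model and their
        corresponding points on the local patch region."""
        dists = []
        for h in model_points:
            d = np.linalg.norm(patch_points - h, axis=1)
            dists.append(d.min())          # distance d(h, s) to the nearest patch point
        return float(np.mean(dists))       # the feature dist_b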
  • The global 3D face feature amount may also be obtained by projecting a plurality of points defined in advance on the standard model onto the local patch region, processing the registered data in the same way to obtain projected points, calculating the distance for each pair of projected points, and taking the average of those distances.
  • the standard model and the local patch region are aligned by SRT fitting.
  • the global 3D facial feature value is given for each local patch area by finding the average value of the distances between corresponding points in the local patch areas to be compared (measurement data and registration data).
  • the standard model and the local patch region are aligned by SRT fitting.
  • the alignment between the local patch regions to be compared is performed with higher accuracy by the RT registration.
  • The global 3D face feature amount is given by calculating, for each local patch region, the average value of the distances between mutually corresponding points in the local patch regions to be compared.
  • the standard model and the local patch region are aligned by SRT fitting.
  • The alignment between the local patch regions to be compared is performed with higher accuracy by RT registration, yielding N corresponding points after alignment.
  • The global 3D face feature amount is given by obtaining, for each local patch region, the variance of the distances between mutually corresponding points in the local patch regions to be compared.
  • the standard model and the local patch region are aligned by SRT fitting.
  • the alignment between the local patch regions to be compared is performed with higher accuracy by the RT registration.
  • The registered point group T = {t_i | i = 1, ..., N} consisting of the N points after alignment and the corresponding measurement point group S consisting of N points are used.
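  • The per-patch mean and variance features described above can be pictured with the following sketch, assuming the two patches have already been brought into one-to-one point correspondence by SRT fitting and RT registration.

    import numpy as np

    def patch_distance_statistics(S, T):
        """Mean and variance of the corresponding-point distances between the
        measurement point group S (N, 3) and the registered point group
        T (N, 3) of one local patch region."""
        d = np.linalg.norm(S - T, axis=1)   # distance of each corresponding pair
        return float(d.mean()), float(d.var())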
  • The global patch shape information can also be compared by treating the data as a dense data group within the local patch regions and as a coarse (sparse) data group in the other regions.
  • the global 3D face feature is calculated based on the line (feature extraction line) defined in the local patch area. More specifically, a line (feature extraction line) is defined in a predetermined local patch region set in advance.
  • The feature extraction line is defined, for example, from a plurality of feature points of the 3D face part shape data. It is desirable that the feature extraction line be defined in a part where the 3D shape features of the face that enhance the authentication accuracy appear strongly.
  • For example, the feature extraction line is a line that includes the undulations of the facial relief, such as a line crossing the nose.
  • The feature extraction line defined in the local patch region is projected onto the 3D face shape data, and the points of the 3D face shape data corresponding to a plurality of points on the feature extraction line are obtained.
  • The feature extraction line may also be extended from the local patch region to outside the local patch region in order to further increase the authentication accuracy.
  • A feature extraction line may be defined for each of a plurality of local patch regions to further improve the authentication accuracy, a point group on the 3D face shape data being obtained from each feature extraction line; the collection of these point groups may then be used as the global 3D face feature amount.
  • The plurality of points on the feature extraction line may be equally spaced or unequally spaced.
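  • The sampling of a feature extraction line can be sketched as follows; the nearest-point lookup used in place of a true projection onto the 3D face shape data, and the choice of 32 samples, are assumptions for illustration.

    import numpy as np

    def feature_line_profile(face_points, start, end, num_samples=32):
        """Sample a feature extraction line running from `start` to `end`
        (two 3D feature points, e.g. on either side of the nose) and return,
        for each sample, the nearest point of the dense face shape data."""
        t = np.linspace(0.0, 1.0, num_samples)[:, None]
        line_points = (1.0 - t) * start + t * end        # points on the line
        profile = []
        for p in line_points:
            idx = np.argmin(np.linalg.norm(face_points - p, axis=1))
            profile.append(face_points[idx])
        return np.asarray(profile)   # (num_samples, 3) line-shaped feature points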
  • The similarity calculation unit 39 evaluates the similarity between the feature amounts (comparison feature amounts) of the comparison target person registered in advance and the feature amounts of the authentication target person HM calculated above, on the basis of the local region shape information and the global region shape information. The similarity is calculated as a local information similarity D_sl for the local region shape information and a global information similarity D_sb for the global region shape information, each of which can be obtained by calculating the total Euclidean distance between the corresponding feature amounts (see Equation 10).
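  • A minimal sketch of these two similarities, treating each as a total Euclidean distance over the corresponding feature vectors, is shown below (the splitting of the features into lists of per-patch vectors is an assumption for illustration).

    import numpy as np

    def local_and_global_similarity(local_feat, local_reg, global_feat, global_reg):
        """Local information similarity D_sl and global information similarity
        D_sb, each taken as the total Euclidean distance between the probe
        feature vectors and the registered comparison feature vectors
        (smaller distance = more similar)."""
        d_sl = sum(np.linalg.norm(f - r) for f, r in zip(local_feat, local_reg))
        d_sb = sum(np.linalg.norm(f - r) for f, r in zip(global_feat, global_reg))
        return d_sl, d_sb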
  • The registration data storage unit 40 stores in advance, in the same manner as the registration data storage unit 21 shown in FIG. 3, information on the feature amounts (comparison face feature amounts) of the comparison target persons, which is used by the similarity calculation unit 39 to calculate the local information similarity D_sl and the global information similarity D_sb.
  • The comprehensive determination unit 41 performs the authentication determination based on the multiple similarity Re obtained from these similarities.
  • As described above, the authentication decision may be either face verification (matching) or face identification.
  • The comprehensive determination unit 41 may first make a determination based on the local information similarity D_sl; if the difference in similarity in that result is equal to or greater than a threshold, it determines that the person is someone else, and only when the difference is less than the threshold does it make the determination based on the global information similarity D_sb.
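  • The two-stage decision just described can be sketched as follows; how the "difference in similarity" and the thresholds are actually obtained is not specified here, so they are treated as given inputs.

    def two_stage_decision(diff_sl, d_sb, threshold_diff, threshold_sb):
        """If the difference observed for the local information similarity
        D_sl is at or above a threshold, decide 'other person' immediately;
        otherwise decide using the global information similarity D_sb
        (a distance, so smaller means more similar)."""
        if diff_sl >= threshold_diff:
            return False                   # judged to be another person
        return d_sb <= threshold_sb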
  • First, a face image of the person to be authenticated HM is acquired by photographing with the cameras CA1 and CA2 (step S31).
  • The two face images thus obtained are input to the controller 30 (image input unit 31) (step S32). Then, the face region detection unit 32 detects a face region image from the face images input to the image input unit 31 (step S33), and from the detected face region image, the face part detection unit 33 detects the feature parts of the face, that is, the coordinates of the feature points (step S34).
  • Next, the face part 3D calculation unit 34 calculates the 3D coordinates (3D face part shape data) of each feature part from the coordinates of the feature parts of the face (feature point coordinates) detected by the face part detection unit 33 (step S35).
  • The face region 3D calculation unit 35 calculates 3D face shape data composed of a plurality of 3D points from the face region images (stereo images) detected by the face region detection unit 32 (step S36).
  • The 3D local region extraction unit 36 then calculates three-dimensional local regions (local patch regions) from the dense 3D face shape data calculated by the face region 3D calculation unit 35 and the 3D face part shape data calculated by the face part 3D calculation unit 34 in step S35 described above (step S37).
  • The local region information calculation unit 37 calculates the local region shape information, in this embodiment the 3D face feature amounts, from the information of each three-dimensional local region (local patch region) alone extracted by the 3D local region extraction unit 36 (step S38).
  • The global region information calculation unit 38 calculates the global region shape information, in this embodiment the global 3D face feature amounts, from the information of the 3D local regions (local patch regions) extracted by the 3D local region extraction unit 36 (step S39).
  • The similarity calculation unit 39 evaluates the similarity by comparing the feature amounts (comparison feature amounts) of the comparison target persons registered in advance with the local region shape information and the global region shape information calculated in steps S38 and S39 (step S40). Then, based on the multiple similarity Re, the comprehensive determination unit 41 makes the face verification or face identification decision (step S41).
  • [0153] When the degree of coincidence of shapes is compared for each local region by a method such as alignment, the relative positional relationship between the local regions is not reflected in the comparison.
  • If the shape matching accuracy of the local regions is high, the error becomes small even when the relative positional relationship between the local regions differs greatly, so that the error for other people also becomes small and the authentication accuracy decreases. Therefore, when the degree of shape matching is compared by a method such as alignment that treats a plurality of local regions as one global region, information on the relative positions between the local regions is included in addition to the shape comparison for each local region, and the authentication accuracy is expected to improve.
  • From this point of view, a three-dimensional face authentication method based on the ICP algorithm, such as that of Japanese Patent Application Laid-Open No. 2007-164670, is an effective technique.
  • In practice, however, the ICP algorithm is problematic in terms of processing time and of turning the result into a feature quantity.
  • In the present embodiment, the shape information of the face is divided into local regions and separated into global region shape information and local region shape information, so the data amount and the processing time can be reduced. Since the global region shape information is also used, authentication can be performed with higher accuracy.
  • the present embodiment can take the following aspects.
  • The area set on the local patch extraction plane T does not have to be rectangular like the rectangular region S; as long as it is a partial region on the local patch extraction plane T, its shape may be arbitrary. Likewise, the shape of the feature portion need not be a rectangle and may be arbitrary.
  • The method for determining the local patch region from the rectangular region S is not limited to the method of taking, as the local patch region, the 3D points whose perpendiculars dropped onto the local patch extraction plane T have their feet inside the rectangular region S; various other methods can be employed. For example, instead of dropping a perpendicular from each 3D point onto the local patch extraction plane T, lines may be dropped at a predetermined angle with respect to the plane T.
  • Alternatively, a method may be used in which virtual lines, for example radial lines, are emitted from the rectangular region S in a predetermined direction, and the range on the 3D shape that intersects (contacts) those lines is taken as the local patch region.
  • the authentication system 1 does not have to be separated into the controller 10 and the cameras CA1 and CA2.
  • a configuration in which each camera is built directly in the controller 10 may be employed.
  • In that case, each camera is built in with an arrangement such that the person to be authenticated HM can be photographed from different angles.
  • An authentication system according to one aspect includes a local region determination unit that determines a plurality of three-dimensional local regions, which are local regions of the person to be authenticated; a three-dimensional feature amount calculation unit that calculates, from the local 3D shape information in each 3D local region determined by the local region determination unit, a 3D face feature amount, which is local region shape information related to the shape of each 3D local region and is a three-dimensional feature amount of the face; and a feature amount comparison unit that compares the 3D face feature amount calculated by the 3D feature amount calculation unit with a comparison face feature amount prepared in advance, in order to perform the authentication operation for the person to be authenticated.
  • the authentication system further includes a three-dimensional shape acquisition unit that acquires information on the entire three-dimensional shape that is an overall three-dimensional shape of the face of the person to be authenticated.
  • the local region determination unit determines a plurality of 3D local regions, which are local regions in the entire 3D shape, from the entire 3D shape information acquired by the 3D shape acquisition unit.
  • the local region determination unit determines a plurality of three-dimensional local regions that are local regions of the person to be authenticated.
  • According to this configuration, the 3D shape acquisition unit acquires information on the entire 3D shape, which is the overall 3D shape of the face of the person to be authenticated, and the local region determination unit determines, from the entire 3D shape information acquired by the 3D shape acquisition unit, the plurality of 3D local regions that are local regions in the entire 3D shape. The 3D feature amount calculation unit then calculates, from the local 3D shape information in each 3D local region determined by the local region determination unit, the 3D face feature amount, which is the local region shape information on the shape of that 3D local region and is a three-dimensional feature amount of the face. Finally, the feature amount comparison unit compares the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit with the comparison face feature amount prepared in advance, in order to perform the authentication operation for the person to be authenticated.
  • The 3D shape acquisition unit includes a 2D image acquisition unit that acquires a 2D image of the face, and the three-dimensional local region is determined based on the result of a feature part extraction unit that extracts a two-dimensional feature part, which is a characteristic part of the face, from the two-dimensional image acquired by the 2D image acquisition unit.
  • the three-dimensional shape acquisition unit includes a three-dimensional coordinate calculation unit that calculates the three-dimensional coordinates of the feature part extracted by the feature part extraction unit.
  • the local region determination unit determines the three-dimensional local region based on the three-dimensional coordinates of the feature portion calculated by the three-dimensional coordinate calculation unit.
  • According to this configuration, the 3D shape acquisition unit includes the 2D image acquisition unit that acquires the 2D image of the face, and a three-dimensional local region is determined based on the result of the feature part extraction unit that extracts a two-dimensional feature part, which is a characteristic part of the face, from the 2D image acquired by the 2D image acquisition unit.
  • That is, the feature part extraction unit extracts the characteristic part of the face from the two-dimensional image acquired by the two-dimensional image acquisition unit, the three-dimensional coordinates of the extracted feature part are calculated, and the three-dimensional local region is determined by the local region determination unit based on the three-dimensional coordinates calculated by the three-dimensional coordinate calculation unit.
  • According to this authentication system, when determining a three-dimensional local region, it can therefore be associated with the information of a two-dimensional feature part, and highly accurate authentication can be performed by using the feature part information together with the three-dimensional local region information.
  • the feature part extraction unit further includes a two-dimensional local region extraction unit that extracts a local region on a two-dimensional image from the extracted two-dimensional feature part.
  • The local region determination unit determines the 3D local region based on the 2D local region calculated by the 2D local region extraction unit.
  • the local region determination unit calculates and extracts only a region corresponding to the two-dimensional local region as the three-dimensional local region.
  • The local region determination unit sets a partial region of a predetermined shape in a plane determined from the three-dimensional coordinates, and determines the region of the overall three-dimensional shape corresponding to that partial region as the three-dimensional local region.
  • the local region determination unit sets a partial region of a predetermined shape in a plane determined from the three-dimensional coordinates, and the region corresponding to the partial region in the entire three-dimensional shape is three-dimensional. Determined as local region.
  • a three-dimensional local region can be easily determined from the three-dimensional coordinates of a feature part using a simple method.
  • the overall 3D shape information is face shape data composed of a plurality of 3D points
  • The local region determination unit determines, as the three-dimensional local region, a region composed of the three-dimensional points whose perpendiculars, dropped virtually onto the plane, are included in the partial region.
  • According to this configuration, the entire 3D shape information is face shape data composed of a plurality of 3D points, and the local region determination unit drops a virtual perpendicular from each 3D point onto the plane and determines, as the 3D local region, the region composed of the 3D points whose perpendiculars are included in the partial region.
  • the three-dimensional local region corresponding to the partial region can be easily determined using a simple method.
  • the local region determination unit compares the overall three-dimensional shape with a reference three-dimensional partial model shape prepared in advance, and A portion having a shape most similar to the reference three-dimensional partial model shape in the three-dimensional shape is determined as the three-dimensional local region.
  • According to this configuration, the local region determination unit compares the entire three-dimensional shape with the reference three-dimensional partial model shape prepared in advance, and the portion of the entire three-dimensional shape whose shape is most similar to the reference three-dimensional partial model shape is determined as the three-dimensional local region.
  • [0177] Therefore, according to this authentication system, a configuration and operation for acquiring a two-dimensional image and extracting a feature part (two-dimensional face feature amount) from that image are not required, and a 3D local region in the overall 3D shape can be determined easily.
  • The local region determination unit may include a same-space conversion unit for converting the entire three-dimensional shape and the local region information defined on a reference three-dimensional partial model shape prepared in advance into the same space, and may determine the three-dimensional local region by comparing the inclusion relationship between the entire three-dimensional shape and the reference three-dimensional partial model shape in the same space obtained by the same-space conversion unit.
  • The three-dimensional local region determination unit may determine the three-dimensional local region by comparing the inclusion relationship between a three-dimensional surface on the reference three-dimensional model and a three-dimensional surface of the entire three-dimensional shape.
  • Alternatively, the 3D local region determination unit may determine the three-dimensional local region by comparing the inclusion relationship between a 3D surface on the reference 3D model and the 3D coordinate points of the overall 3D shape.
  • Alternatively, the 3D local region determination unit may determine the three-dimensional local region by comparing the inclusion relationship between 3D coordinate points on the reference 3D model and a 3D surface of the overall 3D shape.
  • The 3D local regions determined by the local region determination unit are kept as dense data, and the regions determined to be outside the 3D local regions are kept as sparse data.
  • According to this configuration, the local region determination unit converts the entire 3D shape and the local region information defined on the reference 3D partial model shape prepared in advance into the same space by the same-space conversion unit, compares the inclusion relationship between the entire 3D shape and the reference 3D partial model shape in that converted space, and determines the three-dimensional local region according to the comparison result. Therefore, according to this authentication system, the 3D local region in the entire 3D shape can be determined easily.
  • The three-dimensional feature amount calculation unit calculates the local three-dimensional shape information from the three-dimensional local region as the local region shape information.
  • Alternatively, the three-dimensional feature amount calculation unit converts the local three-dimensional shape information in the three-dimensional local region into predetermined curved surface information and calculates it as the local region shape information.
  • Alternatively, the three-dimensional feature amount calculation unit converts the local three-dimensional shape information in the three-dimensional local region into a vector of distance information between definition points defined on a standard model and the corresponding points in the three-dimensional local region, and calculates this as the local region shape information.
  • According to these configurations, the 3D feature amount calculation unit calculates, as the local region shape information, either the local 3D shape information from the 3D local region, the local 3D shape information converted into predetermined curved surface information, or the vector obtained from the distance information between the definition points on the standard model and the corresponding points in the 3D local region.
  • Since the 3D shape information is not used as it is, but the local 3D shape information calculated from the 3D local region is converted and handled, for example, as curved surface information (such as curvature), dimensional compression is possible and the processing speed is increased.
  • The three-dimensional feature amount calculation unit calculates, as the three-dimensional face feature amount, a feature amount that includes information on the positional relationship between the three-dimensional local regions.
  • the 3D feature amount calculation unit calculates a 3D face feature amount including information on the relative positional relationship of each three-dimensional local region as the 3D face feature amount.
  • The local region determination unit determines the three-dimensional local regions in the entire three-dimensional shape so that the plurality of three-dimensional local regions are arranged at positions that are left-right symmetrical with respect to the face.
  • the local region determination unit determines the three-dimensional local region in the entire three-dimensional shape so that the plurality of three-dimensional local regions are arranged at positions that are symmetrical with respect to the face.
  • The local region determination unit determines the three-dimensional local regions in the entire three-dimensional shape so that the plurality of three-dimensional local regions include at least the nose and cheek regions of the face.
  • According to this configuration, the three-dimensional local regions in the entire three-dimensional shape are determined by the local region determination unit so that they include at least the nose and cheek regions of the face.
  • The three-dimensional local regions can therefore be set while avoiding, for example, parts hidden by hair (such as the forehead) or parts that are difficult to measure (such as the mouth when the person has a mustache), so the 3D face feature amounts can be calculated accurately from these 3D local regions and highly accurate authentication can be performed.
  • A two-dimensional feature amount calculation unit calculates a two-dimensional face feature amount, which is a two-dimensional feature amount of the face, from the feature part information extracted by the feature part extraction unit.
  • The feature amount comparison unit compares a total face feature amount, which is a combination of the two-dimensional face feature amount calculated by the two-dimensional feature amount calculation unit and the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit, with the comparison face feature amount.
  • According to this configuration, the two-dimensional face feature amount of the face is calculated by the two-dimensional feature amount calculation unit from the feature part information extracted by the feature part extraction unit, and the feature amount comparison unit compares the total face feature amount, which combines the 2D face feature amount and the 3D face feature amount calculated by the 3D feature amount calculation unit, with the comparison face feature amount.
  • The three-dimensional feature amount calculation unit calculates the three-dimensional face feature amount from the local three-dimensional shape information in a three-dimensional local region that includes at least a part other than the facial feature parts.
  • the 3D feature quantity calculation unit calculates the 3D face feature quantity from the local 3D shape information in the 3D local region including at least a part other than the facial feature part.
  • The feature part information for calculating the two-dimensional face feature amount is texture information, and a correction unit is further provided that performs, on the texture information, posture variation correction, which is a correction related to the posture of the face, and light source variation correction, which is a correction related to the direction of the light source relative to the face.
  • According to this configuration, the feature part information for calculating the two-dimensional face feature amount is texture information, and the correction unit applies to that texture information the posture variation correction related to the posture of the face and the light source variation correction related to the direction of the light source with respect to the face.
  • According to this authentication system, an appropriate two-dimensional face feature amount can be obtained based on the texture information that has undergone the posture variation correction and the light source variation correction, so more accurate authentication can be performed.
  • The three-dimensional shape acquisition unit includes at least two photographing devices that photograph two-dimensional images of the face, and a three-dimensional shape calculation unit that calculates the entire three-dimensional shape by performing a high-precision corresponding-point search on the two two-dimensional images obtained from the photographing devices, using computation based on the phase-only correlation method, and then performing three-dimensional reconstruction.
  • According to this configuration, the 3D shape acquisition unit captures 2D images of the face with at least two imaging devices, and the 3D shape calculation unit performs a high-accuracy corresponding-point search on the two 2D images obtained from the imaging devices by computation using the phase-only correlation method and calculates the entire three-dimensional shape by performing three-dimensional reconstruction.
  • this authentication system it is possible to calculate the entire three-dimensional shape with high accuracy by the phase-only correlation method at a low cost without using an expensive three-dimensional imaging apparatus or the like.
  • The three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit is a vector quantity, and a storage unit is further provided that stores a comparison vector quantity as the comparison face feature amount corresponding to that vector quantity.
  • According to this configuration, the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit is a vector quantity, and the storage unit stores the comparison vector quantity as the comparison face feature amount corresponding to that vector quantity.
  • Since the data stored as the comparison face feature amount by the storage unit is this vector quantity rather than the measured, so-called dense three-dimensional shape data, the data amount can be reduced (less memory capacity is required) and data handling becomes easier.
  • An authentication system according to another, particularly effective aspect further includes a global three-dimensional feature amount calculation unit that calculates, based on the 3D local regions determined by the local region determination unit, a global 3D face feature amount, which is global region shape information relating to the shape of the 3D global region that is a global region in the overall 3D shape and is a three-dimensional feature amount of the face.
  • The feature amount comparison unit compares the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with a comparison global face feature amount prepared in advance, in order to perform the authentication operation for the person to be authenticated.
  • According to this configuration, the global three-dimensional feature amount calculation unit calculates, based on the 3D local regions determined by the local region determination unit, the global region shape information on the shape of the 3D global region, which is the global region of the entire 3D shape, as the global 3D face feature amount that is a three-dimensional feature amount of the face. The feature amount comparison unit then compares the global 3D face feature amount calculated by the global 3D feature amount calculation unit with the comparison global face feature amount prepared in advance, in order to perform the authentication operation for the person to be authenticated.
  • In other words, the system further includes a global 3D face feature amount calculation unit that calculates a global 3D face feature amount, which is global information on the entire three-dimensional shape and is a global three-dimensional feature amount of the face.
  • The global three-dimensional feature amount calculation unit extracts information on the deformation parameter of the standard model calculated based on the three-dimensional feature point information defined on the three-dimensional local regions.
  • Alternatively, the global three-dimensional feature amount calculation unit extracts distance information between a three-dimensional local standard model, calculated based on the three-dimensional feature point information defined on the three-dimensional local regions, and the three-dimensional local region.
  • Alternatively, the global three-dimensional feature amount calculation unit extracts distance information between the three-dimensional local regions, calculated based on the three-dimensional feature point information defined on the three-dimensional local regions.
  • According to these configurations, the global 3D feature amount calculation unit calculates, based on the 3D feature point information defined on the 3D local regions determined by the local region determination unit, a global 3D face feature amount, which is a 3D feature amount of the face serving as global information on the 3D shape.
  • Specifically, the global 3D feature amount calculation unit extracts the information on the deformation parameter of the standard model, the distance information between the 3D local standard model and the 3D local region, or the distance information between the 3D local regions, each calculated based on the 3D feature point information defined on the 3D local regions. The feature amount comparison unit then compares the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with the comparison global face feature amount prepared in advance, in order to perform the authentication operation for the person to be authenticated.
  • The system may further include a global 3D feature amount calculation unit that extracts the three-dimensional local regions determined by the local region determination unit in a line shape and calculates, based on the extracted line-shaped three-dimensional local regions, a global 3D face feature amount as a shape vector of the 3D global region, which is a global region in the overall 3D shape.
  • According to this configuration, the global three-dimensional feature amount calculation unit extracts the three-dimensional local regions determined by the local region determination unit in a line shape and, based on the extracted line-shaped three-dimensional local regions, calculates a global 3D face feature amount as a shape vector of the 3D global region that is a global region in the overall 3D shape.
  • The feature amount comparison unit compares the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with the comparison global face feature amount prepared in advance, in order to perform the authentication operation for the person to be authenticated.
  • Since the global 3D face feature amount is used for face authentication in this way, the authentication accuracy can be further improved.
  • the global area shape information can be compressed by a so-called data compression technique, and the data amount can be reduced.
  • global 3D facial features are calculated based on 3D local regions. Therefore, it is possible to calculate global information unique to the 3D local region.
  • the amount of data in the 3D local area is less than the 3D shape data of the entire face.
  • Since the 3D local regions are determined based on the 3D coordinates of the feature parts, it becomes possible to select, from the entire 3D shape data, only the global information between the feature points of the face parts.
  • the global three-dimensional feature amount calculation unit calculates center-of-gravity information regarding the three-dimensional local region as the global three-dimensional face feature amount.
  • the global three-dimensional feature amount calculation unit calculates normal information related to the three-dimensional local region as the global three-dimensional face feature amount.
  • The feature amount comparison unit compares the 3D face feature amount calculated by the three-dimensional feature amount calculation unit with the comparison face feature amount prepared in advance according to the result of the comparison between the global three-dimensional face feature amount and the comparison global face feature amount.
  • According to this configuration, if the comparison result shows that the global 3D face feature amount and the comparison global face feature amount differ, the comparison between the 3D face feature amount and the comparison face feature amount can be omitted, so the authentication processing time is shortened and authentication can be performed faster.
  • The feature amount comparison unit calculates, in order to perform the authentication operation for the person to be authenticated, a total comparison result by integrating a global comparison result, obtained by comparing the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with the comparison global face feature amount prepared in advance, and a local comparison result, obtained by comparing the 3D face feature amount calculated by the 3D feature amount calculation unit with the comparison face feature amount.
  • According to this configuration, authentication is performed based on the total comparison result obtained by integrating the global comparison result and the local comparison result, so the comparison results can compensate for each other and authentication can be performed with high accuracy.
  • An authentication method according to another aspect includes a first step of acquiring information on the entire three-dimensional shape, which is the overall three-dimensional shape of the face of the person to be authenticated; a second step of determining, from the entire three-dimensional shape information, a plurality of three-dimensional local regions that are local regions in the entire three-dimensional shape; a third step of calculating, from the local 3D shape information in each 3D local region, a 3D face feature amount that is local region shape information relating to the shape of each 3D local region and is a three-dimensional feature amount of the face; and a fourth step of comparing the 3D face feature amount with a comparison face feature amount prepared in advance.
  • the first step information on the entire three-dimensional shape that is the entire three-dimensional shape of the face of the person to be authenticated is acquired.
  • the second step a plurality of three-dimensional local regions that are local regions in the whole three-dimensional shape are determined from the whole three-dimensional shape information.
  • in the third step, a three-dimensional face feature amount, which is local region shape information relating to the shape of each 3D local region and is a 3D feature amount of the face, is calculated from the local 3D shape information in each 3D local region.
  • in the fourth step, the three-dimensional face feature amount is compared with the comparison face feature amount prepared in advance so as to perform the authentication operation for the person to be authenticated.
  • in this way, a plurality of 3D local regions are determined from the entire 3D shape of the face of the person to be authenticated, 3D face feature amounts are calculated from the local 3D shape information in each 3D local region, and those 3D face feature amounts are compared with the comparison face feature amounts to perform the authentication operation. In other words, the information on the entire 3D shape of the face is not used as it is: multiple local regions (3D local regions) are extracted from the 3D shape of the entire face, and authentication is performed based on these extracted 3D local regions. Therefore, even if concealment or the like occurs on part of the face, the decrease in authentication accuracy is reduced and the authentication speed is improved (an end-to-end sketch of these four steps follows this list).
  • the first step includes a fifth step of acquiring a two-dimensional image of the face; in a sixth step, characteristic parts of the face are extracted from the two-dimensional image; in a seventh step, the three-dimensional coordinates of the characteristic parts are calculated; and in the second step, the determination of the three-dimensional local regions is made based on the three-dimensional coordinates of the characteristic parts (see the stereo triangulation sketch following this list).
  • according to this authentication method, when the three-dimensional local regions are determined, they can be associated with the information of the two-dimensional feature regions, so that the feature region information can be used together with the information of the three-dimensional local regions, and high-accuracy authentication can be performed.
  • the method further includes an eighth step of calculating a two-dimensional face feature amount, which is a two-dimensional feature amount of the face, from the information on the characteristic parts,
  • and the fourth step is a step of comparing a total face feature amount, which is a combination of the two-dimensional face feature amount and the three-dimensional face feature amount, with the comparison face feature amount.
  • that is, a two-dimensional face feature amount, which is a two-dimensional feature amount of the face, is calculated from the characteristic part information, and in the fourth step the total face feature amount, which is a combination of the two-dimensional face feature amount and the three-dimensional face feature amount, is compared with the comparison face feature amount (see the feature-combination sketch following this list).
  • the method further includes a ninth step of calculating a global three-dimensional face feature amount, which is global region shape information relating to the shape of a three-dimensional global region that is a global region in the entire three-dimensional shape, and which is a three-dimensional feature amount of the face; in the fourth step, the global three-dimensional face feature amount calculated in the ninth step is compared with a comparison-use global face feature amount prepared in advance so as to perform the authentication operation for the person to be authenticated.
  • that is, in the ninth step, the global three-dimensional face feature amount, which is global region shape information relating to the shape of the three-dimensional global region in the entire three-dimensional shape and is a three-dimensional feature amount of the face, is calculated based on the three-dimensional local regions determined in the second step; then, in the fourth step, the global three-dimensional face feature amount calculated in the ninth step is compared with the comparison-use global face feature amount prepared in advance so as to perform the authentication operation for the person to be authenticated.
  • because the global 3D face feature amount is used for face authentication, the authentication accuracy can be further improved.
  • the global region shape information can be compressed by a so-called data compression technique, so the data amount can be reduced.
  • because the global 3D face feature amount is calculated based on the 3D local regions, global information specific to those local regions can be obtained.
  • the amount of data in the 3D local regions is smaller than that of the 3D shape data of the entire face.
  • because the 3D local regions are determined based on the 3D coordinates of the feature parts, only the global information between the feature points of the face can be selected from the entire 3D shape data.
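
The sketches below are illustrative only, written in Python with NumPy; none of the function names, the spherical region-selection rule, the covariance descriptor, or the threshold come from the publication itself - they are assumptions chosen to make the described steps concrete. First, an end-to-end sketch of the four-step method: acquire the whole-face point cloud (step 1), determine 3D local regions around feature-part coordinates (step 2), compute a local shape descriptor per region (step 3), and compare against enrolled features (step 4).

    import numpy as np

    def determine_local_regions(points, feature_coords, radius=15.0):
        # Step 2 (sketch): for each feature-part 3D coordinate, keep the points
        # of the whole-face cloud inside an illustrative spherical neighbourhood.
        regions = []
        for c in feature_coords:
            mask = np.linalg.norm(points - c, axis=1) < radius
            regions.append(points[mask])
        return regions

    def local_3d_feature(region):
        # Step 3 (sketch): a stand-in local-region shape descriptor - the
        # centroid-centred covariance of the region's points, flattened.
        if len(region) == 0:
            return np.zeros(9)
        centered = region - region.mean(axis=0)
        return (centered.T @ centered / len(region)).ravel()

    def authenticate(points, feature_coords, enrolled_features, threshold=1.0):
        # Step 4 (sketch): compare each probe feature with the enrolled one and
        # accept when the mean distance is below an illustrative threshold.
        regions = determine_local_regions(points, feature_coords)
        probe = [local_3d_feature(r) for r in regions]
        dists = [np.linalg.norm(p - e) for p, e in zip(probe, enrolled_features)]
        return float(np.mean(dists)) < threshold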
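
The stereo triangulation sketch referred to above (fifth to seventh steps): the characteristic parts are assumed to have already been located in a calibrated stereo pair by some 2D detector, and their 3D coordinates are recovered by standard linear (DLT) triangulation. The camera matrices and the example point in the usage block are made up for the demonstration.

    import numpy as np

    def triangulate(P_left, P_right, x_left, x_right):
        # Step 7 (sketch): linear (DLT) triangulation of one feature part from
        # its pixel coordinates in the left and right images.
        A = np.vstack([
            x_left[0] * P_left[2] - P_left[0],
            x_left[1] * P_left[2] - P_left[1],
            x_right[0] * P_right[2] - P_right[0],
            x_right[1] * P_right[2] - P_right[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    if __name__ == "__main__":
        # Made-up calibration: identical intrinsics, right camera shifted 60 mm.
        K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
        P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P_right = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])
        X_true = np.array([10.0, -5.0, 600.0])
        def project(P, X):
            x = P @ np.append(X, 1.0)
            return x[:2] / x[2]
        # Recovers X_true from its two projections.
        print(triangulate(P_left, P_right, project(P_left, X_true), project(P_right, X_true)))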
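
The centroid-and-normal sketch referred to above: one possible reading of the "center-of-gravity information" and "normal information" computed from the 3D local regions as a global 3D face feature. Using the smallest principal axis of the point scatter as an approximate surface normal is an assumption of this sketch.

    import numpy as np

    def region_centroid_and_normal(region):
        # Centre of gravity of one 3D local region, plus the direction of
        # smallest scatter (smallest principal axis), used as an approximate
        # surface normal for a roughly planar patch.
        centroid = region.mean(axis=0)
        centered = region - centroid
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        normal = Vt[-1]
        return centroid, normal / np.linalg.norm(normal)

    def global_3d_feature(regions):
        # Concatenate the centroids and normals of all 3D local regions into
        # one global 3D face feature vector.
        parts = []
        for r in regions:
            c, n = region_centroid_and_normal(r)
            parts.extend([c, n])
        return np.concatenate(parts)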
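
The two-stage comparison sketch referred to above: the compact global feature is compared first, and the comparison of the local 3D face feature amounts is carried out only when the global features are close enough. The Euclidean distance metric and the thresholds are illustrative assumptions.

    import numpy as np

    def two_stage_match(global_probe, global_enrolled,
                        local_probe, local_enrolled,
                        global_thresh=0.5, local_thresh=0.5):
        # Cheap global check first; clear non-matches are rejected without
        # running the more detailed local comparison.
        if np.linalg.norm(global_probe - global_enrolled) > global_thresh:
            return False  # global features differ: local comparison is skipped
        return bool(np.linalg.norm(local_probe - local_enrolled) <= local_thresh)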
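
The score-integration sketch referred to above: the global and local comparison results are turned into similarity scores and integrated into a total comparison result. The weighted sum, the weights and the decision threshold are assumptions of this sketch, not values taken from the publication.

    import numpy as np

    def fused_decision(global_probe, global_enrolled,
                       local_probe, local_enrolled,
                       w_global=0.4, w_local=0.6, threshold=0.5):
        # Map each distance to a similarity score in (0, 1] and integrate the
        # two scores by a weighted sum into a total comparison result.
        s_global = 1.0 / (1.0 + np.linalg.norm(global_probe - global_enrolled))
        s_local = 1.0 / (1.0 + np.linalg.norm(local_probe - local_enrolled))
        total = w_global * s_global + w_local * s_local
        return total >= threshold, total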
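
Finally, the feature-combination sketch referred to above: the two-dimensional face feature amount and the three-dimensional face feature amount are combined into a total face feature amount. Per-modality normalisation followed by concatenation is an assumed, not prescribed, way of forming that combination.

    import numpy as np

    def total_face_feature(feat_2d, feat_3d):
        # Normalise each modality and concatenate them into the total face
        # feature amount that the fourth step compares against the enrolled one.
        f2 = np.asarray(feat_2d, dtype=float)
        f3 = np.asarray(feat_3d, dtype=float)
        f2 = f2 / (np.linalg.norm(f2) + 1e-12)
        f3 = f3 / (np.linalg.norm(f3) + 1e-12)
        return np.concatenate([f2, f3])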

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns an authentication system and an authentication method in which a plurality of three-dimensional local regions are selected on the face of a person to be authenticated, and three-dimensional feature amounts of the face in the respective three-dimensional local regions are calculated as three-dimensional face feature amounts. The three-dimensional face feature amounts are compared with comparison face feature amounts prepared in advance so as to authenticate the person. This reduces the decrease in authentication accuracy and improves the authentication speed.
PCT/JP2007/071807 2006-11-10 2007-11-09 Système d'authentification et procédé d'authentification WO2008056777A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008543143A JP4780198B2 (ja) 2006-11-10 2007-11-09 認証システム及び認証方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006-305739 2006-11-10
JP2006305739 2006-11-10

Publications (1)

Publication Number Publication Date
WO2008056777A1 true WO2008056777A1 (fr) 2008-05-15

Family

ID=39364587

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/071807 WO2008056777A1 (fr) 2006-11-10 2007-11-09 Système d'authentification et procédé d'authentification

Country Status (2)

Country Link
JP (1) JP4780198B2 (fr)
WO (1) WO2008056777A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5904168B2 (ja) * 2012-07-20 2016-04-13 Jfeスチール株式会社 撮像画像の特徴点抽出方法および特徴点抽出装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02224185A (ja) * 1989-02-27 1990-09-06 Osaka Gas Co Ltd 個人識別方法及び装置
JPH0944688A (ja) * 1995-05-23 1997-02-14 Matsushita Electric Ind Co Ltd 点群データの曲面変換方法及びそれを用いた形状測定方法
JPH11283033A (ja) * 1998-03-27 1999-10-15 Ricoh System Kaihatsu Kk 画像識別のための特徴量の利用方法およびそのプログラムを格納した記録媒体
JP2002216129A (ja) * 2001-01-22 2002-08-02 Honda Motor Co Ltd 顔領域検出装置およびその方法並びにコンピュータ読み取り可能な記録媒体
JP2004222118A (ja) * 2003-01-17 2004-08-05 Omron Corp 撮影装置
WO2005038700A1 (fr) * 2003-10-09 2005-04-28 University Of York Reconnaissance d'images

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANDO ET AL.: "Kao Hyomen no Hosen Vector o Mochiita Kojin Shogo", THE JOURNAL OF THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN, vol. 31, no. 5, 25 September 2002 (2002-09-25), pages 841 - 847 *
MASUI ET AL.: "3D Keisoku ni yoru Kao Gazo Ninshiki no Kiso Kento", ITEJ TECHNICAL REPORT, vol. 14, no. 36, 29 June 1990 (1990-06-29), pages 7 - 12 *
SHIN ET AL.: "Spin Image o Mochiita 3 Jigen Scan Data kara no Jintai no Tokuchoten Chushutsu", FIT2006 5TH FORUM ON INFORMATION TECHNOLOGY JOHO KAGAKU GIJUTSU LETTERS, vol. 5, 21 August 2006 (2006-08-21), pages 329 - 331 *
TANAKA ET AL.: "3 Jigen Kyokuritsu o Mochiita Kao no Dotei - Kao no 3 Jigen Keijo Chushutsu", THE TRANSACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. J76-D-II, no. 8, 25 August 1993 (1993-08-25), pages 1595 - 1603 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009128191A (ja) * 2007-11-22 2009-06-11 Ihi Corp 物体認識装置およびロボット装置
JP2009128192A (ja) * 2007-11-22 2009-06-11 Ihi Corp 物体認識装置およびロボット装置
JP2010045770A (ja) * 2008-07-16 2010-02-25 Canon Inc 画像処理装置及び画像処理方法
JP2012238121A (ja) * 2011-05-10 2012-12-06 Canon Inc 画像認識装置、画像認識装置の制御方法、およびプログラム
JP2013089123A (ja) * 2011-10-20 2013-05-13 National Institute Of Information & Communication Technology 個人モデルデータの生成方法、生成プログラム、および生成システム
JP2013131209A (ja) * 2011-12-20 2013-07-04 Apple Inc 顔特徴ベクトルの構築
JP2013196046A (ja) * 2012-03-15 2013-09-30 Omron Corp 認証装置、認証装置の制御方法、制御プログラム、および記録媒体
JP2014178969A (ja) * 2013-03-15 2014-09-25 Nec Solution Innovators Ltd 情報処理装置及び判定方法
JP2017016192A (ja) * 2015-06-26 2017-01-19 株式会社東芝 立体物検知装置及び立体物認証装置
CN111122687A (zh) * 2019-11-21 2020-05-08 国政通科技有限公司 一种针对爆炸物的反恐安检方法
CN111122687B (zh) * 2019-11-21 2022-09-20 国政通科技有限公司 一种针对爆炸物的反恐安检方法
JP2023500739A (ja) * 2019-12-20 2023-01-10 コーニンクレッカ フィリップス エヌ ヴェ イメージングにおける照明補償
JP7209132B2 (ja) 2019-12-20 2023-01-19 コーニンクレッカ フィリップス エヌ ヴェ イメージングにおける照明補償
CN117593367A (zh) * 2023-10-24 2024-02-23 北京城建集团有限责任公司 一种电气设备支架定位系统

Also Published As

Publication number Publication date
JPWO2008056777A1 (ja) 2010-02-25
JP4780198B2 (ja) 2011-09-28

Similar Documents

Publication Publication Date Title
JP4780198B2 (ja) 認証システム及び認証方法
CN105956582B (zh) 一种基于三维数据的人脸识别系统
Pan et al. 3D face recognition using mapped depth images
JP4696778B2 (ja) 認証装置、認証方法及びプログラム
JP4653606B2 (ja) 画像認識装置、方法およびプログラム
JP4752433B2 (ja) モデリングシステム、モデリング方法およびプログラム
JP2017016192A (ja) 立体物検知装置及び立体物認証装置
JP4992289B2 (ja) 認証システム、認証方法、及びプログラム
Hsu et al. RGB-D-based face reconstruction and recognition
JP5018029B2 (ja) 認証システム及び認証方法
Li et al. Efficient 3D face recognition handling facial expression and hair occlusion
JP2007058397A (ja) 認証システム、登録システム及び証明用媒体
JP4539494B2 (ja) 認証装置、認証方法及びプログラム
JP4539519B2 (ja) 立体モデル生成装置および立体モデル生成方法
JP4952267B2 (ja) 3次元形状処理装置、3次元形状処理装置の制御方法、および3次元形状処理装置の制御プログラム
KR100828412B1 (ko) 멀티 포인트 신호를 이용한 3차원 얼굴 인식 방법
CN111652018B (zh) 一种人脸注册方法和认证方法
JP4814666B2 (ja) 顔解析システム
JP2005351814A (ja) 検出装置および検出方法
JP4956983B2 (ja) 認証システム、認証方法およびプログラム
Hajati et al. Pose-invariant 2.5 D face recognition using geodesic texture warping
Zhang et al. Face recognition using SIFT features under 3D meshes
JP4525523B2 (ja) 認証システム、認証方法およびプログラム
Kakadiaris et al. Face recognition using 3D images
Zou et al. 3-D face recognition based on warped example faces

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07831538

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2008543143

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07831538

Country of ref document: EP

Kind code of ref document: A1