JP4675492B2 - Personal authentication device using facial images - Google Patents

Personal authentication device using facial images

Info

Publication number
JP4675492B2
Authority
JP
Japan
Prior art keywords
face
image
user
means
database
Prior art date
Legal status
Active
Application number
JP2001083728A
Other languages
Japanese (ja)
Other versions
JP2002288670A (en)
Inventor
亮治 五十嵐
孝裕 大橋
Original Assignee
本田技研工業株式会社
Priority date
Filing date
Publication date
Application filed by 本田技研工業株式会社 (Honda Motor Co., Ltd.)
Priority to JP2001083728A
Publication of JP2002288670A
Application granted
Publication of JP4675492B2
Application status is Active
Anticipated expiration

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates generally to human interfaces, and more specifically to a personal authentication system using image recognition.
[0002]
[Prior art]
A personal authentication system for identifying an individual person can be used in various situations that require security. One of the recognition methods used in such a personal authentication system is a method using face recognition for identifying an individual from a human face image.
[0003]
In face recognition using human face images, earlier methods placed markers on the face or extracted facial features manually; more recently, completely non-contact, fully automatic face recognition techniques have been proposed. Many of these completely non-contact, fully automatic methods extract features by template matching or the like, normalize the face image, and collate the normalized image with a face database. Algorithms that can be used for matching against the face database include normalized cross-correlation, the eigenspace method, and neural networks (NN), and personal authentication is performed based on the results of such an algorithm.
[0004]
Because the prior-art face recognition methods perform collation based on pre-stored face data (face images, face templates, etc.), they have the drawback of being very weak against changes in face position, orientation, inclination, and so on that differ from the stored face data. For example, when the face data stored in the face database used for face recognition is a forward-facing face template, it is difficult to accurately recognize a face image in a different state (for example, a face turned diagonally). For this reason, prior-art face recognition systems generally execute a normalization process on the face image before the recognition process in order to cope with face images in different states.
[0005]
When this normalization processing is executed, the face recognition rate improves because the face in the input image is brought into a predetermined face data format (predetermined face size, face position, face inclination, and image brightness). However, the face recognition of the prior art basically has no function for detecting the orientation of the human face itself in the input image, so for a sideways face image a reduction in the recognition rate is inevitable. Consequently, when conventional face recognition is performed, the target user must take care always to face a predetermined direction with respect to the camera, which increases the burden on the user.
[0006]
As conventional techniques for detecting the orientation of a human face from an input image, there are JP-A Nos. 11-63927 and 2000-113186. Furthermore, the paper “Development of Face / Gaze Measurement System and Application to Motion Recognition” by Matsumoto et al. (5th Robotics Symposia, 2000/3/26-27) discloses a system that detects not only the human face direction but also the gaze direction from an input image, and tracks the face direction and the line-of-sight direction in real time.
[0007]
[Problems to be solved by the invention]
However, each of these conventional methods only detects a human face direction or line-of-sight direction; face recognition that identifies individual persons has not been realized by them.
[0008]
Therefore, an object of the present invention is to provide a personal authentication device that automatically detects the orientation of a target user's face and executes face recognition according to the face orientation to perform personal authentication.
[0009]
[Means for Solving the Problems]
In order to solve the above problem, an authentication apparatus according to the present invention comprises: one or more cameras that photograph a user; illumination means installed to illuminate the user's face; detection means for detecting the orientation of the user's face from the image output of the camera; means for sorting the face image of the image output into a frontal face image or a non-frontal face image according to the detected face orientation; a database of collation face images obtained by photographing the user from the front; and identification means for identifying the user by collating the frontal face image selected by the sorting means against the face images in the database of collation face images.
[0010]
According to the present invention, it is possible to select and collate only the face image facing the front from the face image of the user photographed by the camera. As a result, it is possible to identify individual persons with higher accuracy than in the conventional personal authentication device.
[0011]
According to one aspect of the present invention, the authentication apparatus further comprises normalization means for adjusting the detected face image of the user to the same size and position as the face images stored in the database of collation face images, and the identification means is configured to collate the normalized image with the face images in the database of collation face images to identify the user.
[0012]
According to this aspect, the face image can be normalized according to the position and size of the face, so that the face image taken by the camera is brought to the same size and position as the face images stored in the database of collation face images, and the recognition accuracy of the personal authentication device can be improved.
[0013]
According to one aspect of the present invention, the authentication device further comprises: extraction means for extracting a predetermined image area from the image normalized by the normalization means; a determination database that stores reference data for the predetermined image area; and success/failure determination means for comparing the extracted image area with the reference data and determining whether the normalization by the normalization means has succeeded.
[0014]
According to this embodiment, it can be confirmed whether or not the normalized image is correctly normalized by the normalizing means. Therefore, it is possible to reduce the error of the identification means included in the personal authentication device, and it is possible to provide a personal authentication device that operates stably.
[0015]
According to one aspect of the present invention, the authentication apparatus comprises databases of collation face images by face orientation, obtained by photographing the user from each of a plurality of face orientations, and the identification means identifies the user by collating the user's sideways (non-frontal) face image against the collation face images of the face orientation corresponding to the detected face orientation of the user.
[0016]
According to this aspect, personal authentication can be performed according to the face orientation of the face image taken by the camera. Therefore, it is possible to provide a personal authentication device that performs personal authentication without the user being aware of the face orientation.
[0017]
According to one aspect of the present invention, the face orientation detection means of the personal authentication device comprises: a feature point database that stores predetermined reference images of one or more facial feature portions together with three-dimensional coordinates corresponding to the reference images, the three-dimensional coordinates being associated with the feature portions of the face; extraction means for extracting, from the image output of the camera, an image area corresponding to the reference image of a feature portion using that reference image as a template, and for obtaining the three-dimensional coordinates of the extracted image area; and means for detecting the orientation of the user's face by comparing the three-dimensional coordinates of the obtained image area with the three-dimensional coordinates corresponding to the reference images.
[0018]
According to this aspect, the face orientation detection means can detect the user's face orientation three-dimensionally, so that the personal authentication device can normalize the user's face image and identify the user based on the three-dimensional face orientation.
[0019]
According to one aspect of the present invention, the illumination means of the personal authentication device is configured to irradiate the user's face with infrared light.
[0020]
According to this aspect, by using infrared light as illumination, an image less affected by illumination fluctuations around the user can be obtained than with visible light, and the pupils can be extracted clearly.
[0021]
According to one aspect of the present invention, the identification means of the authentication apparatus includes means for determining whether a face image corresponding to the face image from the image output exists in the database of collation face images, and is configured to determine that the user is a registered user when a corresponding face image exists in the collation database, and to determine that the user is an unregistered user when no corresponding face image exists in the collation database.
[0022]
According to this aspect, it is determined whether or not the user is registered in the database, and different processing can be executed according to the user's status.
[0023]
According to one aspect of the present invention, the authentication device comprises an environment setting database that stores environment setting data associated with each registered user, and environment setting means for setting the environment of a vehicle according to the environment setting data, and is configured so that, when the user is determined to be a registered user, driving of the vehicle is permitted, the environment setting data for that registered user is read from the environment setting database, and the vehicle environment is set according to that data.
[0024]
According to this aspect, when the user is a registered user, environment settings such as the angle of each mirror and the seat position can be adjusted automatically for each user.
[0025]
According to one aspect of the present invention, the authentication apparatus includes authentication means for authenticating an identification means that proves a user's identity, and is configured so that, when the user is determined to be an unregistered user, the user is permitted to use the vehicle in response to the identification means being presented by the user and authenticated by the authentication means, and is prohibited from using the vehicle in response to the identification means not being presented by the user or not being authenticated by the authentication means.
[0026]
According to this aspect, unregistered users can be managed by identification means such as a key, a card key, or a password. The personal authentication device of the present invention can prohibit use of the automobile unless such an identification means is correctly presented by the user.
[0027]
DETAILED DESCRIPTION OF THE INVENTION
Next, embodiments of the present invention will be described with reference to the drawings. FIG. 1 shows one embodiment of an automobile equipped with a personal authentication device according to the present invention. The automobile shown in FIG. 1 includes an image input unit 1, a side mirror 2, a room mirror 3, a control device 5, and an infrared irradiation unit 6. In this embodiment, the face image of the driver in the driver's seat is taken by the image input unit 1, and the driver in the driver's seat is recognized from the taken image. If the recognized driver is a person registered in advance in the car's database, the control device 5 refers to the environment settings (mirror angles, driver's seat position, etc.) corresponding to that person and controls the actuators connected to it to adjust the side mirror 2, the room mirror 3, and so on.
[0028]
FIG. 2 shows a functional block diagram of an automobile provided with a personal authentication system in the present embodiment. This functional block diagram includes an image input unit 1, an infrared irradiation unit 6, a face orientation / gaze detection unit 11, a face feature extraction unit 13, a personal identification unit 15, an environment setting unit 17, a database 19, a side mirror actuator 21, a rearview mirror actuator 23, and a seat actuator 25.
[0029]
Information about individual drivers is registered in the database 19 in advance. The registered driver information includes face data of each driver, environment setting information corresponding to each driver, and the like. The face data is used by the personal identification unit 15 for collation with the input image taken by the image input unit 1. The environment setting information includes setting values such as the angle of each mirror and the seat position for each driver. When the driver in the driver's seat is identified by the personal authentication system according to the present invention, the environment setting unit 17 refers to the registered setting values and controls the side mirror actuator 21, the rearview mirror actuator 23, and the seat actuator 25 for that driver.
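By way of illustration only, the following sketch shows one possible in-memory form for a registered driver's entry in the database 19; the field names and the actuator interface are hypothetical and are not taken from the patent.
```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class DriverRecord:
    """One registered driver in database 19 (hypothetical layout)."""
    name: str
    face_data: np.ndarray                                # stored face data used for collation
    mirror_angles: dict = field(default_factory=dict)    # e.g. {"side_left": 12.0, "rear": 3.5} (degrees)
    seat_position_mm: float = 0.0                        # seat slide position

def apply_environment(record: DriverRecord, actuators) -> None:
    """What the environment setting unit 17 might do once a driver is identified (hypothetical API)."""
    actuators.set_mirror_angles(record.mirror_angles)     # side mirror / rearview mirror actuators 21, 23
    actuators.set_seat_position(record.seat_position_mm)  # seat actuator 25
```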
[0030]
The automobile in this embodiment processes two types of operation modes using the functional blocks shown in FIG. 2. The first operation mode is the face orientation / gaze detection mode executed by the face orientation / gaze detection unit 11. This mode detects the presence of the driver and continuously detects the face direction and the gaze direction. The second operation mode is the personal authentication mode. In this mode, the driver detected in the face orientation / gaze detection mode is identified, and an environment setting appropriate to that driver is executed.
[0031]
The vehicle shown in FIG. 1 normally operates in the face orientation / gaze detection mode and monitors whether a driver is in the driver's seat. While the driver is in the driver's seat, the driver's face orientation and gaze direction are monitored constantly. The automobile can determine the state of the driver based on the monitored face direction and line-of-sight direction, and can execute processing corresponding to the driver's state.
[0032]
The personal authentication mode starts as soon as the driver gets into the car. When the personal authentication mode is activated, the car performs face recognition based on the face orientation obtained in the face orientation / gaze detection mode, and identifies the driver in the driver's seat. When a driver is identified, environment settings (adjustment of the mirror angles and seat position) are executed for that driver. If the driver is not identified by face recognition, the person in the driver's seat is most likely a driver not registered in the database 19, and the sequence for unregistered persons is processed.
[0033]
In both operation modes, an image taken by the image input unit 1 is used. The image input unit 1 and the infrared irradiation unit 6 used in this embodiment will be described below with reference to the drawings.
[0034]
In the present embodiment, the infrared irradiation unit 6 is used in order to reduce image deterioration due to illumination fluctuations in the vehicle. The infrared irradiation unit 6 is therefore installed in front of the driver so as to irradiate the driver's face with near-infrared light.
[0035]
The first reason for using near-infrared light as illumination is to improve robustness against illumination variation. In general, the brightness inside a vehicle such as an automobile fluctuates greatly with environmental changes such as being indoors or outdoors, or daytime versus nighttime. Further, when strong visible light strikes the driver's face from one direction, shaded gradations are produced on the driver's face. Such illumination fluctuations and shading gradations significantly degrade the accuracy of image recognition. Therefore, in this embodiment, the infrared irradiation unit 6 emits near-infrared light from the front of the driver and the image is captured with this near-infrared light, which reduces the shading gradations on the face caused by visible light from the surroundings. As a result, this embodiment has the advantage that it is less affected by changes in illumination than an image obtained using visible light, and the accuracy of face recognition can be improved.
[0036]
The second reason for using near-infrared light is that the driver's pupil can be clearly extracted. Since the position of the driver's pupil is used to detect the line-of-sight direction, it is important to image the pupil clearly. Further, since the pupil can be used as the reference coordinate in the normalization process for driver identification, clear imaging of the pupil leads to improvement of normalization accuracy, that is, improvement of recognition accuracy.
[0037]
FIG. 3 shows an embodiment of the image input unit 1. In this embodiment, a prior-art stereo method is used to view the object to be photographed stereoscopically. The image input unit 1 therefore includes two video cameras (41, 42) installed in front of the driver. The two video cameras are fixed at predetermined positions so that the driver's face can be photographed in stereo. Since near-infrared light is used as illumination as described above, each of these video cameras is limited in wavelength band by a visible light blocking filter 43 that blocks light of wavelengths other than near-infrared. Each video camera is controlled via a camera control unit (45, 46). The camera control units are connected through an external synchronization signal line, and the left and right video cameras are synchronized by this synchronization signal. By using the images thus taken as input images, the personal authentication system of the present invention can process not only two-dimensional but also three-dimensional object recognition.
[0038]
As shown in FIG. 3, the image captured by the image input unit 1 may be processed in various ways via a conventional image processing board 47. For example, the image processing board 47 in FIG. 3 not only functions as an input port for an NTSC video signal, but also includes a memory for storing images and hardware circuits that execute complex image processing algorithms at high speed. Image processing algorithms implemented by the hardware circuits include, for example, an oblique projection mechanism, the Hough transform, binary image matching filters, and affine transforms (image rotation, enlargement, reduction). Using such an image processing board 47 reduces the load on the ECU, because the board takes over part of the image processing, and enables high-speed image processing. The personal authentication system of the present invention realizes the functions of the functional blocks shown in FIG. 2 by having the ECU control the image processing board 47 and process the images photographed by the image input unit 1.
[0039]
Hereinafter, processing in each operation mode will be described in detail.
[0040]
As shown in FIG. 2, the face direction / line-of-sight detection unit 11 includes an initial search unit 27, a face direction detection unit 29, and a line-of-sight detection unit 31, and executes the face direction / line-of-sight detection mode. FIG. 4 and FIG. 5 show the overall flow of the face orientation / gaze detection mode processed by the face orientation / gaze detection unit 11. In this mode, the initial search unit 27, the face direction detection unit 29, and the line-of-sight detection unit 31 operate in association with each other, so that the face direction and the line-of-sight direction can be detected in real time from the left and right input images.
[0041]
FIG. 4 shows a flowchart of the process of the initial search unit 27. The processing of the face direction detection unit 29 is shown in steps 111 to 117 in FIG. 5, and the processing of the line-of-sight detection unit 31 is shown in steps 121 to 125 in FIG. 5. Real-time detection of the face direction and the line-of-sight direction in the face direction / line-of-sight detection mode is realized by the loop from step 111 to step 125 shown in FIG. 5. The initial search unit 27 shown in FIG. 4 is used for initialization immediately after the start of the face orientation / gaze detection mode and for error recovery of the real-time face orientation / gaze direction detection.
[0042]
The process of the initial search unit 27 will be described with reference to FIG. 4. The initial search unit 27 roughly searches the input image for the image area in which a human face is captured. The processing here can be regarded as preprocessing for the face direction detection unit 29: because the initial search unit 27 roughly locates the region containing the face before the face direction detection unit 29 runs, the face direction detection unit 29 can execute its detailed analysis of the face in the input image at high speed.
[0043]
First, in step 101, images of the left and right video cameras are input from the image input unit 1. Next, in step 103, an area in which a human face is photographed is roughly searched from the entire input image. This is performed by two-dimensional template matching using a pre-stored initial search template 51.
[0044]
An example of the initial search template 51 is shown in FIG. 6. The image used for the initial search template 51 is a partial cut-out of a human face facing forward, and this partial image contains the characteristic regions of the human face such as the eyes, nose, and mouth. To increase the processing speed of template matching, the initial search template 51 is reduced in resolution in advance, and it is converted into a differential image in order to reduce the influence of illumination variation. This template is created from a plurality of samples and stored in advance.
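A minimal sketch of the coarse search in step 103 using OpenCV template matching is shown below; the downsampling factor, the Sobel-based approximation of the "differential image", and the acceptance threshold are illustrative assumptions rather than values taken from the patent.
```python
import cv2
import numpy as np

def differential_image(gray: np.ndarray) -> np.ndarray:
    """Approximate a differential (edge) image to reduce sensitivity to illumination."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return cv2.magnitude(gx, gy)

def coarse_face_search(right_gray: np.ndarray, init_template: np.ndarray, scale: float = 0.25):
    """Step 103: roughly locate the face region in the (right) input image."""
    small = cv2.resize(right_gray, None, fx=scale, fy=scale)   # match the template's reduced resolution
    score = cv2.matchTemplate(differential_image(small),
                              differential_image(init_template),
                              cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    if max_val < 0.5:                                          # illustrative threshold: no face found yet
        return None
    x, y = int(max_loc[0] / scale), int(max_loc[1] / scale)
    h, w = (np.array(init_template.shape[:2]) / scale).astype(int)
    return (x, y, w, h)                                        # face-candidate rectangle in full-resolution coordinates
```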
[0045]
Since the search in step 103 is a two-dimensional template matching, either the right video camera image or the left video camera image is used. In the following, an example of template matching using an image from the right video camera will be described.
[0046]
In the case of template matching using the image of the right video camera, template matching with the initial search template 51 is performed on the right image, and the region matched in the right image becomes the face candidate of the input image. Using this face candidate as a template, similar template matching is then performed on the left image, and the three-dimensional position of the face candidate is obtained from the result of this stereo matching (step 105).
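The following sketch illustrates how a three-dimensional position could be recovered from the left/right match in step 105, assuming an ideal rectified parallel-stereo rig; the focal length and baseline are placeholders, and the patent does not commit to this particular camera model.
```python
import numpy as np

def triangulate_point(x_right, y_right, x_left, focal_px=800.0, baseline_m=0.10):
    """Depth from disparity for a rectified stereo pair, with the right camera as reference.

    Image coordinates are assumed to be measured relative to the principal point.
    """
    disparity = float(x_left - x_right)        # horizontal shift of the match between the two views
    if disparity <= 0.0:
        raise ValueError("matched point must have positive disparity")
    z = focal_px * baseline_m / disparity      # depth along the optical axis
    return np.array([x_right * z / focal_px,   # X
                     y_right * z / focal_px,   # Y
                     z])                       # Z
```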
[0047]
In step 107, these processing results are evaluated, and it is determined whether a face area is detected from the input image. When it is determined that a face area has been found from the input image, the process proceeds to the process of the face direction detection unit 29 (step 109). If the face area is not found by the initial search unit 27, a series of processing is repeated using a new input image. The initial search unit 27 repeats this series of processes until a human face is photographed by the image input unit 1.
[0048]
Next, processing of the face direction detection unit 29 and the line-of-sight detection unit 31 will be described with reference to FIG. As described above, in the face direction / line-of-sight detection mode, the loop from step 111 to step 125 shown in FIG. 5 is repeated in order to detect the driver's face direction and line-of-sight direction in real time. The face orientation detection unit 29 can extract predetermined feature points from the input image, and obtain the face three-dimensional position and the face orientation from these feature points. The line-of-sight detection unit 31 detects the line-of-sight direction of the face from the input image based on the feature points and the face direction obtained by the face direction detection unit 29.
[0049]
In step 111, the face orientation detection unit 29 searches the input image for facial feature points by template matching. The templates used for this search are images from the three-dimensional face feature point model 53 stored in advance in the database. FIG. 7 shows an example of the three-dimensional face feature point model 53.
[0050]
The three-dimensional face feature point model 53 in the present embodiment is generated from partial images (81 to 91) obtained by locally cutting out characteristic portions of a forward-facing human face from an image. For example, as shown in FIG. 7, the facial feature point images are generated by locally cutting out the outer corner 81 and inner corner 83 of the left eye, the inner corner 85 and outer corner 87 of the right eye, and the left 89 and right 91 corners of the mouth from a face image prepared in advance. Each of these partial images is associated with three-dimensional coordinates representing the three-dimensional position of the object captured in the image (in this example, the left and right eye corners and both corners of the mouth) and is stored in the database. In this specification, a partial image of a facial feature area having such three-dimensional coordinates is referred to as a face feature point, and a face model generated from a plurality of such face feature points is referred to as a three-dimensional face feature point model 53. These three-dimensional face feature point models 53 are generated from a plurality of samples and stored in advance in a database.
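One possible in-code representation of the three-dimensional face feature point model 53 is sketched below; the class layout and the example coordinates are purely illustrative placeholders.
```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceFeaturePoint:
    """A face feature point: a local template image plus its 3D coordinates in the model frame."""
    name: str
    template: np.ndarray    # small grayscale patch cut from a frontal face image
    model_xyz: np.ndarray   # 3D position of the feature (units and values here are illustrative)

# The six feature points named in the text (coordinates below are made-up placeholders):
face_feature_model_53 = [
    FaceFeaturePoint("left_outer_eye_corner_81",  np.zeros((16, 16), np.uint8), np.array([-45.0,  35.0, 0.0])),
    FaceFeaturePoint("left_inner_eye_corner_83",  np.zeros((16, 16), np.uint8), np.array([-15.0,  35.0, 0.0])),
    FaceFeaturePoint("right_inner_eye_corner_85", np.zeros((16, 16), np.uint8), np.array([ 15.0,  35.0, 0.0])),
    FaceFeaturePoint("right_outer_eye_corner_87", np.zeros((16, 16), np.uint8), np.array([ 45.0,  35.0, 0.0])),
    FaceFeaturePoint("left_mouth_corner_89",      np.zeros((16, 16), np.uint8), np.array([-25.0, -30.0, 5.0])),
    FaceFeaturePoint("right_mouth_corner_91",     np.zeros((16, 16), np.uint8), np.array([ 25.0, -30.0, 5.0])),
]
```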
[0051]
In step 111, the face orientation detection unit 29 searches the input image for image areas corresponding to the three-dimensional face feature point model 53. This is executed by performing two-dimensional matching on the input image using the image of each feature point of the three-dimensional face feature point model 53 as a template. Either the right or the left video camera image may be used for this matching; in this embodiment, the image of the right video camera is used. As a result of the search in step 111, six images are obtained: the inner and outer corners of the left and right eyes and both corners of the mouth as captured in the input image.
[0052]
In step 113, stereo matching is executed on the left image of the input image using the image of each feature point obtained from the search in step 111 as a template. Thereby, the three-dimensional coordinates of each feature point of the input image corresponding to each feature point of the three-dimensional face feature point model 53 are obtained.
[0053]
In this embodiment, stereo matching is executed using the images of the inner and outer eye corners and the mouth corners found in the searched input image as templates. As a result of this stereo matching, the three-dimensional coordinates of the inner and outer corners of the driver's eyes and of both corners of the mouth are obtained. The three-dimensional position of the face of the driver in the driver's seat can therefore be obtained from the three-dimensional coordinates of these feature points of the input image.
[0054]
After the three-dimensional coordinates relating to each feature point of the input image are obtained, it is checked in step 115 whether the feature point has been successfully extracted based on a predetermined criterion. If it is determined in this step 115 that each feature point has not been found from the input image, the process returns to the initial search unit 27 to search for a region where the face is photographed from the new input image.
[0055]
If it is determined in step 115 that each feature point has been found from the input image, a three-dimensional model fitting is executed using the three-dimensional face feature point model 53 in step 117, and the face orientation is detected. Hereinafter, this three-dimensional model fitting will be described.
[0056]
As described above, the pre-stored three-dimensional face feature point model 53 is generated from facial feature points of a face looking straight ahead, whereas the face photographed in the input image does not necessarily face the front. Therefore, the three-dimensional coordinates of each feature point (observation point) obtained from the input image in step 113 differ from the three-dimensional coordinates of the corresponding feature points of the front-facing three-dimensional face feature point model 53 by some angle and displacement. Consequently, when the front-facing three-dimensional face feature point model 53 is rotated and displaced arbitrarily, the angle and displacement at which it matches the feature points of the input image correspond to the face orientation and face position in the input image.
[0057]
When fitting the 3D face feature point model 53 to each feature point of the input image, the fitting error E is expressed by the following equation.
[Expression 1]
E = Σ_{i=1..N} ω_i ‖ R(φ, θ, ψ) x_i + t(x, y, z) − y_i ‖²
where N is the number of feature points, x_i are the three-dimensional coordinates of each feature point of the model, and y_i are the three-dimensional coordinates of the corresponding feature point obtained from the input image. ω_i is a weighting coefficient for each feature point, for which the correlation value from the stereo matching used to obtain the three-dimensional position of that feature point from the input image is used; by using this correlation value, the reliability of each feature point can be taken into account. The rotation matrix is R(φ, θ, ψ) and the position vector is t(x, y, z); these are the variables of this equation.
[0058]
Therefore, if the rotation matrix R and the position vector t that minimize the fitting error E in the above equation are obtained, the face orientation and face position of the input image can be obtained. This calculation is performed by using a least square method or a fitting method using a virtual spring model.
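The following is a minimal sketch of one standard least-squares solution of this fitting problem: a weighted rigid alignment computed via the singular value decomposition (the Kabsch/Umeyama approach). The patent only states that a least-squares or virtual-spring fitting is used, so this particular algorithm is an assumption.
```python
import numpy as np

def fit_face_pose(model_pts, observed_pts, weights):
    """Find R, t minimizing E = sum_i w_i * ||R x_i + t - y_i||^2.

    model_pts, observed_pts: (N, 3) arrays of feature-point coordinates.
    weights: (N,) stereo-matching correlation values used as reliabilities.
    """
    X0 = np.asarray(model_pts, dtype=float)
    Y0 = np.asarray(observed_pts, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu_x = (w[:, None] * X0).sum(axis=0)              # weighted centroids
    mu_y = (w[:, None] * Y0).sum(axis=0)
    X = X0 - mu_x
    Y = Y0 - mu_y
    H = (w[:, None] * X).T @ Y                        # weighted cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_y - R @ mu_x
    return R, t   # R encodes the face orientation, t the face position
```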
[0059]
In steps 121 to 125 shown in FIG. 5, the line-of-sight detection unit 31 detects the line-of-sight direction of the face from the input image based on the feature points and the face direction obtained by the face direction detection unit 29. Here, the eyeball is modeled as a three-dimensional sphere whose center is taken as the center of rotation. This three-dimensional eye model uses as parameters the three-dimensional position of the eyeball relative to the face model, the radius of the eyeball, and the radius of the iris. FIG. 8 shows an outline of the method for obtaining the line-of-sight direction.
[0060]
In step 121, the three-dimensional center position 61 of the eyeball is obtained based on the detection result of the face direction detection unit 29. The offset vector for obtaining the eyeball center position 61 is determined in advance as a three-dimensional vector pointing from the midpoint of the straight line connecting the feature points (55, 57) at the two corners of the eye toward the eyeball center position 61 ((1) in FIG. 8). The center position 61 of the eyeball can therefore be obtained using the offset vector, the eyeball radius, the face position, and the face orientation ((2) in FIG. 8).
[0061]
Since the positions (55, 57) of the two corners of the eye have already been obtained as feature points of the input image, the center position 59 of the pupil on the eyeball can easily be obtained by executing a Hough transform on the region between them (step 123). The line-of-sight direction is then obtained as the three-dimensional vector (line-of-sight vector) connecting the center position 61 of the eyeball and the center position 59 of the pupil (step 125).
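A minimal sketch of steps 121 and 125 is given below. The pre-calibrated offset vector and the 3D pupil position are assumed to be available (the patent obtains the pupil centre by a Hough transform on the eye region); the frame conventions and calibration values are assumptions.
```python
import numpy as np

def gaze_vector(corner_a, corner_b, pupil_center_3d, R, offset_local):
    """Line-of-sight vector for one eye.

    corner_a, corner_b : 3D positions of the two eye-corner feature points (55, 57).
    pupil_center_3d    : 3D position of the pupil centre 59.
    R                  : face rotation matrix from the 3D model fitting.
    offset_local       : pre-calibrated offset from the eye-corner midpoint to the eyeball
                         centre, expressed in the face model frame (assumption).
    """
    corner_a = np.asarray(corner_a, dtype=float)
    corner_b = np.asarray(corner_b, dtype=float)
    midpoint = 0.5 * (corner_a + corner_b)
    eyeball_center = midpoint + R @ np.asarray(offset_local, dtype=float)  # step 121: centre 61
    v = np.asarray(pupil_center_3d, dtype=float) - eyeball_center          # step 125: centre 61 -> pupil 59
    return v / np.linalg.norm(v)

# The final gaze direction is the average of the four vectors (both eyes x both cameras), e.g.:
# gaze = np.mean([g_right_cam_right_eye, g_right_cam_left_eye, g_left_cam_right_eye, g_left_cam_left_eye], axis=0)
```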
[0062]
In this embodiment, since the left and right images are used as the input image, the line-of-sight vector can be obtained for each of the right image and the left image. Furthermore, since a line-of-sight vector can be obtained for both eyes, a total of four line-of-sight vectors are obtained. In this embodiment, the average vector of these four line-of-sight vectors is used as the line-of-sight direction of the input image.
[0063]
In step 125, after the line-of-sight direction of the face in the input image is detected, the process returns to step 111, and a series of processing is repeated using the new input image. As a result of this repetition, it becomes possible to execute continuous tracking of the driver's face direction, face position, and line-of-sight direction in real time.
[0064]
Next, the personal authentication mode will be described. In the personal authentication mode, the driver's face detected in the face orientation / gaze detection mode is compared with the face data registered in the database to identify the driver. When a driver is identified, the personal authentication mode executes the environment settings (mirror angles, seat position, etc.) corresponding to that driver. As shown in the functional block diagram of FIG. 2, the personal authentication mode involves the face feature extraction unit 13, which extracts features from the input image, the personal identification unit 15, which compares and collates drivers based on the processing results of the face feature extraction unit 13, and the environment setting unit 17, which processes the environment settings according to the result of the personal identification unit 15. Whereas the face orientation / line-of-sight detection mode uses both the left and right images, the personal authentication mode is executed using one of the left and right images.
[0065]
The face feature extraction unit 13 includes a front face feature extraction unit 33 and a side face feature extraction unit 35, and executes processing according to the orientation of the face of the driver 10 in the input image. Since the face orientation of the driver 10 captured in the input image from the image input unit 1 has already been detected by the face orientation / line-of-sight detection unit 11, the face feature extraction unit 13 can execute processing appropriate to the face orientation in the input image. The outline of the processing of the face feature extraction unit 13 is described below.
[0066]
One of the processes of the face feature extraction unit 13 is normalization of the image input from the face orientation / gaze detection unit 11. This normalization converts the face in the input image into a predetermined face data format (a predetermined face size, face position, face inclination, and image brightness) in order to improve the accuracy of face recognition. Here, "face inclination" means the rotation of the face within the image plane. More specifically, the face in the image received by the face feature extraction unit 13 may be inclined at an arbitrary face orientation: for example, as shown in FIG. 9a, the face in the input image faces the front but is tilted in the image, or it faces another direction and is tilted, as shown in FIG. 9c. In the normalization process, the "face inclination" in the image is corrected by rotating the input image in the image plane. In the case of FIG. 9a, for example, the inclination of the face in the image can be corrected by rotating the image by the angle α about the intersection of the x axis and the y axis (FIG. 9b). Similarly, in the case of FIG. 9c, the inclination of the face in the image can be corrected by rotating the image by the angle α about the intersection of the x axis and the y axis. In this specification, the face angle corrected by rotation in the image plane is referred to as the "face inclination". Accordingly, in this specification a face that is turned upward as in FIG. 9c is referred to as "sideways" (non-frontal), and a face that is turned downward is likewise referred to as "sideways".
[0067]
Another process of the face feature extraction unit 13 sorts the face image by face orientation. In this sorting process, the input image is classified as "frontal" or "sideways" according to the orientation of the face, based on the face orientation detected by the face orientation / gaze detection unit 11. In this specification, "frontal" means that the face orientation obtained in the face orientation / gaze detection mode lies within a predetermined range; this range is determined according to the recognizable range of the face recognition method used for frontal face recognition. A face orientation outside the predetermined range is therefore determined to be sideways.
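A minimal sketch of this sorting step, using the rotation matrix from the model fitting; the angular limits are illustrative and would in practice be chosen to match the recognizable range of the frontal face recognizer.
```python
import numpy as np

def is_frontal(R, yaw_limit_deg=15.0, pitch_limit_deg=15.0):
    """Classify the detected orientation as 'frontal' or 'sideways' (illustrative thresholds)."""
    # Direction the face is pointing, assuming the model's +z axis points toward the camera.
    forward = R @ np.array([0.0, 0.0, 1.0])
    yaw = np.degrees(np.arctan2(forward[0], forward[2]))
    pitch = np.degrees(np.arctan2(forward[1], forward[2]))
    return abs(yaw) <= yaw_limit_deg and abs(pitch) <= pitch_limit_deg
```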
[0068]
Further, the face feature extraction unit 13 executes feature extraction processing of the input image for the personal identification unit 15. The feature extraction processing executed here varies depending on the face recognition method used in front face recognition or side face recognition. For example, when the eigenspace method is used for the face recognition method, the normalized input image is vectorized by this feature extraction process, and a weight vector representing the feature of the input image is calculated from the vector. The weight vector (feature component) obtained here is sent to the personal identification unit 15 and compared with face data stored in the database 19 to identify individual drivers.
[0069]
FIG. 10 shows the overall flow of the personal authentication mode. For example, when the driver enters the vehicle and the personal authentication mode is started, the detection result in the face direction / gaze detection mode is input to the feature extraction unit 13 in step 301. Here, data input to the feature extraction unit 13 includes an image photographed by the image input unit 1 and data on the driver's face orientation, face position, and line-of-sight direction in the image. As described above, either the left or right image may be used as the input image.
[0070]
In step 303, the feature extraction unit 13 executes a branch process according to the face orientation. Since the personal authentication system according to the present invention can use different face recognition methods for different face orientations, face recognition can be performed with high accuracy.
[0071]
For example, the personal authentication system can perform face recognition by using the eigenspace method for a face facing front and matching a three-dimensional model for a face facing sideways. Therefore, according to the present invention, an optimal face recognition method can be selected according to the face orientation of the input image.
[0072]
Face recognition methods that can be used in the personal authentication system include the eigenspace method, template matching, the Gabor wavelet method, and neural networks (NN). By step 301, the personal authentication system according to the present invention has already obtained information such as the face direction, the line-of-sight direction, and the facial feature points. Accordingly, if face recognition is executed by properly combining this basic information with the normalization processing, individual drivers can be identified with high accuracy using any of these face recognition methods.
[0073]
In the determination of the face orientation in step 303, "frontal" means that the face orientation obtained in the face orientation / gaze detection mode is within a predetermined range. This range is determined according to the recognizable range of the face recognition method used for frontal face recognition; a face orientation outside the predetermined range is therefore determined to be sideways.
[0074]
When it is determined in step 303 that the face orientation of the input image is frontal, the front face feature extraction unit 33 executes the normalization processing and feature extraction processing for the front. When it is determined that the face orientation of the input image is sideways, the side face feature extraction unit 35 executes normalization processing and feature extraction processing according to that face orientation. Since the normalization, feature extraction, and comparison/collation processing branch according to the determination in step 303, the normalization, feature extraction, and comparison/collation for the frontal case are described first, followed by the normalization, feature extraction, and comparison/collation for the sideways case.
[0075]
In step 305, the front face feature extraction unit 33 converts the input image into a predetermined face format (face size, face position, face inclination). That is, the face size, position, and inclination of the input image are normalized so as to be the same as the size, position, and inclination of the face image previously stored in the database. Further, the brightness of the entire image is adjusted to a predetermined value.
[0076]
In the present embodiment, the eigenspace method is used as the face recognition method of the personal identification unit 15, and the normalization in step 305 is executed as a linear transformation on the image plane. Because the feature points obtained in the face orientation / line-of-sight detection mode make it easy to determine the size, position, and inclination of the face in the input image, this normalization can be performed with higher accuracy than in the prior art.
[0077]
In one embodiment of the normalization for the frontal case, an affine transformation is executed, with the midpoint of the straight line connecting the centers of the left and right pupils as the reference, so that the left and right pupils become horizontal and the distance between them becomes constant. Since the personal authentication system according to the present invention uses near-infrared images, the pupils can be extracted clearly and the normalization can be performed with high accuracy.
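A minimal sketch of this pupil-based frontal normalization is shown below; the output image size, the target pupil positions, and the histogram-equalization step for brightness are illustrative choices, not values from the patent.
```python
import cv2
import numpy as np

def normalize_frontal(gray, left_pupil, right_pupil, out_size=(64, 64),
                      target_left=(20.0, 28.0), target_right=(44.0, 28.0)):
    """Rotate, scale, and translate so the pupils land horizontally at fixed positions."""
    lp = np.float32(left_pupil)
    rp = np.float32(right_pupil)
    dx, dy = rp - lp
    angle = float(np.degrees(np.arctan2(dy, dx)))                         # tilt of the line joining the pupils
    scale = float((target_right[0] - target_left[0]) / np.hypot(dx, dy))  # make the pupil distance constant
    mid = (lp + rp) / 2.0
    M = cv2.getRotationMatrix2D((float(mid[0]), float(mid[1])), angle, scale)
    M[0, 2] += (target_left[0] + target_right[0]) / 2.0 - mid[0]          # move the pupil midpoint to its target
    M[1, 2] += (target_left[1] + target_right[1]) / 2.0 - mid[1]
    norm = cv2.warpAffine(gray, M, out_size)
    return cv2.equalizeHist(norm)   # crude brightness adjustment (illustrative)
```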
[0078]
In step 307, it is checked whether the normalization of the input image has been performed correctly. This is executed by cutting out a predetermined area from the normalized image and comparing it with a template for success / failure determination stored in the database in advance.
[0079]
For example, consider the case where this success/failure determination is executed using a right-eye template. In this case, the success/failure determination template is generated by cutting out a region of predetermined size around the right eye from an image in the predetermined face recognition format (face size, face position, face inclination). This success/failure determination template is created from a sufficient number of samples so that it serves as a typical eye template, and the created template is stored in advance in the success/failure determination database.
[0080]
If the input image has been correctly normalized in step 305, the eyes in the normalized image should always appear at a predetermined position and with a predetermined size. Therefore, if the region corresponding to the success/failure determination template is cut out from the normalized image and collated with the success/failure determination template, the success or failure of the normalization can be determined: when normalization has succeeded, the correlation value between the cut-out image and the success/failure determination template is high, and when it has failed, the correlation value is low. The success or failure of normalization can thus be judged by setting a threshold on the correlation value in advance as the decision criterion. Only the right eye has been described here in order to explain the success/failure determination briefly, but by preparing success/failure determination templates for a plurality of characteristic areas and using them together, a more accurate success/failure judgment can be executed.
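A minimal sketch of this check is shown below; the crop rectangle and the acceptance threshold are illustrative assumptions.
```python
import cv2
import numpy as np

def normalization_succeeded(normalized_img, right_eye_template, eye_box=(38, 22, 14, 10), threshold=0.7):
    """Step 307: correlate the expected right-eye region with the stored success/failure template."""
    x, y, w, h = eye_box                                      # where the right eye must lie after normalization
    patch = normalized_img[y:y + h, x:x + w].astype(np.float32)
    tmpl = right_eye_template.astype(np.float32)              # template cut to the same w x h size
    score = cv2.matchTemplate(patch, tmpl, cv2.TM_CCORR_NORMED)[0, 0]
    return float(score) >= threshold                          # high correlation -> normalization succeeded
```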
[0081]
If it is determined in step 307 that the normalization of the frontal face image has succeeded, the front face feature extraction unit 33 executes, in step 308, the feature extraction processing used for comparison and collation by the personal identification unit 15. If it is determined in step 307 that the normalization has failed, the process returns to step 301 and the series of processing is repeated with a new input image.
[0082]
As described above, since the face recognition for the frontal case in this embodiment uses the eigenspace method, the feature extraction process in step 308 and the comparison and collation process in step 309 are also performed in a manner corresponding to the eigenspace method. The outline of the processing in steps 308 and 309 using the eigenspace method is described below.
[0083]
The face images of the front faces of individual drivers to be recognized are stored in advance in a database. In the eigenspace method, eigenvectors are generated from face images registered in the database. A face class representing each driver's face in the eigenspace is generated by projecting each driver's face stored in the database 19 onto the eigenspace. In front face recognition using the eigenspace method, the face class and the normalized image features are compared and collated, and the individual driver is identified from the result.
[0084]
Accordingly, in step 308, the front face feature extraction unit 33 projects the normalized image onto the eigenface space and extracts the features of the normalized image as a weight vector. More specifically, the normalized input image is first converted into an image vector: if the normalized input image consists of n × n pixels, an n²-dimensional vector (1 row, n² columns) whose components are the individual pixel values is generated from the image. Next, this image vector is projected onto the eigenface space to generate a weight vector. For example, when M eigenvectors are stored in the database 19, the image vector is projected onto the M-dimensional eigenspace, and a weight vector having M components is generated. If M is sufficiently small compared with the dimension n² of the image vector, the weight vector is a vector of much lower dimension than the original image vector while still carrying the facial features captured in the original image. The weight vector generated in this way is sent to the personal identification unit 15.
[0085]
In step 309, the personal identification unit 15 compares the weight vector obtained from the normalized image with the face class of each driver. In one embodiment of this comparison and collation, the distance between the weight vector and each driver's face class in the eigenface space is calculated, and the driver in the image is determined based on the resulting distances.
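A minimal sketch of steps 308, 309, and the decision in step 317 under the eigenspace method is shown below; the way face classes are stored and the rejection threshold are illustrative assumptions.
```python
import numpy as np

def project_to_eigenspace(normalized_img, mean_face, eigenfaces):
    """Step 308: flatten the n x n normalized image and project it onto the M stored eigenfaces."""
    x = normalized_img.astype(np.float64).ravel()      # n*n-dimensional image vector
    return eigenfaces @ (x - mean_face)                # M-dimensional weight vector

def identify_driver(weight, face_classes, reject_distance=2500.0):
    """Steps 309 / 317: nearest face class in the eigenspace, or None for an unregistered driver."""
    best_name, best_dist = None, np.inf
    for name, class_weight in face_classes.items():    # one representative weight vector per registered driver
        d = np.linalg.norm(weight - class_weight)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= reject_distance else None
```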
[0086]
Next, in step 317, the personal identification unit 15 determines whether or not the face class corresponding to the normalized image has been found by comparison and collation (step 309). If the face in the normalized image is the face of a person registered in the database 19, a face class corresponding to that person is found by the comparison and collation in step 309. However, when the face in the normalized image is a face of a person who is not registered in the database 19, a face class corresponding to that person cannot be found. Therefore, if a corresponding face class is not found by comparison and collation, the process proceeds to step 321 to execute the unregistered person sequence. If the corresponding face class is found by comparison, the driver in the driver's seat is determined and the registrant sequence in step 319 is processed. The registrant sequence and the non-registrant sequence will be described in detail later.
[0087]
Face recognition by the eigenspace method in this way reduces the amount of computation to the number of transformation parameters (weight components) used in the projection, so that high-speed processing can be realized compared with the normalized correlation method, which requires computation proportional to the number of pixels of the input image.
[0088]
Next, the normalization and face recognition for the sideways case will be described. As shown in FIG. 10, the flow of the normalization and face recognition processing for the sideways case is basically the same as the processing for the frontal case (steps 305 to 309). However, because the processing branches according to the face orientation in step 303, the normalization executed for a sideways face and the face recognition itself can use methods different from those used for the frontal face.
[0089]
Face recognition for the sideways case can be broadly divided into two embodiments, and the normalization differs accordingly. In the first embodiment, databases are created in advance from images of each driver taken at each face orientation, and face recognition is executed using the database for the face orientation that matches the detected face orientation. In this case, the normalization is executed according to the face orientation detected from the input image.
[0090]
For example, as shown in FIG. 9d, the reference point for positioning the face in the normalization process (that is, the intersection of the x axis and the y axis in the figure) varies with the face orientation. Therefore, when normalizing a sideways input image, the reference point for positioning the face must be changed according to the face orientation in the image. For this reason, the side face feature extraction unit 35 executes the normalization according to the face orientation detected by the face orientation / gaze detection unit 11. In this case, the normalization success/failure determination is also executed using different templates depending on the face orientation. Apart from being carried out separately for each face orientation, this normalization and success/failure determination are executed by the same processing as for the frontal face image. In the comparison and collation for the sideways case, the collation is executed against the per-orientation databases of face images photographed for each face orientation.
[0091]
In the other embodiment of face recognition for the sideways case, a three-dimensional face model is generated from the input image and the normalization is performed using this three-dimensional face model. In this case, predetermined feature points are extracted from the input image, and a three-dimensional face model of the input image is generated from those feature points. The three-dimensional face model generated here is a face model in the orientation the face had when the input image was taken. Therefore, if this face model is rotated according to the face orientation obtained by the face orientation / gaze detection unit 11, a face model of the input image facing the front can be obtained. In this embodiment, the generation of the three-dimensional face model from the input image and the rotation of that model correspond to the normalization in step 311. Since the rotated, normalized result can be processed in the same way as a face image facing the front, the success/failure determination of the normalization is processed in the same manner as for the frontal face, and the face recognition itself can be realized using a prior-art frontal face recognition method (for example, the eigenspace method).
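A minimal sketch of the rotation step of this second embodiment is shown below: the observed three-dimensional feature points are mapped back to a frontal pose using the inverse of the rotation found by the face orientation detection. How the frontalized model is subsequently rendered or compared is left open here, as it is in the text.
```python
import numpy as np

def frontalize_face_model(observed_pts, R, t):
    """Undo the detected face rotation/translation so the 3D model faces the camera frontally.

    observed_pts : (N, 3) feature-point coordinates measured by stereo matching.
    R, t         : face orientation and position from the 3D model fitting (step 117),
                   where R maps model (frontal) coordinates to observed coordinates.
    """
    P = np.asarray(observed_pts, dtype=float)
    return (P - np.asarray(t, dtype=float)) @ R    # row-wise equivalent of R.T @ (p - t)
```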
[0092]
In step 317, after the comparison and collation for the sideways case has been performed, the personal identification unit 15 determines whether a corresponding face class has been found by the comparison and collation. Step 319 or step 321 is then processed according to the determination in step 317.
[0093]
Next, an example of the registrant sequence in step 319 will be described. FIG. 11 shows a flowchart of the registrant sequence. As described above, when the driver in the driver's seat has been determined in step 317, the personal identification unit 15 gives that driver permission to drive the vehicle in step 323; this permission allows the car to be operated by the driver. In step 325, the personal identification unit 15 reads the environment setting data corresponding to the determined driver from the database 19 and sends the data to the environment setting unit 17. In step 327, the environment setting unit 17 controls each actuator according to the environment setting data to set the mirror angles and the seat position.
[0094]
FIG. 12 shows a flowchart of an embodiment of the unregistered-person sequence. As described above, if it is found in step 317 that the face of the driver in the driver's seat is that of an unregistered person not stored in the database, the personal identification unit 15 requires that the driver's use of the car be authorized by the administrator. Here, the administrator is a person responsible for the use of the car, such as the owner or manager of the car, and possesses some identification means for proving his or her identity to the car. Such identification means is a means capable of ensuring security, for example a key, a card key, or a password. When an unregistered driver wishes to drive a vehicle according to the present invention, the vehicle requires proof of the administrator's permission to use it; in that case, the unregistered driver or the administrator must prove the use permission to the vehicle by the identification means. If the identification means is not presented to the vehicle, the personal identification unit 15 determines in step 335 that the unregistered driver has not been permitted to use the vehicle by the administrator, and prohibits that driver from operating the vehicle. If the identification means is presented to the vehicle, the personal identification unit 15 determines whether the unregistered driver is a new driver who wishes to be registered. If the person in the driver's seat is a new driver who wishes to register, new registration of the driver is executed by some appropriate means in step 341. If the person in the driver's seat does not wish to register, he or she is a driver who has temporarily received permission from the administrator to use the car, and in step 339 operation of the car is approved with restrictions appropriate to a temporary driver.
[0095]
In the above embodiment, an automobile provided with a personal authentication device according to the present invention has been described as an example. However, the present invention is not limited to personal authentication in a car and can be applied to other fields.
[0096]
FIG. 13 shows an example in which the personal authentication apparatus of the present invention is applied to a computer. In FIG. 13, the image input unit 1 is connected to a computer 401 via an image processing board 47 to constitute a personal authentication device. The computer 401 is a general-purpose computer to which a user interface such as a monitor 402, a keyboard 403, and a mouse 404 is connected, running an operating system such as Linux. In this example, when a user attempts to use the computer 401, the image input unit 1 connected to the computer 401 automatically captures the user's face images and executes personal authentication from those images. Only when the personal authentication succeeds is use of the computer 401 permitted, and the user can then operate the computer 401. Such personal authentication can be realized by executing on the computer 401 the functional blocks described above, such as the face orientation / gaze detection unit 11, the feature extraction unit 13, and the personal identification unit 15.
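The following minimal sketch illustrates gating use of the computer 401 on the result of face authentication, as in FIG. 13. The capture and recognition helpers are placeholder callables introduced for illustration; they are not an API defined by the patent or by any specific library.

```python
# Sketch: permit use of the computer only when face authentication succeeds.
from typing import Callable

def try_login(capture_image: Callable[[], object],
              authenticate_face: Callable[[object], bool]) -> bool:
    image = capture_image()          # image input unit 1 captures the user's face
    if authenticate_face(image):     # orientation detection, feature extraction, matching
        print("Authentication succeeded: use of the computer is permitted.")
        return True
    print("Authentication failed: use of the computer is denied.")
    return False

# Example usage with dummy stand-ins for the camera and the recognizer.
if __name__ == "__main__":
    try_login(capture_image=lambda: "dummy face image",
              authenticate_face=lambda img: True)
```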
[0097]
Thus, the present invention is not limited to the above-described embodiments, and can be applied to many fields.
[0098]
[Effects of the Invention]
According to the present invention, since the personal authentication device includes a function for detecting the orientation of the user's face, face recognition appropriate to that orientation can be executed, and the accuracy of personal authentication can be improved compared with prior-art face recognition.
[Brief description of the drawings]
FIG. 1 shows an embodiment of an automobile equipped with a personal authentication device according to the present invention.
FIG. 2 is a functional block diagram of an automobile provided with a personal authentication system in the present embodiment.
FIG. 3 is a diagram illustrating an example of an image input unit.
FIG. 4 is a flowchart of processing of an initial search unit.
FIG. 5 is a flowchart of processing of a face direction detection unit and a line-of-sight detection unit.
FIG. 6 is a diagram showing an example of an initial search template.
FIG. 7 is a diagram showing an example of a three-dimensional face feature point model.
FIG. 8 is a schematic view of a method for obtaining a gaze direction.
FIG. 9 is a diagram illustrating an example in which the inclination of the face is corrected by normalization.
FIG. 10 is an overall flowchart of the personal authentication mode.
FIG. 11 is a flowchart of processing of a registrant sequence.
FIG. 12 is a flowchart of processing of an unregistered person sequence.
FIG. 13 is a diagram showing a computer including a personal authentication system in the present embodiment.
[Explanation of symbols]
1 Image input section
6 Infrared irradiation part
10 users
11 Face orientation / line-of-sight detection unit
13 Feature extraction unit
15 Personal identification unit
17 Environment setting unit
19 Database

Claims (7)

  1. An authentication device for authenticating a user, comprising:
    A plurality of cameras for photographing the user;
    Illumination means installed to illuminate the user's face;
    Detecting means for detecting the orientation of the user's face from the image output of the camera;
    Means for sorting the face image of the image output into a front image and a face image other than the front according to the detected face orientation;
    A database of collation face images taken from the front and other than the front;
    Normalization means for adjusting each of the front face images and the face images other than the front to a face image having the same size and position as the corresponding collation face image, taken from the front or other than the front, stored in the database of collation face images;
    Identification means for extracting a weight vector from each of the normalized front and non-front face images, comparing each weight vector with the face classes of the collation face images, taken from the front or other than the front, stored in the database of collation face images, and thereby identifying the user.
  2. The authentication device according to claim 1, further comprising:
    Extracting means for extracting a predetermined image region from each of the images normalized by the normalization means;
    A success / failure determination database for storing reference data of the predetermined image regions; and
    Success determination means for determining success or failure of the normalization by the normalization means by comparing each extracted image region against the corresponding reference data.
  3. The authentication apparatus according to claim 1, wherein the face direction detecting means includes:
    A feature point database storing predetermined reference images of one or more facial feature parts and three-dimensional coordinates corresponding to the reference images, the three-dimensional coordinates being associated with the respective facial feature parts;
    Extracting means for extracting, from the image output of the camera, an image region corresponding to the reference image of a feature part, using the reference image of that feature part as a template;
    Means for determining three-dimensional coordinates of the extracted image region based on the extracted image region; and
    Means for detecting the orientation of the user's face from the determined three-dimensional coordinates of the image region and the three-dimensional coordinates corresponding to the reference images.
  4. The authentication apparatus according to claim 1, wherein the illumination means irradiates the user's face with infrared light.
  5. The authentication apparatus according to claim 1, wherein the identification means includes means for determining whether a face image corresponding to the face image from the image output exists in the database of collation face images;
    The user is determined to be a registered user in response to a face image corresponding to the face image from the image output existing in the collation database; and
    The user is determined to be an unregistered user in response to no face image corresponding to the face image from the image output existing in the collation database.
  6. The authentication apparatus according to claim 5, further comprising:
    An environment setting database that stores environment setting data associated with each of the registered users; and
    Environment setting means for setting the environment of the automobile according to the environment setting data,
    wherein, when the user is determined to be a registered user, driving of the automobile is permitted, the environment setting data relating to that registered user is read from the environment setting database, and the environment of the automobile is set according to that environment setting data.
  7. The authentication apparatus according to claim 5, further comprising authentication means for authenticating an identification means that proves a user's identity,
    wherein, when the user is determined to be an unregistered user, the user is requested to present the identification means,
    The user is granted permission to use the vehicle in response to the identification means being authenticated by the authentication means, and
    The user is prohibited from using the vehicle in response to the identification means not being presented by the user or not being authenticated by the authentication means.
JP2001083728A 2001-03-22 2001-03-22 Personal authentication device using facial images Active JP4675492B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2001083728A JP4675492B2 (en) 2001-03-22 2001-03-22 Personal authentication device using facial images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2001083728A JP4675492B2 (en) 2001-03-22 2001-03-22 Personal authentication device using facial images

Publications (2)

Publication Number Publication Date
JP2002288670A JP2002288670A (en) 2002-10-04
JP4675492B2 true JP4675492B2 (en) 2011-04-20

Family

ID=18939509

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2001083728A Active JP4675492B2 (en) 2001-03-22 2001-03-22 Personal authentication device using facial images

Country Status (1)

Country Link
JP (1) JP4675492B2 (en)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4238542B2 (en) * 2002-08-30 2009-03-18 日本電気株式会社 Face orientation estimation apparatus, face orientation estimation method, and face orientation estimation program
JP4059224B2 (en) 2004-04-13 2008-03-12 株式会社デンソー Driver appearance recognition system
WO2005114576A1 (en) * 2004-05-21 2005-12-01 Asahi Kasei Kabushiki Kaisha Operation content judgment device
JP4217664B2 (en) * 2004-06-28 2009-02-04 キヤノン株式会社 Image processing method and image processing apparatus
DE102004044771B4 (en) * 2004-09-16 2006-08-24 Bayerische Motoren Werke Ag Method for image-based driver identification in a motor vehicle
JP4459788B2 (en) 2004-11-16 2010-04-28 パナソニック株式会社 Facial feature matching device, facial feature matching method, and program
JP4501708B2 (en) * 2005-02-02 2010-07-14 トヨタ自動車株式会社 Driver's face orientation determination device
JP4561400B2 (en) * 2005-02-24 2010-10-13 オムロン株式会社 Monitoring device
JP4267584B2 (en) 2005-02-28 2009-05-27 株式会社東芝 Device control apparatus and method
JP2006344100A (en) * 2005-06-10 2006-12-21 Konica Minolta Holdings Inc Authentication device and image output device
JP4692174B2 (en) * 2005-09-14 2011-06-01 株式会社日立製作所 Personal authentication device and door opening / closing system
JP4465619B2 (en) 2005-10-31 2010-05-19 ソニー株式会社 Registration device, verification device, image correction method, and program
JP4758218B2 (en) * 2005-12-16 2011-08-24 株式会社山武 3D pattern position / posture detection device
JP5137833B2 (en) * 2006-07-14 2013-02-06 パナソニック株式会社 Gaze direction detection device and gaze direction detection method
JP2008089402A (en) * 2006-10-02 2008-04-17 Konica Minolta Holdings Inc Information processing system, program, and information processing method
JP2008087140A (en) * 2006-10-05 2008-04-17 Toyota Motor Corp Speech recognition robot and control method of speech recognition robot
JP5001028B2 (en) 2007-03-02 2012-08-15 株式会社デンソー Driving environment setting system, in-vehicle device, and program for in-vehicle device
JP4910827B2 (en) * 2007-03-28 2012-04-04 カシオ計算機株式会社 Image processing apparatus and program
US8174555B2 (en) 2007-05-30 2012-05-08 Eastman Kodak Company Portable video communication system
JP4434236B2 (en) 2007-06-11 2010-03-17 ソニー株式会社 Image processing apparatus, image display apparatus, imaging apparatus, processing method and program thereof
EP3462727A1 (en) 2007-06-14 2019-04-03 FUJIFILM Corporation Photographing apparatus
JP4569670B2 (en) * 2008-06-11 2010-10-27 ソニー株式会社 Image processing apparatus, image processing method, and program
JP2010033305A (en) * 2008-07-29 2010-02-12 Hitachi Ltd Image information processing method and device
JP5098981B2 (en) * 2008-12-08 2012-12-12 トヨタ自動車株式会社 Face detection device
JP5448952B2 (en) * 2010-03-16 2014-03-19 グローリー株式会社 Same person determination device, same person determination method, and same person determination program
JP5538031B2 (en) * 2010-03-31 2014-07-02 アズビル株式会社 Verification device
KR101172227B1 (en) 2010-11-18 2012-08-07 현대자동차주식회사 System of management for entrance and exit using driver's face certification within vehicle and thereof method
KR101805716B1 (en) * 2011-09-21 2017-12-07 현대모비스 주식회사 Apparatus and Method for certifying Driver by recognition face
JP6212930B2 (en) * 2013-05-01 2017-10-18 株式会社ニコン Information processing apparatus, system, method, and program
TWI532620B (en) * 2013-06-24 2016-05-11 Utechzone Co Ltd Vehicle occupancy number monitor and vehicle occupancy monitoring method and computer readable record media
EP3103060A1 (en) 2014-02-04 2016-12-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. 2d image analyzer
JP6354359B2 (en) * 2014-06-11 2018-07-11 株式会社デンソー Vehicle setting device
DE102014211823A1 (en) * 2014-06-20 2015-12-24 Robert Bosch Gmbh Procedure for personal identification

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000222576A (en) * 1999-01-29 2000-08-11 Nec Corp Person identification method, device therefor, recording medium recording person identification program, and robot system
JP2000259834A (en) * 1999-03-11 2000-09-22 Toshiba Corp Registering device and method for person recognizer
JP2000311220A (en) * 1999-04-28 2000-11-07 Hitachi Ltd Unit operation right managing system, unit operation right managing terminal, ic chip and ic chip case
JP2001051338A (en) * 1999-08-12 2001-02-23 Canon Inc Camera

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2872776B2 (en) * 1990-08-20 1999-03-24 日本電信電話株式会社 Face image matching device
JPH05300601A (en) * 1992-04-20 1993-11-12 Canon Inc Vehicle safety confirmaton instruction apparatus
JPH07302327A (en) * 1993-08-11 1995-11-14 Nippon Telegr & Teleph Corp <Ntt> Method and device for detecting image of object
JP3738456B2 (en) * 1994-11-14 2006-01-25 マツダ株式会社 Article position detection method and apparatus
JPH0944676A (en) * 1995-08-01 1997-02-14 Toyota Motor Corp Face detector
US6100811A (en) * 1997-12-22 2000-08-08 Trw Inc. Fingerprint actuation of customized vehicle features

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000222576A (en) * 1999-01-29 2000-08-11 Nec Corp Person identification method, device therefor, recording medium recording person identification program, and robot system
JP2000259834A (en) * 1999-03-11 2000-09-22 Toshiba Corp Registering device and method for person recognizer
JP2000311220A (en) * 1999-04-28 2000-11-07 Hitachi Ltd Unit operation right managing system, unit operation right managing terminal, ic chip and ic chip case
JP2001051338A (en) * 1999-08-12 2001-02-23 Canon Inc Camera

Also Published As

Publication number Publication date
JP2002288670A (en) 2002-10-04

Similar Documents

Publication Publication Date Title
Chang et al. An evaluation of multimodal 2D+3D face biometrics
USRE42205E1 (en) Method and system for real-time facial image enhancement
US7127086B2 (en) Image processing apparatus and method
KR101356358B1 (en) Computer-implemented method and apparatus for biometric authentication based on images of an eye
US6707933B1 (en) Face direction estimation using a single gray-level image
US8811726B2 (en) Method and system for localizing parts of an object in an image for computer vision applications
US6445810B2 (en) Method and apparatus for personnel detection and tracking
DE602004013181T2 (en) System and method for detecting an occupant and head maintenance through the use of stereo detectors
US7801335B2 (en) Apparatus and methods for detecting the presence of a human eye
JP4546956B2 (en) Target orientation estimation using depth detection
KR101242390B1 (en) Method, apparatus and computer-readable recording medium for identifying user
US7027619B2 (en) Near-infrared method and system for use in face detection
US6813010B2 (en) Personal identification system
US6181805B1 (en) Object image detecting method and system
Kawaguchi et al. Iris detection using intensity and edge information
EP2842075B1 (en) Three-dimensional face recognition for mobile devices
JP4610614B2 (en) Multi-biometric system and method based on a single image
US20040190775A1 (en) Viewpoint-invariant detection and identification of a three-dimensional object from two-dimensional imagery
Niinuma et al. Soft biometric traits for continuous user authentication
US20080212849A1 (en) Method and Apparatus For Facial Image Acquisition and Recognition
Huang et al. Face detection and precise eyes location
JP4786483B2 (en) Biometric guidance control method for biometric authentication device and biometric authentication device
US5012522A (en) Autonomous face recognition machine
D'Orazio et al. An algorithm for real time eye detection in face images
US20040114782A1 (en) Iris image data processing for use with iris recognition system

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20071128

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20101029

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20101109

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20101227

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110125

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110126

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140204

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Ref document number: 4675492

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150
