WO2019216091A1 - Face authentication device, face authentication method, and face authentication system - Google Patents
- Publication number: WO2019216091A1 (application PCT/JP2019/015539)
- Authority: WIPO (PCT)
- Classification: G06T7/00 Image analysis (G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
Definitions
- the present disclosure relates to a face authentication device, a face authentication method, and a face authentication system that authenticate a target person using an image of a face portion in a captured image captured by a camera.
- Patent Literature 1 discloses an information processing apparatus that receives a face image sequence for a predetermined time and determines whether or not a face included in the face image sequence is an impersonation, based on temporal changes in predetermined color information extracted from the face image sequence.
- However, with the technique of Patent Literature 1, when an improper person uses a portable display terminal such as a tablet terminal to display an image showing the face of an authorized registrant and impersonates that registrant, the fraud cannot be detected even if face authentication using the displayed image is performed, and there is a possibility that the authentication result is approved.
- An object of the present invention is to provide a face authentication device, a face authentication method, and a face authentication system by which such impersonation is suppressed.
- the present disclosure provides a face authentication device including an image input unit that inputs a first captured image and a second captured image in which the same person is captured by each of a plurality of cameras, and a face detection unit that detects a face area in at least one of the first captured image and the second captured image,
- a plane detection unit that determines whether or not the face image of the face area is a plane based on face information corresponding to the face area and the first and second captured images,
- and a validity determination unit that determines the validity of the authentication result of the person's face information based on the determination result of whether or not the face image of the face area is a plane.
- the present disclosure also provides a face authentication method in a face authentication apparatus, including a step of inputting a first captured image and a second captured image in which the same person is captured by each of a plurality of cameras, a step of detecting a face area in at least one of the first captured image and the second captured image, a step of determining, based on face information corresponding to the face area and the first and second captured images, whether or not the face image of the face area is a plane, and a step of determining the validity of the authentication result of the face information of the person based on the determination result of whether or not the face image of the face area is a plane.
- the present disclosure further provides a face authentication system in which a plurality of cameras and a face authentication device are connected so as to be able to communicate with each other. The plurality of cameras each output a first captured image and a second captured image capturing the same person
- to the face authentication device. The face authentication device receives the first captured image and the second captured image, detects a face region in at least one of the first captured image and the second captured image, and determines, based on the face information corresponding to the face area and the first and second captured images, whether or not the face image of the face area is a plane,
- and determines the validity of the authentication result of the person's face information based on the determination result of whether or not the face image of the face area is a plane.
- FIG. 1 is a diagram illustrating a system configuration example of the face authentication system according to the first embodiment.
- FIG. 2 is a diagram illustrating an arrangement example of four imaging devices.
- FIG. 3 is a block diagram illustrating a functional configuration example realized by the cooperation of the processor and the memory built in the face authentication apparatus according to the first embodiment.
- FIG. 4 is a block diagram illustrating an example of a functional configuration and an outline of operation of the flat panel detection unit.
- FIG. 5 is an explanatory diagram showing an example of an outline of operation for performing plane determination by capturing a face image projected on a flat panel.
- FIG. 6 is an explanatory diagram showing an example of an outline of operation for performing plane determination by imaging an actual human face.
- FIG. 7 is a flowchart illustrating an example of an operation procedure of face authentication according to the first embodiment in time series.
- FIG. 8 is a flowchart illustrating an example of the operation procedure of the face image plane determination according to step S6 in time series.
- FIG. 9 is a flowchart illustrating an example of another face authentication operation procedure according to the first embodiment in time series.
- FIG. 10 is a diagram illustrating an arrangement example of the other three imaging devices according to the first embodiment.
- FIG. 11 is a block diagram illustrating a functional configuration example realized by cooperation of a processor and a memory built in the face authentication apparatus according to the second embodiment.
- FIG. 12 is an explanatory diagram illustrating an operation outline example of the frame detection processing by the frame detection unit.
- FIG. 13 is an explanatory diagram illustrating an operation outline example of the arrangement determination processing by the arrangement determination unit.
- FIG. 14 is a flowchart illustrating an example of a face authentication operation procedure according to the second embodiment in time series.
- FIG. 1 is a diagram illustrating a system configuration example of a face authentication system 100 according to the first embodiment.
- the face authentication system 100 performs face authentication of a person to be authenticated by collating face image data obtained by capturing an authentication target person with an imaging device and face image data of a regular person registered in advance.
- the face authentication system 100 outputs the face authentication result of the person to be authenticated to the control target device 40.
- the control target device 40 is, for example, a usage management device used by a service provider such as car sharing or a security area entrance / exit management device, but is not limited thereto.
- the use management device permits the use of a car share target vehicle when the user is a registered regular user as a face authentication result.
- the usage management device refuses to use the vehicle to be shared.
- the usage management device sends an authentication result indicating whether or not the vehicle can be used to an electronic control unit (ECU: Electronic Control Unit) mounted on the vehicle, for example.
- the electronic control unit sets the start (for example, ignition on) of the own vehicle to be valid or invalid based on the authentication result sent from the usage management device.
- the entrance/exit management device permits a person who is authorized as a result of face authentication to pass into the security area.
- Otherwise, the entrance/exit management device blocks the person from entering the security area, or notifies the administrator or operator of the face authentication system 100 that an intruder has been detected.
- the face authentication system 100 includes a face authentication device 20, a plurality of (for example, two) imaging devices 11A and 11B, a display device 12, and an ID reading device 13. When there is no need to distinguish between the two imaging devices 11A and 11B, they are simply referred to as the imaging device 11.
- the two imaging devices 11A and 11B are arranged at positions where, for example, the face of a person walking in a passage can be simultaneously imaged from different directions.
- the imaging device 11 includes an imaging lens, an imaging device, an image signal processing circuit, a communication interface, and the like.
- the imaging device 11 captures an imaging region including the authentication target person 30 as a subject, and outputs the captured image to the face authentication device 20. Note that the number of imaging devices is not limited to two, and may be any number.
- the imaging device 11 may be a monitoring camera for monitoring a predetermined area, for example.
- FIG. 2 is a diagram illustrating an arrangement example of the four imaging devices 11e, 11f, 11g, and 11h.
- the four imaging devices 11e, 11f, 11g, and 11h are arranged in a direction perpendicular to the surface of the passage through which the person 30 to be authenticated passes (that is, a direction perpendicular to the horizontal plane).
- the four imaging devices 11e, 11f, 11g, and 11h image the presence range SA of the face of the person 30 to be authenticated so that the angles of view overlap in the passage width wd.
- the optical axes of the four imaging devices 11e, 11f, 11g, and 11h are all parallel, for example.
- the face of a person such as a short child is imaged by two imaging devices 11e and 11f installed at a low position.
- the two imaging devices 11e and 11f are used as the imaging devices 11A and 11B in FIG.
- the face of a person such as a tall adult is imaged by two imaging devices 11g and 11h installed at a high position.
- the two imaging devices 11g and 11h are used as the imaging devices 11A and 11B in FIG.
- when the imaging devices 11e, 11f, 11g, and 11h are not particularly distinguished, they are simply referred to as the imaging device 11.
- two of these face images are used for face authentication.
- when three or more face images are used, they may be three temporally continuous frames or three temporally discontinuous frames.
- the ID reading device 13 is composed of an information reading device such as a card reader.
- the ID reading device 13 reads the identification information (for example, ID information) of the ID card 35 for personal identification possessed by the person 30 to be authenticated, and outputs the read identification information to the face authentication device 20.
- Examples of the ID card 35 include a membership card, an identification card, a driver's license, and a My Number card.
- the face authentication device 20 is configured by a PC (personal computer) or the like, and includes a processor, a memory, a communication interface, and the like.
- the face authentication device 20 performs face authentication using the image information of the captured image acquired from the imaging device 11 and the ID information acquired from the ID reading device 13, and outputs the face authentication result to the control target device 40. Details of the face authentication process will be described later.
- the display device 12 includes a display such as a liquid crystal display.
- the display device 12 displays various data or information when the face authentication device 20 executes face authentication.
- Various data or information includes an operation guidance screen, a captured image, a face detection result, a frame detection result, a face matching result, an authentication result including face authentication, and the like.
- the display device 12 is disposed in a monitoring room or the like that is visible to the administrator or operator of the face authentication system 100.
- the display device 12 may be disposed in the vicinity of the imaging device 11 so that guidance display can be performed for the person 30 to be authenticated.
- FIG. 3 is a block diagram showing an example of a functional configuration realized by the cooperation of the processor and the memory built in the face authentication apparatus 20 according to the first embodiment.
- the face authentication device 20 includes a face recognition unit 21, a face collation unit 24, a validity determination unit 25, and a flat panel detection unit 70.
- the face recognition unit 21, the face collation unit 24, the validity determination unit 25, and the flat panel detection unit 70 are realized, for example, by the processor built into the face authentication device 20, which is a PC, executing a predetermined program stored in the memory in cooperation with the memory.
- the face recognition unit 21 inputs data of captured images GZ1 and GZ2 (that is, captured images GZ1 and GZ2 including face images) as shown in FIG. 5, and detects the face of the person 30 to be authenticated included in each captured image by face recognition processing using a known face recognition technique.
- the face recognition unit 21 acquires face feature information and position information in the captured image (in other words, face arrangement information) by face recognition processing.
- the face feature information may include vital information such as skin color, blood flow, and hair color in addition to the feature points of the face.
- the face collation unit 24 uses the face feature information detected by the face recognition unit 21 and the collation data of the face information of the regular registrant to determine whether or not the face in the captured image detected by the face recognition unit 21 matches the face of the regular registrant.
- the collation data is recorded on, for example, a recording medium built in the face authentication device 20, and includes image data of a regular registrant's face image and feature data of face feature information.
- the administrator or operator of the face authentication system 100 inputs the data included in the ID information recorded on the ID card 35 possessed by the person 30 to be authenticated to the face collation unit 24 of the face authentication device 20 as the collation data.
- the input collation data is stored in a recording medium that can be read by the face collation unit 24.
- based on the result of the face collation by the face collation unit 24, the validity determination unit 25 determines whether or not the face of the person 30 to be authenticated included in the captured image is valid (that is, whether or not the face authentication is approved), and outputs the face authentication result.
- the flat panel detection unit 70 determines whether or not the face image detected in the captured image of the person 30 to be authenticated is a plane (see FIG. 4).
- FIG. 4 is a block diagram illustrating an example of a functional configuration and an outline of operation of the flat panel detection unit 70.
- the flat panel detection unit 70 includes face detection units 71 and 72, a face correspondence determination unit 73, feature point detection units 74 and 75, a feature point matching unit 76, a projective transformation estimation unit 77, a projective transformation unit 78, a correlation calculation unit 79, and a determination unit 80.
- the face detection units 71 and 72 detect the face of the target person included in the corresponding captured images GZ1 and GZ2 in which the authentication target person 30 is captured, and acquire face arrangement information.
- the face arrangement information is feature point detection ranges rc1 and rc2 (see FIG. 5), and is information representing a rectangular range including, for example, eyes, nose and mouth.
- the face detection unit 71 performs face detection on the captured image GZ1 captured by the imaging device 11A.
- the face detection unit 72 performs face detection on the captured image GZ2 captured by the imaging device 11B.
- the feature point detection units 74 and 75 detect facial feature points included in the face image specified by the face arrangement information, for example by a feature point detection process using a known facial feature point detection technique.
- the facial feature points include parts that characterize facial features, such as the positions of the eyes, nose, and mouth, the positions of the nostrils, the positions of the eye corners, the curve of the bangs on the forehead, the distance between the eye corners, the facial contour, and the position of the tip of the nose.
- the feature point detection unit 74 detects a feature point of the face included in the face image fg1 (see FIG. 5) obtained by the face detection unit 71.
- the feature point detection unit 75 detects a feature point of the face included in the face image fg2 (see FIG. 5) obtained by the face detection unit 72.
- the feature point detection units 74 and 75 detect face feature points by, for example, detecting corner portions (in other words, contours) of the face, or by using feature detection algorithms such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features).
- the feature point matching unit 76 performs a feature point matching process using the outputs from the feature point detection units 74 and 75.
- in the feature point matching process, the same feature points are associated between the plurality of feature points p1a, p2a, and p3a detected in the face image fg1 and the plurality of feature points p1b, p2b, and p3b detected in the face image fg2.
- the associations are represented by connecting the feature points p1a and p1b, the feature points p2a and p2b, and the feature points p3a and p3b with line segments c1, c2, and c3, respectively (see FIG. 5).
- the projective transformation estimation unit 77 calculates a projective transformation matrix based on the coordinates of the corresponding feature points included in the face images fg1 and fg2.
- the projective transformation matrix is a matrix for transforming coordinates in order to project a certain plane onto another plane.
- the projective transformation matrix performs coordinate transformation for projecting the face of the face image fg2 onto the face of the face image fg1.
- the projective transformation estimation unit 77 may calculate a projective transformation matrix that projects the face of the face image fg1 onto the face of the face image fg2.
- the projective transformation matrix is represented by a 3 ⁇ 3 matrix, for example.
- each element of the 3 × 3 matrix is calculated by associating the coordinates of at least four pairs of matching feature points.
- the projective transformation estimation unit 77 may calculate the projective transformation matrix by a least squares method or the like using four or more corresponding points (that is, coordinates indicating the positions of feature points that match each other). Thereby, the projective transformation estimation unit 77 can calculate a more accurate projective transformation matrix.
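The least-squares estimation of a 3 × 3 projective transformation matrix from four or more corresponding points can be sketched as follows. This is a minimal direct-linear-transform (DLT) illustration in NumPy, not the patent's implementation; the function names are assumptions for illustration.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate a 3x3 projective transformation (homography) that maps
    src_pts to dst_pts, by least squares over >= 4 matching point pairs."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    assert len(src) >= 4, "at least four matching feature points are required"
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right element is 1

def apply_homography(H, pts):
    """Project 2-D points with the homography (homogeneous coordinates)."""
    pts = np.asarray(pts, dtype=float)
    proj = (H @ np.hstack([pts, np.ones((len(pts), 1))]).T).T
    return proj[:, :2] / proj[:, 2:3]
```

With more than four correspondences the SVD solves the over-determined system in the least-squares sense, which is the effect described above.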
- the projective transformation unit 78 performs projective transformation on the face image fg2 using the projective transformation matrix calculated by the projective transformation estimation unit 77, and obtains a transformed image fg2a (see FIG. 5).
- the correlation calculation unit 79 calculates a correlation value indicating the similarity between the two images based on the face image fg1 and the converted image fg2a.
- for the calculation of the correlation value, template matching is performed in which a position similar to a designated image (for example, a template) is searched for in the other image, using, for example, normalized cross-correlation (NCC).
- as the correlation value, the sum of squared differences of luminance values (SSD: Sum of Squared Differences) or the sum of absolute differences of luminance values (SAD: Sum of Absolute Differences) may also be used.
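As a rough illustration, the three similarity measures mentioned above can be written in NumPy as follows (a sketch only; the patent does not specify an implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: close to 1 for similar images."""
    a = np.asarray(a, float).ravel() 
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ssd(a, b):
    """Sum of squared differences of luminance values: 0 for identical images."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sum(d * d))

def sad(a, b):
    """Sum of absolute differences of luminance values: 0 for identical images."""
    return float(np.sum(np.abs(np.asarray(a, float) - np.asarray(b, float))))
```

Note that unlike NCC, SSD and SAD are dissimilarity measures, so when they are used the flat-panel case would be indicated by a value below a threshold rather than above it.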
- based on the correlation value, the determination unit 80 determines whether or not the face image fg1 is flat (that is, a two-dimensional plane) as when projected on a flat panel, and outputs the determination result.
- when the correlation value is high, for example higher than a preset threshold value, it is determined that neither the face image fg1 nor the face image fg2 has a three-dimensional structure (for example, that of an actual human face), that is, that both are two-dimensional planes.
- when the correlation value is low, for example equal to or less than the preset threshold value, it is determined that at least one of the face image fg1 and the face image fg2 is not a plane but a three-dimensional object with a sense of depth (for example, the face of an actual person).
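The threshold decision of the determination unit 80 reduces to a single comparison. In this sketch the threshold value 0.95 is an illustrative assumption, not a value given in the patent:

```python
def judge_plane(correlation, threshold=0.95):
    """Return True (two-dimensional plane, i.e. a flat-panel spoof is suspected)
    when the correlation value exceeds the preset threshold, and False
    (a three-dimensional face with a sense of depth) otherwise."""
    return correlation > threshold
```

With the example values given later in the description, a flat panel (correlation 0.999) is judged a plane, while a real face (0.8832) is not.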
- FIG. 5 is an explanatory diagram showing an example of an outline of operation for capturing a face image projected on the flat panel 150 and performing plane determination.
- the flat panel 150 is, for example, a display unit (so-called display device such as a display) such as a tablet terminal, a digital photo frame, a smartphone, or a notebook PC (Personal Computer).
- the imaging devices 11A and 11B respectively capture the face image displayed on the flat panel 150 to obtain captured images GZ1 and GZ2. At this time, the imaging device 11A captures an image of the flat panel 150 from the front direction.
- the imaging device 11B captures an image from a direction slightly tilted from the front of the flat panel 150.
- the face detection units 71 and 72 perform face detection on the captured images GZ1 and GZ2, respectively, to obtain feature point detection ranges rc1 and rc2.
- the feature point detection units 74 and 75 detect face feature points p1a, p2a, and p3a and p1b, p2b, and p3b included in the face images fg1 and fg2 that are surrounded by the feature point detection ranges rc1 and rc2, respectively.
- the feature point matching unit 76 associates a plurality of feature points p1a, p2a, and p3a detected in the face image fg1 with a plurality of feature points p1b, p2b, and p3b detected in the face image fg2, and each of them is a line segment. Connect with c1, c2, and c3.
- the projective conversion unit 78 performs projective conversion on the face image fg2 to obtain a converted image fg2a.
- the correlation calculation unit 79 calculates a correlation value between the face image fg1 and the converted image fg2a.
- the correlation value calculated by the correlation calculation unit 79 is 0.999, a high value very close to 1.
- the determination unit 80 determines that the face image fg1 and the face image fg2 are two-dimensional images, that is, flat face images displayed on the flat panel 150.
- the determination unit 80 outputs a superimposed image mg1 in which the face image fg1 and the converted image fg2a are superimposed, and displays the superimposed image mg1 on the display device 12.
- the superimposed image mg1 is displayed like a single face image in which the face image fg1 and the converted image fg2a substantially coincide with each other, with almost no ghosting.
- the face image projected on the flat panel shows a high correlation value. Accordingly, face authentication of a person who impersonates a regular registrant by holding up a flat panel on which the regular registrant's face image is projected is denied.
- FIG. 6 is an explanatory diagram showing an example of an outline of operation for performing plane determination by imaging an actual human face.
- the imaging devices 11A and 11B respectively capture the face of the actual person hm to obtain captured images GZ3 and GZ4. At this time, the imaging device 11A captures an image of the face of the actual person hm from the front direction.
- the imaging device 11B captures an image from a direction slightly tilted from the front of the face of the actual person hm.
- the face detection units 71 and 72 perform face detection on the captured images GZ3 and GZ4, respectively, and obtain feature point detection ranges rc3 and rc4.
- the feature point detection units 74 and 75 detect face feature points p4a, p5a, p6a, and p7a and p4b, p5b, p6b, and p7b included in the face images fg3 and fg4, which are surrounded by the feature point detection ranges rc3 and rc4, respectively.
- the feature point matching unit 76 associates a plurality of feature points p4a, p5a, p6a, and p7a detected in the face image fg3 with a plurality of feature points p4b, p5b, p6b, and p7b detected in the face image fg4, These are connected by line segments c4, c5, c6, and c7.
- the projective conversion unit 78 performs projective conversion on the face image fg4 to obtain a converted image fg4a.
- the correlation calculation unit 79 calculates a correlation value between the face image fg3 and the converted image fg4a.
- the correlation value calculated by the correlation calculation unit 79 is 0.8832, which is lower than in the case of FIG. 5.
- the determination unit 80 determines that the face image fg3 and the face image fg4 are three-dimensional objects with a sense of depth, and are actual face images of the person hm. In addition to the determination result, the determination unit 80 outputs a superimposed image mg2 in which the face image fg3 and the converted image fg4a are superimposed, and displays the superimposed image mg2 on the display device 12.
- the face image fg3 and the converted image fg4a are slightly shifted from each other, and some ghosting appears.
- in the superimposed image mg2, the converted image fg4a is displayed in magenta and the face image fg3 is displayed in green. Since magenta and green have a complementary color relationship, the image portions where they match are displayed in monotone, and the mismatched image portions are displayed in magenta or green. Therefore, the superimposed image mg2 is displayed like two doubled face images in which the green image portion gn and the magenta image portion rd are mixed.
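A complementary-color overlay like the superimposed image mg2 can be composed by putting one grayscale image into the green channel and the other into the red and blue channels. This is a sketch of the display technique only; the channel assignment is an assumption, as the patent does not describe how the overlay is rendered:

```python
import numpy as np

def overlay_magenta_green(img_green, img_magenta):
    """Compose an RGB overlay: where the two grayscale images match, the
    channels are equal and the pixel appears gray (monotone); where they
    differ, the pixel appears green or magenta."""
    g = np.asarray(img_green, float)
    m = np.asarray(img_magenta, float)
    # Red and blue carry one image (magenta); green carries the other.
    return np.stack([m, g, m], axis=-1)
```

Ghosting in the overlay therefore shows up directly as colored fringes, which is what makes the mismatch between fg3 and fg4a visible to an observer.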
- the face image of an actual person shows a low correlation value because the face surface has irregularities. Therefore, face authentication is approved when an actual person's face is imaged by two imaging devices.
- FIG. 7 is a flowchart for explaining an example of a face authentication operation procedure according to the first embodiment in time series.
- the face recognition unit 21 of the face authentication device 20 inputs captured images GZ1 and GZ2 captured by the imaging devices 11A and 11B (S1).
- the face recognition unit 21 detects the face of the target person in the captured images GZ1 and GZ2 (S2).
- the face recognition unit 21 determines whether or not the face of the target person has been detected (S3). When the face is not detected, the process of the face authentication device 20 returns to step S1.
- the face collation unit 24 collates the detected face and authenticates the user (S4).
- the face matching unit 24 determines whether the facial feature information of the target person matches the matching data of the registrant's face (S5).
- when they do not match, the validity determination unit 25 determines invalidity and outputs a rejection message (that is, an NG message) as the authentication result of face authentication (S9). Thereafter, the face authentication device 20 ends the process shown in FIG. 7.
- when they match, the flat panel detection unit 70 performs plane determination of the face image (S6).
- the determination unit 80 determines whether or not the face image is a plane (S7).
- when the face image is determined to be a plane in step S7, the validity determination unit 25 determines invalidity in step S9 and outputs rejection as the authentication result of face authentication. When the face image is not a plane in step S7, the validity determination unit 25 determines validity and outputs an approval message (that is, an OK message) as the authentication result of face authentication (S8). Thereafter, the face authentication apparatus 20 ends this process.
- FIG. 8 is a flowchart for explaining an example of the operation procedure of the face image plane determination according to step S6 in time series.
- the face detection units 71 and 72 of the flat panel detection unit 70 each detect the face image in the corresponding captured image.
- the feature point detection units 74 and 75 each detect feature points in the face image (S11).
- the feature point matching unit 76 performs feature point matching between two face images (S12).
- the projective transformation estimation unit 77 calculates a projective transformation matrix based on the result of feature point matching (S13).
- the projective transformation unit 78 performs projective transformation on one of the two face images, and acquires a transformed image (S14).
- the correlation calculation unit 79 calculates a correlation value between the face image not subjected to projective conversion and the converted image (S15).
- the determination unit 80 determines whether the face image is a plane based on the correlation value (S16). Thereafter, the face authentication device 20 ends the process shown in FIG. 8, and the process of the face authentication device 20 proceeds to step S7 in FIG. 7.
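The plane determination of steps S12 to S16 can be sketched end to end as follows. This is a simplified NumPy illustration under stated assumptions (nearest-neighbour inverse warping, NCC as the correlation value, an illustrative threshold of 0.95); the patent does not specify these implementation details.

```python
import numpy as np

def fit_homography(src, dst):
    """S13: least-squares projective transformation (DLT) from >= 4 matched points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    H = np.linalg.svd(np.array(rows, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def warp(img, H, shape):
    """S14: warp img with H by inverse mapping (nearest neighbour)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ pts  # map each output pixel back to the input image
    sx = np.round(src[0] / src[2]).astype(int).clip(0, img.shape[1] - 1)
    sy = np.round(src[1] / src[2]).astype(int).clip(0, img.shape[0] - 1)
    return img[sy, sx].reshape(h, w)

def plane_determination(img1, img2, pts1, pts2, threshold=0.95):
    """S12-S16: matched points -> homography -> warp -> correlate -> judge."""
    H = fit_homography(pts2, pts1)    # project image 2 onto image 1 (S13)
    conv = warp(img2, H, img1.shape)  # converted image (S14)
    a = img1.astype(float).ravel(); a -= a.mean()
    b = conv.astype(float).ravel(); b -= b.mean()
    corr = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # NCC (S15)
    return corr, corr > threshold     # True: plane, spoofing suspected (S16)
```

For a genuinely planar scene, the homography explains the whole image, so the warped view aligns almost perfectly with the other view and the correlation stays high; for a real three-dimensional face it cannot, and the correlation drops.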
- the flat panel detection unit 70 (an example of the image input unit) of the face authentication device 20 inputs a captured image GZ1 (an example of a first captured image) and a captured image GZ2 (an example of a second captured image) in which the same person is captured by the two imaging devices 11A and 11B (an example of a plurality of cameras).
- the face detection units 71 and 72 detect at least one face image fg1 or fg2 (face region) of the captured image GZ1 and the captured image GZ2, respectively.
- the determination unit 80 determines whether the face images fg1 and fg2 are planes based on feature points (face information) corresponding to the two face images fg1 and fg2 and the face images fg1 and fg2. Determine whether or not.
- the validity determination unit 25 determines the validity of the face authentication result based on the determination result of whether or not the face images fg1 and fg2 are planes.
- the face authentication device 20 or the face authentication system 100 can detect, with high accuracy, image-based impersonation that may be attempted at the time of authentication of the target person, and can effectively suppress malicious acts of impersonation by a malicious third party. For example, even if a regular registrant's face image (still image or moving image) is displayed on a display unit such as a tablet terminal and impersonation is attempted, the face image is determined to be a plane, that is, different from the face image of a human face, which is a three-dimensional object. Therefore, face authentication is denied.
- the face matching unit 24 of the face authentication device 20 collates the face feature information of the detected face image (an example of face information corresponding to the face area) with the registered collation data (an example of face collation data).
- the validity determination unit 25 determines the validity of the face authentication result of the target person based on the face matching result and the determination result of whether or not the face image is a plane. In this way, the face authentication device 20 or the face authentication system 100 can reduce the number of plane determinations by performing plane determination only on face images that have passed face matching.
- the face authentication device 20 includes an ID reading device 13 (an example of an ID acquisition unit) that acquires person identification information (an example of ID information).
- the validity determination unit 25 determines the validity of the authentication result based on the determination result of whether or not the face image is a plane and on the ID information. As described above, the face authentication device 20 or the face authentication system 100 can improve the accuracy of face authentication by using the ID information.
- the flat panel detection unit 70 determines whether or not the face image is a plane according to the correlation value between one face image fg1 and the converted image fg2a (an example of the projective conversion image) obtained by projectively converting the other face image fg2 into the imaging direction of the face image fg1. Because the correlation value lets the face authentication device 20 or the face authentication system 100 judge planarity objectively, the result of the plane determination leaves no room for doubt and the determination accuracy increases.
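The projective-conversion check above rests on a standard multi-view geometry fact: points lying on a single plane map between two camera views by one homography, while off-plane points (such as the nose tip of a real, three-dimensional face) do not. The disclosure compares a correlation value between one image and the projectively converted other image; the sketch below is a simplified stand-in that uses the best-fit homography's reprojection residual instead of image correlation. All function names are illustrative, not taken from the disclosure.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 projective transform H mapping src -> dst
    (direct linear transform over >= 4 point correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def planarity_residual(src, dst):
    """Mean reprojection error of the best-fit homography.
    Near zero for landmarks on a flat photo; clearly positive for a
    real 3-D face, whose off-plane landmarks break the homography."""
    h = fit_homography(src, dst)
    src_h = np.hstack([np.asarray(src, float), np.ones((len(src), 1))])
    proj = (h @ src_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.linalg.norm(proj - np.asarray(dst, float), axis=1).mean())
```

A small residual suggests all landmarks lie on one plane (a displayed photo); a large residual is consistent with a genuine three-dimensional face.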
- the plurality of imaging devices 11 are arranged at different heights in the vertical direction.
- the optical axes of the plurality of imaging devices 11 are parallel to each other.
- the imaging devices 11 can reliably capture two frames of the face image of the authentication target person 30 regardless of the height of the authentication target person 30.
- by making the optical axes of the plurality of imaging devices 11 parallel, it is possible to prevent the face image from being greatly displaced in the vertical direction. Further, since the captured images are taken from the same direction, the processing for calculating their correlation values can be reduced and the calculation accuracy of the correlation values increases.
- FIG. 9 is a flowchart illustrating an example of another face authentication operation procedure according to the first embodiment in time series.
- steps identical to those of the face authentication process shown in FIG. 7 are denoted by the same or corresponding step numbers, and their description is simplified or omitted; only the differences are described.
- the face authentication device 20 performs plane determination first after a face image is detected. Thereafter, face comparison of the target person is performed.
- the face recognition unit 21 detects a face image in step S3.
- the flat panel detection unit 70 performs plane determination of the face image (S6).
- the determination unit 80 determines whether or not the face image is a plane (S7). In the case of a plane, the determination unit 80 determines whether or not the number of times determined to be a plane is a predetermined number or more (S7A). When the number is less than the predetermined number, the face authentication device 20 returns to the process of step S1.
- when the number of times determined to be a plane reaches the predetermined number, the validity determination unit 25 determines invalidity and outputs a message indicating rejection (that is, a message indicating NG) as the face authentication result (S9). Thereafter, the face authentication apparatus 20 ends the process shown in FIG. 9.
- the face collation unit 24 collates the detected face and authenticates the person (S4).
- the face matching unit 24 determines whether or not the feature information of the target person's face matches the matching data of the registrant's face (S5). If it does not match the matching data, the validity determination unit 25 determines invalidity in step S9 and outputs denial as the face authentication result. If it matches the matching data in step S5, the validity determination unit 25 determines validity and outputs a message indicating approval (that is, a message indicating OK) as the face authentication result (S8). Thereafter, the face authentication apparatus 20 ends the process shown in FIG. 9.
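The flow of steps S1 to S9 described above (plane determination first, face matching only for non-planar faces, denial after repeated planar verdicts) can be sketched as follows. The function name and the callable interfaces are hypothetical stand-ins for the units of the face authentication device 20, not names from the disclosure.

```python
def authenticate(frame_pairs, detect_face, is_planar, matches_registrant,
                 plane_limit=3):
    """Sketch of the FIG. 9 flow: plane determination runs before face
    matching, and repeated planar verdicts (step S7A) end in denial."""
    planar_count = 0
    for gz1, gz2 in frame_pairs:                  # S1: next captured pair
        face = detect_face(gz1, gz2)              # S3: face detection
        if face is None:
            continue
        if is_planar(face):                       # S6/S7: plane determination
            planar_count += 1
            if planar_count >= plane_limit:       # S7A: planar too many times
                return "NG"                       # S9: deny
            continue                              # otherwise retry from S1
        # S4/S5: face matching is reached only for non-planar faces
        return "OK" if matches_registrant(face) else "NG"   # S8 / S9
    return "NG"                                   # no usable frame: deny
```

Ordering plane determination first keeps the heavier face matching step from running on spoofed (planar) inputs at all.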
- face matching can be performed only when it is determined that the face image is not flat. Therefore, it is possible to reduce the number of face matching operations that require a large processing load.
- the face matching unit 24 checks whether or not the face feature information of the target person matches the registrant's face matching data.
- the validity determination unit 25 determines the validity of the authentication result of the person's face information based on the collation result. Thereby, the face authentication apparatus 20 or the face authentication system 100 can omit the face authentication process when the face image is determined to be a plane, and can reduce the processing load.
- the validity determination unit 25 denies the validity of the verification result of whether or not the person is genuine when the face image of the face area has been determined to be a plane a predetermined number of times. Thereby, the face authentication device 20 or the face authentication system 100 rejects a face only after it has been determined to be a plane not just once but multiple times, avoiding an erroneous determination based on a single plane verdict, and it can output denial as the face authentication result while omitting the face collation processing that carries a large processing load. Therefore, the processing load of the face authentication device 20 or the face authentication system 100 can be reduced.
- FIG. 10 is a diagram illustrating an arrangement example of the other three imaging devices 11i, 11j, and 11k according to the first embodiment.
- the three imaging devices 11i, 11j, and 11k are arranged in a direction perpendicular to the plane of the passage through which the person 30 to be authenticated passes (that is, a direction perpendicular to the horizontal plane).
- the three imaging devices 11i, 11j, and 11k image the face existence range SA of the authentication target person 30 so that the angles of view overlap in the passage width wd. In this case, the face of a person such as a short child is imaged by the imaging device 11k installed at a low position and the imaging device 11i installed at a high position.
- the face of a person having a general height is imaged by the three imaging devices 11i, 11j, and 11k.
- the face of a tall person is imaged by the imaging device 11i installed at a high position and the imaging device 11j installed at an intermediate position.
- the optical axes of the three imaging devices 11i, 11j, and 11k are set arbitrarily according to the face existence range SA and are not parallel. Therefore, whatever the height of the person passing through the passage, a face image is captured by at least two imaging devices.
- the plurality of imaging devices are arranged at different heights in the vertical direction, and the optical axes of the plurality of imaging devices are not parallel to each other. Accordingly, the face image of the target person can be reliably captured by two or more imaging devices regardless of the height of the subject.
- by making the optical axes of a plurality of cameras non-parallel, it is possible to easily capture face images of target persons having different heights. Therefore, the number of imaging devices installed to capture two face images can be reduced.
- the same components as those in the first embodiment are denoted by the same reference numerals, and the description thereof is simplified or omitted.
- FIG. 11 is a diagram illustrating a functional configuration example realized by cooperation of a processor and a memory built in the face authentication apparatus 20A according to the second embodiment.
- the face authentication apparatus 20A includes the frame detection unit 22 and the arrangement determination unit 23.
- the face recognition unit 21, the frame detection unit 22, the arrangement determination unit 23, the face collation unit 24, the validity determination unit 25, and the flat panel detection unit 70 are realized by, for example, a processor built in the face authentication device 20A, which is a PC (Personal Computer), executing a predetermined program stored in the memory in cooperation with that memory.
- the frame detection unit 22 uses the captured image and the background image to detect a linear frame in the input image obtained by removing the background image from the captured image.
- the frame detection unit 22 acquires frame position information (in other words, frame arrangement information) when there is a frame of the display unit in the captured image.
- the arrangement determination unit 23 determines, based on the face arrangement information acquired by the face recognition unit 21 and the frame arrangement information, whether or not the face is located within the frame of the display unit. Through this placement determination, the placement determination unit 23 can detect that a display unit (described above) is present in the captured image and that a face image is displayed within the frame of the display unit.
- the validity determination unit 25 determines whether the face of the target person shown in the captured image is valid, that is, whether the face authentication result is approval or denial. The validity determination unit 25 outputs the authentication result to the control target device 40.
- FIG. 12 is an explanatory diagram showing an example of an outline of the operation of the frame detection process performed by the frame detection unit 22.
- spoofing is performed by playing back an image (still image or moving image) showing the face of a legitimate registrant on a portable display unit.
- the imaging devices 11A and 11B each image the display unit.
- the captured image captured by one of the imaging devices 11A and 11B is set as an input image 51 input to the frame detection unit 22.
- the input image 51 includes a housing 53 of the display unit and a person 54 displayed on the screen of the display unit.
- the frame detection unit 22 performs a known contour extraction (for example, edge detection) process on the input image 51 to generate an edge image 52.
- the edge image 52 includes a contour 55 of the housing of the display unit and a contour 56 of a person displayed on the screen of the display unit.
- the frame detection unit 22 performs straight line detection processing on the edge image 52 and extracts a straight line portion (line segment) 57 in the image.
- as the straight line detection processing, for example, straight line detection using the Hough transform or the like is used.
- the frame detection unit 22 determines whether the straight line portions 57 extracted from the image include two parallel straight lines and straight lines perpendicular thereto, extracts a frame 58 surrounded by four sides, and acquires the frame coordinates. Note that the frame detection unit 22 may, based on a predetermined condition, determine a region surrounded by line segments to be a frame and extract it as the frame 58 even when the four sides are not closed, for example a rectangular set of line segments with a gap, or a group of orthogonal line segments forming only three or two sides.
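As a rough illustration of the frame extraction step (the disclosure itself uses Hough-transform line detection), the toy sketch below finds an axis-aligned rectangular frame in a binary edge map by looking for two long horizontal edge runs and two long vertical edge runs; real input would need rotation-tolerant line detection. The function name and threshold are illustrative assumptions.

```python
import numpy as np

def find_frame(edges, min_frac=0.5):
    """Toy frame extraction on a binary edge map: look for two long
    horizontal lines plus two long vertical lines (the 'two parallel
    straight lines and a straight line perpendicular thereto') and
    return the bounding frame as (top, left, bottom, right)."""
    h, w = edges.shape
    long_rows = np.where(edges.sum(axis=1) >= min_frac * w)[0]
    long_cols = np.where(edges.sum(axis=0) >= min_frac * h)[0]
    if len(long_rows) >= 2 and len(long_cols) >= 2:
        return (int(long_rows.min()), int(long_cols.min()),
                int(long_rows.max()), int(long_cols.max()))
    return None  # no frame candidate in this edge map
```

When a frame is found, its coordinates play the role of the frame 58 passed on to the placement determination.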
- the frame detection unit 22 may perform straight line detection and frame extraction after removing the background from the captured image in the frame detection process.
- the accuracy of frame detection can be improved by removing the background portion from the captured image.
- FIG. 13 is an explanatory diagram showing an operation outline example of the arrangement determination processing by the arrangement determination unit 23.
- the arrangement determination unit 23 acquires face arrangement information of the target person obtained as one of the face detection results by the face recognition unit 21.
- the face arrangement information includes a face frame 68 indicating the contour position of the face in the captured image.
- the arrangement determination unit 23 acquires frame arrangement information of the display unit obtained as one of the frame detection results by the frame detection unit 22.
- the frame arrangement information includes a frame 58 indicating the contour position of the display unit detected in the captured image.
- the placement determination unit 23 compares the positions of the face frame 68 and the frame 58 and determines whether or not the face frame 68 exists in the area inside the frame 58, that is, whether or not the target person's face is located within the frame of the display unit. As shown in FIG. 13, when the face frame 68 is inside the frame 58, the arrangement determination unit 23 outputs an arrangement determination result indicating that a face image exists in the frame.
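The placement determination reduces to a rectangle-containment test between the face frame 68 and the display-unit frame 58. A minimal sketch, with a hypothetical (left, top, right, bottom) box convention:

```python
def face_inside_frame(face_frame, display_frame):
    """Placement determination: True when the face frame (68) lies
    entirely inside the display-unit frame (58). Boxes are
    (left, top, right, bottom) in image coordinates."""
    fl, ft, fr, fb = face_frame
    dl, dt, dr, db = display_frame
    return dl <= fl and dt <= ft and fr <= dr and fb <= db
```

A True result corresponds to the "face image in the frame" verdict that the validity determination unit treats as a spoof.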
- when the placement determination result by the placement determination unit 23 indicates that the face image is present in the frame, the validity determination unit 25 regards the face image detected in the captured image as a face image displayed on the display unit and determines that the verification result is invalid.
- the validity determination unit 25 outputs a denial to the control target device 40 as a result of the face authentication.
- FIG. 14 is a flowchart illustrating an example of a face authentication operation procedure according to the second embodiment in time series.
- the description of the same processing is simplified or omitted.
- the frame detection unit 22 detects a frame in the captured image (S7B).
- the arrangement determination unit 23 determines the arrangement of the face position by face detection and the frame position by frame detection (S7C).
- the arrangement determining unit 23 detects a frame in the captured image and determines whether or not a face exists inside the frame, that is, whether or not a frame surrounding the outside of the target person's face exists (S7D).
- the arrangement determination unit 23 passes the determination result of the frame existence to the validity determination unit 25.
- if, as a result of the frame presence determination, there is a frame surrounding the outside of the target person's face, the validity determination unit 25 determines that the face authentication is invalid and outputs rejection to the control target device 40 as the face authentication result. If there is no frame surrounding the outside of the target person's face, the validity determination unit 25 determines that the face authentication is valid and outputs approval to the control target device 40 as the face authentication result.
- in the above, the frame detection processing of steps S7B to S7D is performed when it is determined in step S7 that the face image is not a plane. Alternatively, frame detection processing similar to steps S7B to S7D may be performed when the face matches the collation data.
- the face detection process and the frame detection process may be executed in parallel, or one may be performed after the other in series; the execution timing and order of the processes are not limited.
- by using the result of the frame detection by the arrangement determination unit 23 in the face authentication determination, in addition to the face matching and plane determination processing, the face authentication of the target person can be performed with high accuracy.
- face authentication can be denied if the captured image including the face image includes a frame image.
- if the target person tilts the orientation of the tablet terminal in small increments, or continuously switches the face image displayed on the tablet terminal between 3D and 2D display, while being captured by the plurality of imaging devices, the face image may be determined to be non-planar. Even in such a case, when a frame image is present in the captured image, it is determined that the face image is shown on the tablet terminal, and the face authentication of the target person can be denied.
- the present disclosure is useful as a face authentication device, a face authentication method, and a face authentication system that accurately detect image-based impersonation that may be attempted at the time of authentication of a target person and effectively suppress fraudulent acts by a malicious third party using impersonation.
Abstract
In this face authentication system, a flat panel detection unit receives a first captured image and a second captured image respectively obtained by two imaging devices capturing the same face. Two face detection units each detect a face image in the first captured image and/or the second captured image. A determination unit determines whether or not the two face images are in a plane, on the basis of the two face images and the feature points corresponding to them. A validity determination unit determines the validity of a face authentication result on the basis of the determination result of whether or not the two face images are in a plane.
Description
The present disclosure relates to a face authentication device, a face authentication method, and a face authentication system that authenticate a target person using an image of a face portion in a captured image captured by a camera.
Patent Literature 1 discloses an information processing apparatus that receives a face image sequence for a predetermined time and determines whether or not a face included in the face image sequence is an impersonation, based on temporal changes in predetermined color information extracted from the face image sequence.
However, with the configuration of Patent Literature 1, when an unauthorized person uses a portable display terminal such as a tablet terminal to display an image showing the face of an authorized registrant and impersonates that registrant, the fraud cannot be detected even if face authentication using the image is performed, and the authentication result may be approval.
The present disclosure has been devised in view of the above-described conventional situation, and an object thereof is to provide a face authentication device, a face authentication method, and a face authentication system that detect, with high accuracy, image-based impersonation that may be attempted at the time of authentication of a target person and effectively suppress fraudulent acts by a malicious third party using impersonation.
The present disclosure provides a face authentication device including: an image input unit that receives a first captured image and a second captured image in which the same person is captured by each of a plurality of cameras; a face detection unit that detects a face area in at least one of the first captured image and the second captured image; a plane determination unit that determines whether or not the face image of the face area is a plane, based on face information corresponding to the face area and the first and second captured images; and a validity determination unit that determines the validity of an authentication result of the person's face information, based on the determination result of whether or not the face image of the face area is a plane.
The present disclosure also provides a face authentication method in a face authentication device, including the steps of: receiving a first captured image and a second captured image in which the same person is captured by each of a plurality of cameras; detecting a face area in at least one of the first captured image and the second captured image; determining whether or not the face image of the face area is a plane, based on face information corresponding to the face area and the first and second captured images; and determining the validity of an authentication result of the person's face information, based on the determination result of whether or not the face image of the face area is a plane.
The present disclosure also provides a face authentication system in which a plurality of cameras and a face authentication device are communicably connected to each other. The plurality of cameras output a first captured image and a second captured image, each capturing the same person, to the face authentication device. The face authentication device receives the first captured image and the second captured image, detects a face area in at least one of them, determines whether or not the face image of the face area is a plane based on face information corresponding to the face area and the first and second captured images, and determines the validity of an authentication result of the person's face information based on that determination result.
According to the present disclosure, it is possible to detect, with high accuracy, image-based impersonation that may be attempted at the time of authentication of a target person, and to suppress fraudulent acts by a malicious third party using impersonation.
Hereinafter, embodiments that specifically disclose the face authentication device, the face authentication method, and the face authentication system according to the present disclosure will be described in detail with reference to the drawings as appropriate. However, more detailed description than necessary may be omitted. For example, detailed descriptions of already well-known matters and repeated descriptions for substantially the same configuration may be omitted. This is to avoid the following description from becoming unnecessarily redundant and to facilitate understanding by those skilled in the art. The accompanying drawings and the following description are provided to enable those skilled in the art to fully understand the present disclosure, and are not intended to limit the claimed subject matter.
(Embodiment 1)
FIG. 1 is a diagram illustrating a system configuration example of a face authentication system 100 according to the first embodiment. The face authentication system 100 performs face authentication of a person to be authenticated by collating face image data obtained by capturing the person with an imaging device against face image data of a legitimate person registered in advance. The face authentication system 100 outputs the face authentication result of the person to be authenticated to the control target device 40.
The control target device 40 is, for example, a usage management device used by a service provider such as a car sharing operator, or an entrance/exit management device for a security area, but is not limited thereto.
When the control target device 40 is a car sharing use management device, the use management device permits the use of a car share target vehicle when the user is a registered regular user as a face authentication result. On the other hand, when the user is not a registered regular user, the usage management device refuses to use the vehicle to be shared. When permitting or denying the use of a vehicle, the usage management device sends an authentication result indicating whether or not the vehicle can be used to an electronic control unit (ECU: Electronic Control Unit) mounted on the vehicle, for example. The electronic control unit sets the start (for example, ignition on) of the own vehicle to be valid or invalid based on the authentication result sent from the usage management device.
When the control target device 40 is a security area entrance/exit management device, the entrance/exit management device permits a person approved by the face authentication result to pass into the security area. On the other hand, if the face authentication result indicates that the person is not permitted to enter or exit, the entrance/exit management device blocks the person from entering the security area, or notifies the administrator or operator of the face authentication system 100 that an intruder has been detected.
The face authentication system 100 includes a face authentication device 20, a plurality of (for example, two) imaging devices 11A and 11B, a display device 12, and an ID reading device 13. When there is no need to distinguish between the two imaging devices 11A and 11B, they are simply referred to as the imaging device 11. The two imaging devices 11A and 11B are arranged at positions where, for example, the face of a person walking in a passage can be simultaneously imaged from different directions. The imaging device 11 includes an imaging lens, an imaging device, an image signal processing circuit, a communication interface, and the like. The imaging device 11 captures an imaging region including the authentication target person 30 as a subject, and outputs the captured image to the face authentication device 20. Note that the number of imaging devices is not limited to two, and may be any number. The imaging device 11 may be a monitoring camera for monitoring a predetermined area, for example.
FIG. 2 is a diagram illustrating an arrangement example of the four imaging devices 11e, 11f, 11g, and 11h. The four imaging devices 11e, 11f, 11g, and 11h are arranged in a direction perpendicular to the surface of the passage through which the person 30 to be authenticated passes (that is, a direction perpendicular to the horizontal plane). The four imaging devices 11e, 11f, 11g, and 11h image the presence range SA of the face of the person 30 to be authenticated so that the angles of view overlap in the passage width wd. The optical axes of the four imaging devices 11e, 11f, 11g, and 11h are all parallel, for example.
For example, the face of a person such as a short child is imaged by the two imaging devices 11e and 11f installed at low positions. In this case, the two imaging devices 11e and 11f are used as the imaging devices 11A and 11B in FIG. 1. Similarly, the face of a person such as a tall adult is imaged by the two imaging devices 11g and 11h installed at high positions, which are then used as the imaging devices 11A and 11B in FIG. 1. When the imaging devices 11e, 11f, 11g, and 11h are not particularly distinguished, they are referred to as the imaging device 11. Further, when three or more face images of the target person are captured by three or more imaging devices, two of these face images are used for face authentication. Note that the three or more face images may be three temporally continuous frames or three temporally discontinuous frames.
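The selection of two usable face images out of three or more simultaneous captures can be sketched as follows; the dictionary layout of a capture record is an assumption for illustration, not part of the disclosure.

```python
def pick_face_pair(captures):
    """From three or more simultaneous captures, select the first two
    frames in which a face was actually detected; these two are handed
    to the plane determination and face matching stages."""
    with_face = [c for c in captures if c.get("face") is not None]
    return (with_face[0], with_face[1]) if len(with_face) >= 2 else None
```

A person's height then simply determines which pair of cameras contributes the two usable frames.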
The ID reading device 13 is composed of an information reading device such as a card reader. The ID reading device 13 reads the identification information (for example, ID information) of the ID card 35 for personal identification carried by the authentication target person 30, and outputs the read identification information to the face authentication device 20. Examples of the ID card 35 include a membership card, an identification card, a driver's license, and a My Number card.
The face authentication device 20 is configured by a PC (personal computer) or the like, and includes a processor, a memory, a communication interface, and the like. The face authentication device 20 performs face authentication using the image information of the captured image acquired from the imaging device 11 and the ID information acquired from the ID reading device 13, and outputs the face authentication result to the control target device 40. Details of the face authentication process will be described later.
The display device 12 includes a display device such as a liquid crystal display. The display device 12 displays various data and information while the face authentication device 20 executes face authentication, including an operation guidance screen, captured images, face detection results, frame detection results, face matching results, and authentication results including face authentication. The display device 12 is placed in a monitoring room or the like that is visible to the administrator or operator of the face authentication system 100. The display device 12 may also be placed near the imaging device 11 so that guidance can be displayed to the person 30 to be authenticated.
FIG. 3 is a block diagram showing an example of a functional configuration realized by the cooperation of the processor and the memory built into the face authentication device 20 according to Embodiment 1. The face authentication device 20 includes a face recognition unit 21, a face matching unit 24, a validity determination unit 25, and a flat panel detection unit 70. These units are realized by the processor executing a predetermined program stored in the memory, through the cooperation of the processor and the memory built into the face authentication device 20, which is, for example, a PC.
The face recognition unit 21 receives the data of the captured images GZ1 and GZ2 (that is, captured images containing a face portion) sent from the two imaging devices, as shown for example in FIG. 5, and detects the face of the person 30 to be authenticated in each captured image by face recognition processing using a known face recognition technique. Through this processing, the face recognition unit 21 acquires face feature information and position information within the captured image (in other words, face arrangement information). The face feature information may include not only facial feature points but also vital information such as skin color, blood flow, and hair color.
The face matching unit 24 uses the face feature information detected by the face recognition unit 21 and the matching data of the registered user's face information to determine whether the face in the captured image detected by the face recognition unit 21 matches the face of the registered user. The matching data is recorded on, for example, a recording medium built into the face authentication device 20, and includes image data of the registered user's face image and feature data of the face feature information. The administrator or operator of the face authentication system 100 inputs, as matching data, for example the matching data contained in the ID information recorded on the ID card 35 carried by the person 30 to be authenticated, into the face matching unit 24 of the face authentication device 20. The input matching data is stored on a recording medium readable by the face matching unit 24.
The validity determination unit 25 determines, based on the result of face matching by the face matching unit 24, whether the face of the person 30 to be authenticated in the captured image is valid (that is, whether the face authentication is approved or denied), and outputs the face authentication result.
The flat panel detection unit 70 determines whether the face image detected in the captured image of the person 30 to be authenticated is a plane (see FIG. 4).
FIG. 4 is a block diagram showing an example of the functional configuration and operation outline of the flat panel detection unit 70. The flat panel detection unit 70 includes face detection units 71 and 72, a face correspondence determination unit 73, feature point detection units 74 and 75, a feature point matching unit 76, a projective transformation estimation unit 77, a projective transformation unit 78, a correlation calculation unit 79, and a determination unit 80.
The face detection units 71 and 72 each detect the face of the target person in the corresponding captured images GZ1 and GZ2 in which the person 30 to be authenticated appears, and acquire face arrangement information. The face arrangement information consists of the feature point detection ranges rc1 and rc2 (see FIG. 5), that is, information representing rectangular regions containing, for example, the eyes, nose, and mouth. The face detection unit 71 performs face detection on the captured image GZ1 captured by the imaging device 11A, and the face detection unit 72 performs face detection on the captured image GZ2 captured by the imaging device 11B.
The feature point detection units 74 and 75 detect facial feature points in the face images specified by the face arrangement information, for example by feature point detection processing using a known facial feature point detection technique. Facial feature points are locations that characterize parts of the face, such as the positions of the eyes, nose, and mouth, the corners of the mouth, the positions of the nostrils that convey depth, the outer corners of the eyes, the curve of the bangs over the forehead, the distance between the outer eye corners, the contour of the face, and the position of the protruding tip of the nose. The feature point detection unit 74 detects the facial feature points in the face image fg1 (see FIG. 5) obtained by the face detection unit 71, and the feature point detection unit 75 detects the facial feature points in the face image fg2 (see FIG. 5) obtained by the face detection unit 72. Here, as an example, the positions of both corners of the mouth, the positions of the nostrils, and the curve of the bangs over the forehead are used as feature points. The feature point detection units 74 and 75 may also detect facial feature points by detecting corner portions (in other words, contours) of the face, or by using a known feature point extraction algorithm such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features).
The feature point matching unit 76 performs feature point matching processing using the outputs of the feature point detection units 74 and 75. In the feature point matching processing, the feature points p1a, p2a, and p3a detected in the face image fg1 are associated with the corresponding feature points p1b, p2b, and p3b detected in the face image fg2. The correspondence between feature points is expressed by connecting, in the face images fg1 and fg2, the feature points p1a and p1b, p2a and p2b, and p3a and p3b with the line segments c1, c2, and c3, respectively (see FIG. 5).
The projective transformation estimation unit 77 calculates a projective transformation matrix based on the coordinates of corresponding feature points in the face images fg1 and fg2. A projective transformation matrix is a matrix that transforms coordinates so as to project one plane onto another plane. Here, the projective transformation matrix performs the coordinate transformation that projects the plane of the face image fg2 onto the plane of the face image fg1. Alternatively, the projective transformation estimation unit 77 may calculate a projective transformation matrix that projects the plane of the face image fg1 onto the plane of the face image fg2. The projective transformation matrix is represented by, for example, a 3 × 3 matrix, and each of its elements is calculated by associating the coordinates of at least four matching feature points. The projective transformation estimation unit 77 may also calculate the projective transformation matrix by the least squares method or the like using four or more corresponding points (that is, coordinates indicating the positions of mutually matching feature points), which allows it to calculate a more accurate projective transformation matrix.
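As a rough illustration of the calculation described above, the eight unknown elements of a 3 × 3 projective transformation matrix (with the bottom-right element fixed to 1) can be obtained from exactly four point correspondences by solving a linear system. The following is a minimal sketch in plain Python; the function names are illustrative, and a practical implementation would use a robust least-squares or RANSAC estimator over many correspondences rather than a minimal four-point solve.

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

def estimate_homography(src_pts, dst_pts):
    """3x3 projective transformation mapping 4 src points to 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # u * (h31*x + h32*y + 1) = h11*x + h12*y + h13
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        # v * (h31*x + h32*y + 1) = h21*x + h22*y + h23
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(A, b) + [1.0]  # h33 fixed to 1
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, pt):
    """Project a 2-D point using homogeneous coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

For example, four correspondences related by a pure translation of (+1, +2) yield a matrix that maps any further point by the same translation.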
The projective transformation unit 78 performs projective transformation on the face image fg2 using the projective transformation matrix calculated by the projective transformation estimation unit 77, and obtains a transformed image fg2a (see FIG. 5).
The correlation calculation unit 79 calculates a correlation value indicating the similarity between the two images, based on the face image fg1 and the transformed image fg2a. In calculating the correlation value, template matching is performed, which searches an image for a position resembling a designated image (for example, a template). Here, normalized cross-correlation (NCC: Normalized Cross-Correlation) is used to calculate the correlation value. Alternatively, the sum of squared differences of luminance values (SSD: Sum of Squared Differences) or the sum of absolute differences of luminance values (SAD: Sum of Absolute Differences) may be used for the calculation of the correlation value.
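As an illustration, the normalized cross-correlation between two equally sized grayscale patches can be computed as follows. This is a minimal sketch over flat lists of luminance values; a real implementation would operate on 2-D image arrays and typically also perform the template search over candidate positions.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equally sized luminance patches.

    Returns a value in [-1, 1]; identical patches (up to a uniform
    brightness offset) yield 1.0.
    """
    assert len(a) == len(b) and len(a) > 1
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

Because the means are subtracted and the result is normalized, NCC is insensitive to uniform brightness changes between the two captures, which is one reason it is a common choice for this kind of comparison.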
Based on the correlation value calculated by the correlation calculation unit 79, the determination unit 80 determines whether the face image fg1 is flat (that is, a two-dimensional plane), as if displayed on a flat panel, and outputs the determination result. When the correlation value is high, for example higher than a preset threshold, the face images fg1 and fg2 are both determined to be planes (that is, two-dimensional planes lacking the three-dimensional structure of an actual human face). Conversely, when the correlation value is low, for example equal to or lower than the preset threshold, at least one of the face images fg1 and fg2 is determined to be not a plane but a three-dimensional object (for example, a three-dimensional object with depth, like an actual human face).
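The threshold comparison above reduces to a one-line decision function. The threshold value 0.95 below is purely illustrative; the description only states that the threshold is preset, not what value it takes.

```python
def is_flat(correlation, threshold=0.95):
    """Plane determination from a correlation value.

    A correlation above the (illustrative) threshold means both face
    images behave like the same plane under projective transformation,
    i.e. a flat-panel spoof; at or below it, a 3-D face is assumed.
    """
    return correlation > threshold
```

With this threshold, the correlation values appearing later in the description would be classified as flat (0.999) and not flat (0.8832), respectively.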
FIG. 5 is an explanatory diagram showing an example of the operation outline when a face image displayed on the flat panel 150 is captured and subjected to plane determination. The flat panel 150 is a display unit (a so-called display device) of, for example, a tablet terminal, a digital photo frame, a smartphone, or a notebook PC (personal computer). The imaging devices 11A and 11B each capture the face image displayed on the flat panel 150 and obtain the captured images GZ1 and GZ2. Here, the imaging device 11A captures the flat panel 150 from the front, while the imaging device 11B captures it from a direction tilted slightly from the front.
The face detection units 71 and 72 perform face detection on the captured images GZ1 and GZ2, respectively, and obtain the feature point detection ranges rc1 and rc2. The feature point detection units 74 and 75 detect the facial feature points p1a, p2a, p3a and p1b, p2b, p3b in the face images fg1 and fg2, which are enclosed by the feature point detection ranges rc1 and rc2, respectively.
The feature point matching unit 76 associates the feature points p1a, p2a, and p3a detected in the face image fg1 with the feature points p1b, p2b, and p3b detected in the face image fg2, and connects each pair with the line segments c1, c2, and c3.
The projective transformation unit 78 performs projective transformation on the face image fg2 to obtain the transformed image fg2a. The correlation calculation unit 79 calculates the correlation value between the face image fg1 and the transformed image fg2a. In this example, the calculated correlation value is 0.999, extremely close to 1 and therefore high. The determination unit 80 determines that the face images fg1 and fg2 are two-dimensional images, that is, flat (planar) face images displayed on the flat panel 150. In addition to the determination result, the determination unit 80 outputs a superimposed image mg1 in which the face image fg1 and the transformed image fg2a are superimposed, and displays the superimposed image mg1 on the display device 12. In the superimposed image mg1, the face image fg1 and the transformed image fg2a almost coincide, so it appears as a single face image with almost no ghosting.
In this way, a face image displayed on a flat panel shows a high correlation value. Accordingly, face authentication is denied for a person who tries to pass fraudulently by impersonating a registered user with a flat panel displaying the registered user's face image.
FIG. 6 is an explanatory diagram showing an example of the operation outline when the face of an actual person is captured and subjected to plane determination. The imaging devices 11A and 11B each capture the face of the actual person hm and obtain the captured images GZ3 and GZ4. Here, the imaging device 11A captures the face of the actual person hm from the front, while the imaging device 11B captures it from a direction tilted slightly from the front.
The face detection units 71 and 72 perform face detection on the captured images GZ3 and GZ4, respectively, and obtain the feature point detection ranges rc3 and rc4. The feature point detection units 74 and 75 detect the facial feature points p4a, p5a, p6a, p7a and p4b, p5b, p6b, p7b in the face images fg3 and fg4, which are enclosed by the feature point detection ranges rc3 and rc4, respectively.
The feature point matching unit 76 associates the feature points p4a, p5a, p6a, and p7a detected in the face image fg3 with the feature points p4b, p5b, p6b, and p7b detected in the face image fg4, and connects each pair with the line segments c4, c5, c6, and c7.
The projective transformation unit 78 performs projective transformation on the face image fg4 to obtain the transformed image fg4a.
The correlation calculation unit 79 calculates the correlation value between the face image fg3 and the transformed image fg4a. In this example, the calculated correlation value is 0.8832, lower than in the case of FIG. 5. The determination unit 80 determines that the face images fg3 and fg4 show a three-dimensional object with depth, that is, the face of the actual person hm. In addition to the determination result, the determination unit 80 outputs a superimposed image mg2 in which the face image fg3 and the transformed image fg4a are superimposed, and displays the superimposed image mg2 on the display device 12. In the superimposed image mg2, the face image fg3 and the transformed image fg4a are slightly misaligned, so noticeable ghosting remains. Here, the transformed image fg4a is displayed in magenta and the face image fg3 in green. Since magenta and green are complementary colors, image portions where the two coincide appear in monotone in the superimposed image mg2, while mismatched portions appear in magenta or green. The superimposed image mg2 therefore appears as two overlapping face images in which green image portions gn and magenta image portions rd are mixed.
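The complementary-color overlay described above can be sketched as a per-pixel composite: the reference image's luminance drives the green channel, and the warped image's luminance drives the red and blue channels (together, magenta). This is an illustrative sketch over flat lists of luminance values, not the patent's actual display routine.

```python
def overlay_complementary(ref, warped):
    """Composite two grayscale images into (R, G, B) pixels.

    Where the two luminances agree, R == G == B and the pixel is gray
    (monotone); where they differ, the pixel tints toward green or
    magenta, making any misalignment between the images visible.
    """
    return [(w, r, w) for r, w in zip(ref, warped)]
```

A well-aligned (planar) pair produces an almost entirely gray composite like mg1, while a misaligned (3-D) pair shows green and magenta fringes like mg2.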
In this way, the face image of an actual person shows a low correlation value because the surface of the face has unevenness. Therefore, when the face of an actual person is captured by the two imaging devices, the face authentication is approved.
Next, the operation of the face authentication system 100 according to Embodiment 1 will be described with reference to FIGS. 7 and 8.
FIG. 7 is a flowchart explaining, in time series, an example of the face authentication operation procedure according to Embodiment 1.
In FIG. 7, the face recognition unit 21 of the face authentication device 20 takes in the captured images GZ1 and GZ2 captured by the imaging devices 11A and 11B (S1). The face recognition unit 21 performs face detection on the target person in the captured images GZ1 and GZ2 (S2), and determines whether the face of the target person has been detected (S3). If no face is detected, the processing of the face authentication device 20 returns to step S1.
If a face is detected, on the other hand, the face matching unit 24 matches the detected face and performs identity authentication (S4). The face matching unit 24 determines whether the face feature information of the target person matches the matching data of the registered user's face (S5).
If the face feature information of the target person does not match the matching data of the registered user's face, the validity determination unit 25 makes an invalidity determination and outputs a denial message (that is, an NG message) as the face authentication result (S9). The face authentication device 20 then ends the processing shown in FIG. 7.
If the face feature information of the target person matches the matching data of the registered user's face, on the other hand, the flat panel detection unit 70 performs plane determination on the face image (S6). The determination unit 80 determines whether the face image is a plane (S7). If it is a plane, the validity determination unit 25 makes an invalidity determination in step S9 and outputs a denial as the face authentication result. If it is determined not to be a plane in step S7, the validity determination unit 25 makes a validity determination and outputs an approval message (that is, an OK message) as the face authentication result (S8). The face authentication device 20 then ends this processing.
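The control flow of steps S1 through S9 can be summarized as a single function. The callables passed in stand for the capture, face detection, face matching, and plane determination stages; their names and the retry bound are illustrative additions for the sketch, not part of the patent.

```python
def authenticate(capture, detect_face, matches_registrant, is_plane,
                 max_attempts=10):
    """Face authentication flow of FIG. 7: detect -> match -> plane check.

    Returns "OK" when a matching, non-planar face is found, "NG" when
    matching or the plane check fails; while no face is detected, capture
    is retried, up to max_attempts (a bound added for the sketch).
    """
    for _ in range(max_attempts):
        images = capture()                 # S1: take in GZ1, GZ2
        face = detect_face(images)         # S2/S3: face detection
        if face is None:
            continue                       # no face: back to S1
        if not matches_registrant(face):   # S4/S5: face matching
            return "NG"                    # S9: denial
        if is_plane(images):               # S6/S7: plane determination
            return "NG"                    # S9: denial (spoof suspected)
        return "OK"                        # S8: approval
    return "NG"
```

Note that in this flow the (cheaper) face matching is performed first, and plane determination only runs on faces that already matched a registered user.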
FIG. 8 is a flowchart explaining, in time series, an example of the operation procedure of the face image plane determination in step S6.
In FIG. 8, the face detection units 71 and 72 of the flat panel detection unit 70 each detect the face image in the corresponding captured image. The feature point detection units 74 and 75 each detect the feature points in the corresponding face image (S11).
The feature point matching unit 76 performs feature point matching between the two face images (S12). The projective transformation estimation unit 77 calculates a projective transformation matrix based on the result of the feature point matching (S13). The projective transformation unit 78 performs projective transformation on one of the two face images and acquires a transformed image (S14).
The correlation calculation unit 79 calculates the correlation value between the face image that was not projectively transformed and the transformed image (S15). The determination unit 80 determines from this correlation value whether the face image is a plane (S16). The face authentication device 20 then ends the processing shown in FIG. 8, and its processing proceeds to step S7 in FIG. 7.
As described above, in the face authentication system 100 according to Embodiment 1, the flat panel detection unit 70 (an example of the image input unit) of the face authentication device 20 inputs the captured image GZ1 (an example of the first captured image) and the captured image GZ2 (an example of the second captured image), in which the same person is captured by each of the two imaging devices 11A and 11B (an example of a plurality of cameras). The face detection units 71 and 72 detect the face images fg1 and fg2 (face regions) of at least one of the captured images GZ1 and GZ2, respectively. The determination unit 80 (an example of the plane determination unit) determines whether the face images fg1 and fg2 are planes, based on the feature points (face information) corresponding to the two face images fg1 and fg2 and on the face images fg1 and fg2 themselves. The validity determination unit 25 (an example of the validity determination unit) determines the validity of the face authentication result based on the determination result of whether the face images fg1 and fg2 are planes.
As a result, the face authentication device 20 or the face authentication system 100 can detect, with high accuracy, spoofing with an image that may be attempted at the time of authenticating a target person, and can effectively deter fraudulent acts by a malicious third party using spoofing. For example, even if a registered user's face image (a still image or a moving image) is displayed on a display unit such as a tablet terminal in an attempt at spoofing, the face image is determined to be a plane. That is, this face image is determined to be different from the face image of a human face, which is a three-dimensional object, so the face authentication is denied.
Further, the face matching unit 24 of the face authentication device 20 matches the face feature information of the detected face image (an example of face information corresponding to the face region) against the already registered matching data (an example of face matching data). The validity determination unit 25 determines the validity of the face authentication result of the target person based on this face matching result and on the determination result of whether the face image is a plane. By performing plane determination only on face images for which face matching succeeded, the face authentication device 20 or the face authentication system 100 can reduce the number of plane determinations.
Further, the face authentication device 20 includes the ID reader 13 (an example of the ID acquisition unit), which acquires the identification information of a person (an example of ID information). The validity determination unit 25 determines the validity of the authentication result based on the ID information and the determination result of whether the face image is a plane. By using the ID information in this way, the face authentication device 20 or the face authentication system 100 can increase the accuracy of face authentication.
Further, the flat panel detection unit 70 (an example of the plane determination unit) determines whether the face image is a plane according to the correlation value between one of the face images fg1 and fg2 (here, fg1) and the transformed image fg2a (an example of the projectively transformed image), which is obtained by projectively transforming the imaging direction at the time of capturing the other face image fg2 into the imaging direction at the time of capturing the one captured image. By using the correlation value, the face authentication device 20 or the face authentication system 100 can objectively determine whether the face image is a plane, leaving no doubt about the plane determination result and increasing the determination accuracy.
Further, the plurality of imaging devices 11 (an example of cameras) are arranged at different heights in the vertical direction, with their optical axes parallel to each other. This allows the imaging devices 11 to reliably capture two frames of the face image of the person 30 to be authenticated, regardless of the person's height. Making the optical axes of the imaging devices 11 parallel also prevents the vertical orientation of the face images from deviating greatly. Furthermore, since the images are captured from the same direction, the processing for calculating the correlation value between them is reduced and the calculation accuracy of the correlation value is increased.
(Another example of face authentication processing in Embodiment 1)
FIG. 9 is a flowchart explaining, in time series, an example of another face authentication operation procedure according to Embodiment 1. In the description of FIG. 9, the same step processing as in the face authentication processing shown in FIG. 7 is given the same or corresponding step numbers, its description is simplified or omitted, and only the differing contents are described.
In FIG. 9, in this other face authentication process, the face authentication device 20 performs the plane determination first after a face image is detected, and then performs face collation of the target person. When the face recognition unit 21 detects a face image in step S3, the flat panel detection unit 70 performs the plane determination on the face image (S6). The determination unit 80 determines whether or not the face image is a plane (S7). If it is a plane, the determination unit 80 determines whether or not the number of times the face image has been determined to be a plane is equal to or greater than a predetermined number (S7A). If it is less than the predetermined number, the face authentication device 20 returns to the process of step S1. On the other hand, if the number of times determined to be a plane is equal to or greater than the predetermined number, the validity determination unit 25 determines invalidity and outputs a message indicating rejection (that is, an NG message) as the authentication result of face authentication (S9). Thereafter, the face authentication device 20 ends the process shown in FIG. 9.
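The repeated-plane check in steps S7 and S7A can be sketched as a small helper. This is an illustrative Python sketch, not the determination unit 80 itself; in particular, treating the predetermined number as a count of consecutive plane verdicts, with a non-plane verdict resetting the streak, is an assumption, since the flowchart does not state whether the counter is ever reset.

```python
def should_reject(plane_history, reject_after=3):
    """Return True once the face has been judged planar `reject_after` times
    in a row (S7A). A non-plane verdict resets the streak -- an assumed
    policy, since the source does not specify one. `reject_after` stands in
    for the patent's unspecified "predetermined number"."""
    streak = 0
    for is_plane in plane_history:
        streak = streak + 1 if is_plane else 0
        if streak >= reject_after:
            return True
    return False
```

Counting repeated verdicts rather than acting on a single one trades a little latency for robustness against one-off misjudgments, which matches the rationale given later in this section.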
On the other hand, if the face image is not a plane in step S7, the face collation unit 24 collates the detected face and authenticates the person (S4). The face collation unit 24 determines whether or not the facial feature information of the target person matches the collation data of the registrant's face (S5). If it does not match the collation data, the validity determination unit 25 determines invalidity in step S9 and outputs rejection as the authentication result of face authentication. If it matches the collation data in step S5, the validity determination unit 25 determines validity and outputs a message indicating approval (that is, an OK message) as the authentication result of face authentication (S8). Thereafter, the face authentication device 20 ends the process shown in FIG. 9.
Thus, face collation can be performed only when it is determined that the face image is not a plane. Therefore, the number of face collation operations, which carry a large processing load, can be reduced.
Further, when it is determined that the face image is not a plane, the face collation unit 24 checks whether or not the facial feature information of the target person matches the collation data of the registrant's face. The validity determination unit 25 determines the validity of the authentication result of the person's face information based on this collation result. Thereby, when the face image is determined to be a plane, the face authentication device 20 or the face authentication system 100 can omit the face collation process and reduce the processing load.
Further, the validity determination unit 25 denies the validity of the collation result as to whether or not the person is genuine when the face image of the face area is determined to be a plane a predetermined number of times. Thereby, when the face of the person is determined to be a plane not just once but multiple times, the face authentication device 20 or the face authentication system 100 can skip the execution of the face collation process, which carries a large processing load, and output rejection as the authentication result of that person's face authentication, while excluding the possibility that a single plane determination was erroneous. Therefore, the processing load of the face authentication device 20 or the face authentication system 100 can be reduced.
(Another Example of the Arrangement of Imaging Devices in Embodiment 1)
FIG. 10 is a diagram illustrating an arrangement example of three other imaging devices 11i, 11j, and 11k according to the first embodiment. The three imaging devices 11i, 11j, and 11k are arranged in a direction perpendicular to the surface of the passage through which the authentication target person 30 passes (that is, in a direction vertical to the horizontal plane). The three imaging devices 11i, 11j, and 11k image the face existence range SA of the authentication target person 30 so that their angles of view overlap across the passage width wd. In this case, the face of a short person such as a child is imaged by the imaging device 11k installed at the low position and the imaging device 11i installed at the high position. The face of a person of average height is imaged by all three imaging devices 11i, 11j, and 11k. Further, the face of a tall person such as an adult is imaged by the imaging device 11i installed at the high position and the imaging device 11j installed at the intermediate position. The optical axes of the three imaging devices 11i, 11j, and 11k are set arbitrarily in accordance with the face existence range SA and are not parallel. Therefore, when persons of different heights attempt to pass through the passage, their face images are captured by at least two imaging devices.
In this way, the plurality of imaging devices are arranged at different heights in the vertical direction, and their optical axes are non-parallel to each other. Thereby, the face image of the target person can be reliably captured by two or more imaging devices regardless of the person's height. In addition, by making the optical axes of the plurality of cameras non-parallel, face images of target persons of different heights can be captured easily. Therefore, the number of imaging devices installed to capture the two face images can be reduced.
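The coverage effect of tilted, non-parallel optical axes in FIG. 10 can be checked with simple geometry: the vertical span a camera sees at the passage shifts with its mounting height and tilt, and overlapping spans guarantee that every face height is seen by at least two cameras. The following Python sketch is an illustration only; the camera heights, tilt angles, vertical field of view, and subject distance are assumed values, not ones taken from the embodiment.

```python
import math

def vertical_coverage(cam_height_m, tilt_deg, vfov_deg, distance_m):
    """Vertical interval [lo, hi] (metres above the floor) visible at
    `distance_m` for a camera mounted at `cam_height_m` whose optical axis
    is tilted `tilt_deg` above horizontal (negative = tilted down)."""
    half = math.radians(vfov_deg) / 2.0
    tilt = math.radians(tilt_deg)
    lo = cam_height_m + distance_m * math.tan(tilt - half)
    hi = cam_height_m + distance_m * math.tan(tilt + half)
    return lo, hi

def cameras_seeing(face_height_m, coverages):
    """Count how many camera coverage intervals contain the face height."""
    return sum(lo <= face_height_m <= hi for lo, hi in coverages)

# Assumed arrangement: low camera tilted up, middle camera level,
# high camera tilted down, all with a 40-degree vertical field of view,
# evaluated 1.5 m in front of the cameras.
coverages = [
    vertical_coverage(1.0, 15, 40, 1.5),
    vertical_coverage(1.5, 0, 40, 1.5),
    vertical_coverage(2.0, -15, 40, 1.5),
]
```

With these assumed numbers, both a 1.0 m and a 1.6 m face height fall inside at least two intervals, illustrating why converging axes let fewer cameras cover the whole face existence range SA.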
(Embodiment 2)
In the first embodiment, when the face image is a plane, the result of face authentication is determined to be a rejection on the ground that spoofing using a display unit is taking place. The second embodiment shows a case in which, in addition to the configuration of the first embodiment, spoofing using a display unit is also determined when a frame image is detected in the captured image.
In the face authentication system according to the second embodiment, the same components as those in the first embodiment are denoted by the same reference numerals, and their description is simplified or omitted.
FIG. 11 is a diagram illustrating a functional configuration example realized by the cooperation of a processor and a memory built into the face authentication device 20A according to the second embodiment. The face authentication device 20A has a frame detection unit 22 and an arrangement determination unit 23 in addition to the face recognition unit 21, the face collation unit 24, the validity determination unit 25, and the flat panel detection unit 70 described in the first embodiment. The face recognition unit 21, the frame detection unit 22, the arrangement determination unit 23, the face collation unit 24, the validity determination unit 25, and the flat panel detection unit 70 are realized by the processor executing a predetermined program stored in the memory, through the cooperation of the processor and the memory built into the face authentication device 20A, which is, for example, a PC (Personal Computer).
The frame detection unit 22 uses the captured image and a background image to detect a linear frame in an input image obtained by removing the background image from the captured image. When the frame of a display unit appears in the captured image, the frame detection unit 22 acquires position information of the frame (in other words, frame arrangement information).
The arrangement determination unit 23 performs an arrangement determination of whether or not the face is located within the frame of the display unit, based on the face arrangement information acquired by the face recognition unit 21 and the frame arrangement information. As a result of the arrangement determination, the arrangement determination unit 23 detects that a display unit (see above) is present in the captured image and that a face image is displayed within the frame of the display unit.
Based on the results of the face collation by the face collation unit 24, the plane determination by the flat panel detection unit 70, and the frame detection by the arrangement determination unit 23, the validity determination unit 25 determines whether or not the face of the target person shown in the captured image is valid, that is, whether the result of face authentication is approval or rejection. The validity determination unit 25 outputs the authentication result to the control target device 40.
FIG. 12 is an explanatory diagram showing an example of the outline of the operation of the frame detection process performed by the frame detection unit 22. Here, a situation is assumed in which spoofing is performed by playing back an image (a still image or a moving image) showing the face of an authorized registrant on a portable display unit. The imaging devices 11A and 11B each image the display unit.
A captured image captured by one of the imaging devices 11A and 11B is used as an input image 51 input to the frame detection unit 22. The input image 51 includes the housing 53 of the display unit and a person 54 displayed on the screen of the display unit. The frame detection unit 22 performs a known contour extraction (for example, edge detection) process on the input image 51 to generate an edge image 52. The edge image 52 includes the contour 55 of the housing of the display unit and the contour 56 of the person displayed on the screen of the display unit. The frame detection unit 22 performs a straight-line detection process on the edge image 52 and extracts straight-line portions (line segments) 57 in the image. For the straight-line detection process, for example, straight-line detection using the Hough transform is used. The frame detection unit 22 determines whether the extracted straight-line portions 57 include two parallel straight lines and straight lines perpendicular to them, extracts a frame 58 surrounded by four sides, and acquires the frame coordinates. Note that the frame detection unit 22 may also determine, based on predetermined conditions, that a set of rectangular line segments whose four sides are not closed, or a set of orthogonal line segments forming three or two sides, constitutes a frame surrounded by line segments, and extract it as the frame 58.
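A heavily simplified version of the frame extraction on the edge image 52 can be sketched as follows. Instead of the Hough transform named above, this illustrative Python sketch merely scans a binary edge map for long horizontal and vertical edge runs and reports the outermost ones as a candidate four-sided frame; the function name and the 60% run-length threshold are assumptions, and a real implementation of the kind described would use proper line detection.

```python
import numpy as np

def detect_frame(edges, min_frac=0.6):
    """Toy stand-in for the frame extraction: find rows and columns whose
    edge-pixel count covers at least `min_frac` of the image extent, and
    treat the outermost pair of each as the four sides of a display bezel.

    Returns (top_row, bottom_row, left_col, right_col), or None if no
    plausible four-sided frame is found.
    """
    h, w = edges.shape
    rows = [r for r in range(h) if edges[r].sum() >= min_frac * w]
    cols = [c for c in range(w) if edges[:, c].sum() >= min_frac * h]
    if len(rows) >= 2 and len(cols) >= 2:
        return rows[0], rows[-1], cols[0], cols[-1]
    return None
```

A synthetic edge map containing only a rectangular border yields its bounding rows and columns, while an empty map yields no frame, mirroring the "frame present / frame absent" branch of the detection step.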
Further, in the frame detection process, the frame detection unit 22 may perform the straight-line detection and frame extraction after removing the background from the captured image. Removing the background portion of the captured image improves the accuracy of the frame detection.
FIG. 13 is an explanatory diagram showing an example of the outline of the arrangement determination process performed by the arrangement determination unit 23. The arrangement determination unit 23 acquires the face arrangement information of the target person obtained as one of the face detection results of the face recognition unit 21. The face arrangement information includes a face frame 68 indicating the contour position of the face in the captured image. The arrangement determination unit 23 also acquires the frame arrangement information of the display unit obtained as one of the frame detection results of the frame detection unit 22. The frame arrangement information includes the frame 58 indicating the contour position of the display unit detected in the captured image.
As the arrangement determination process, the arrangement determination unit 23 compares the positions of the face frame 68 and the frame 58 and determines whether or not the face frame 68 exists in the area inside the frame 58, that is, whether or not the face of the target person is located within the frame of the display unit. As shown in FIG. 13, when the face frame 68 is inside the frame 58, the arrangement determination unit 23 outputs an arrangement determination result indicating that a face image is present within the frame.
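The containment test between the face frame 68 and the display frame 58 can be sketched directly. This is an illustrative Python sketch under the assumption that both frames are axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates; the function names are hypothetical and not taken from the embodiment.

```python
def box_inside(inner, outer):
    """True if box `inner` lies wholly inside box `outer`.
    Boxes are (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2."""
    ix1, iy1, ix2, iy2 = inner
    ox1, oy1, ox2, oy2 = outer
    return ox1 <= ix1 and oy1 <= iy1 and ix2 <= ox2 and iy2 <= oy2

def is_spoof_by_frame(face_box, frame_box):
    """Flag spoofing when a display frame was detected and the detected
    face sits entirely inside it (face frame 68 inside frame 58)."""
    return frame_box is not None and box_inside(face_box, frame_box)
```

A face overlapping the bezel edge is not flagged by this sketch; only full containment triggers the "face image within the frame" result, matching the arrangement determination described above.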
Based on the arrangement determination result of the arrangement determination unit 23, when a face image is present within the frame, the validity determination unit 25 regards the face image detected in the captured image as a face image displayed on a display unit and determines that the face collation result is invalid. The validity determination unit 25 outputs rejection to the control target device 40 as the result of face authentication.
Next, the operation of the face authentication system 100 according to the second embodiment will be described with reference to FIG. 14.
FIG. 14 is a flowchart illustrating, in time series, an example of a face authentication operation procedure according to the second embodiment. The same step processes as those in the face authentication process shown in FIG. 7 are denoted by the same step numbers, and their description is simplified or omitted.
If it is determined in step S7 that the face image is not a plane, the frame detection unit 22 detects a frame in the captured image (S7B). The arrangement determination unit 23 performs an arrangement determination between the face position obtained by face detection and the frame position obtained by frame detection (S7C). The arrangement determination unit 23 determines whether a frame has been detected in the captured image and a face exists inside the frame, that is, whether there is a frame surrounding the outside of the target person's face (S7D). The arrangement determination unit 23 passes the frame presence determination result to the validity determination unit 25. If, as a result of the frame presence determination, there is a frame surrounding the outside of the target person's face, the validity determination unit 25 determines that the face authentication is invalid and outputs rejection to the control target device 40 as the face authentication result. If there is no frame surrounding the outside of the target person's face, the validity determination unit 25 determines that the face authentication is valid and outputs approval to the control target device 40 as the face authentication result.
Here, the frame detection process of steps S7B to S7D is performed when it is determined in step S7 that the face image is not a plane; however, a frame detection process similar to steps S7B to S7D may instead be performed when the facial feature information of the target person matches the collation data of the registrant's face in step S5.
In the face authentication process, the face detection process and the frame detection process may be executed in parallel. Alternatively, one process may be executed before the other in series; the execution timing and order of the processes are not limited.
As described above, in the face authentication system 100 according to the second embodiment, the face authentication of the target person can be performed with high accuracy by using the result of the frame detection by the arrangement determination unit 23 for the face authentication determination, in addition to the face collation and plane determination processes.
Further, even when the face image of the target person is determined not to be a plane, face authentication can be rejected if the captured image including the face image contains a frame image. For example, when the target person is imaged by a plurality of imaging devices while the orientation of a tablet terminal is flipped back and forth in small increments, or while the face image displayed on the tablet terminal is continuously switched between 3D display and 2D display, the face image may be erroneously determined not to be a plane. Even in such a case, if a frame image is present in the captured image, it can be determined that the face image is one shown on the tablet terminal, and the face image of the target person can be rejected.
On the other hand, even when there is an act such as hiding the frame of the display unit on which the face image is displayed, such an act can be dealt with by the plane determination. In addition, by combining the plane determination, it is possible to avoid a false detection in which the face authentication result becomes NG merely because a frame is detected, even for a person who is not a target of face authentication.
Although various embodiments have been described above with reference to the drawings, it goes without saying that the present disclosure is not limited to these examples. It will be apparent to those skilled in the art that various changes, modifications, substitutions, additions, deletions, and equivalents can be conceived within the scope described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure. In addition, the constituent elements of the various embodiments described above may be arbitrarily combined without departing from the spirit of the invention.
The present disclosure is useful as a face authentication device, a face authentication method, and a face authentication system that detect, with high accuracy, spoofing using an image that may be attempted during authentication of a target person, and that effectively deter fraudulent acts by malicious third parties through spoofing.
11A, 11B Imaging device
12 Display device
13 ID reader
20, 20A Face authentication device
21 Face recognition unit
22 Frame detection unit
23 Arrangement determination unit
24 Face collation unit
25 Validity determination unit
40 Control target device
70 Flat panel detection unit
71, 72 Face detection unit
73 Face correspondence determination unit
74, 75 Feature point detection unit
76 Feature point matching unit
77 Projective transformation estimation unit
78 Projective transformation unit
79 Correlation calculation unit
80 Determination unit
100 Face authentication system
Claims (10)
- A face authentication device comprising:
an image input unit that inputs a first captured image and a second captured image in which the same person is captured by each of a plurality of cameras;
a face detection unit that detects a face area in at least one of the first captured image and the second captured image;
a plane determination unit that determines whether or not a face image of the face area is a plane, based on face information corresponding to the face area, the first captured image, and the second captured image; and
a validity determination unit that determines validity of an authentication result of face information of the person, based on a result of the determination of whether or not the face image of the face area is a plane.
- The face authentication device according to claim 1, further comprising a collation unit that collates the face information corresponding to the detected face area against registered face collation data,
wherein the validity determination unit determines the validity of the authentication result of the face information of the person based on a collation result of the face information and the result of the determination of whether or not the face image of the face area is a plane.
- The face authentication device according to claim 1, further comprising a collation unit that collates the face information corresponding to the detected face area against registered face collation data,
wherein the collation unit collates the face information when it is determined that the face image of the face area is not a plane, and
the validity determination unit further determines the validity of the authentication result of the face information of the person based on the collation result of the face information.
- The face authentication device according to claim 1, further comprising an ID acquisition unit that acquires ID information of the person,
wherein the validity determination unit determines the validity of the authentication result based on the ID information and the result of the determination of whether or not the face image of the face area is a plane.
- The face authentication device according to claim 1 or 2, wherein the plane determination unit determines whether or not the face image of the face area in one of the first captured image and the second captured image is a plane, according to a correlation between the one captured image and a projective transformation image obtained by projectively transforming the other captured image from its imaging direction into the imaging direction of the one captured image.
- The face authentication device according to claim 1, wherein the validity determination unit denies validity of a collation result as to whether or not the person is genuine when it is determined a predetermined number of times that the face image of the face area is a plane.
- A face authentication method in a face authentication device, comprising the steps of:
inputting a first captured image and a second captured image in which the same person is captured by each of a plurality of cameras;
detecting a face area in at least one of the first captured image and the second captured image;
determining whether or not a face image of the face area is a plane, based on face information corresponding to the face area, the first captured image, and the second captured image; and
determining validity of an authentication result of face information of the person, based on a result of the determination of whether or not the face image of the face area is a plane.
- A face authentication system in which a plurality of cameras and a face authentication device are communicably connected to each other,
wherein the plurality of cameras output, to the face authentication device, a first captured image and a second captured image in each of which the same person is captured, and
the face authentication device inputs the first captured image and the second captured image, detects a face area in at least one of the first captured image and the second captured image, determines whether or not a face image of the face area is a plane based on face information corresponding to the face area, the first captured image, and the second captured image, and determines validity of an authentication result of face information of the person based on a result of the determination of whether or not the face image of the face area is a plane.
- The face authentication system according to claim 8, wherein the plurality of cameras are arranged at positions of different heights in the vertical direction, and the optical axes of the plurality of cameras are parallel to each other.
- The face authentication system according to claim 8, wherein the plurality of cameras are arranged at positions of different heights in the vertical direction, and the optical axes of the plurality of cameras are non-parallel to each other.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-091596 | 2018-05-10 | ||
JP2018091596A JP2019197426A (en) | 2018-05-10 | 2018-05-10 | Face authentication device, face authentication method, and face authentication system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019216091A1 true WO2019216091A1 (en) | 2019-11-14 |
Family
ID=68466943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/015539 WO2019216091A1 (en) | 2018-05-10 | 2019-04-10 | Face authentication device, face authentication method, and face authentication system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2019197426A (en) |
WO (1) | WO2019216091A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023248806A1 (en) * | 2022-06-20 | 2023-12-28 | Sony Semiconductor Solutions Corporation | Information processing device, information processing method, and recording medium |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4124029A4 (en) * | 2020-03-17 | 2023-04-05 | NEC Corporation | Gate system, gate device, image processing method therefor, program, and arrangement method for gate device |
KR102534582B1 (en) | 2020-03-20 | 2023-05-22 | 한국전자통신연구원 | Method and apparatus of active identity verification based on gaze path analysis |
US11881056B2 (en) | 2020-03-27 | 2024-01-23 | Nec Corporation | Image processing device, image processing method, and storage medium |
JP7428675B2 (en) | 2021-02-24 | 2024-02-06 | Hitachi, Ltd. | Biometric authentication system, authentication terminal, and authentication method |
JP7552913B2 (en) | 2021-07-27 | 2024-09-18 | Fujitsu Limited | Determination method, determination program, and information processing device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001056859A (en) * | 1999-08-19 | 2001-02-27 | Toshiba Corp | Face image recognition device and passage controller |
JP2003178306A (en) * | 2001-12-12 | 2003-06-27 | Toshiba Corp | Personal identification device and personal identification method |
WO2010050206A1 (en) * | 2008-10-28 | 2010-05-06 | NEC Corporation | Spoofing detection system, spoofing detection method and spoofing detection program |
JP2015082195A (en) * | 2013-10-22 | 2015-04-27 | 株式会社東芝 | Personal authentication system |
JP2017021787A (en) * | 2015-07-14 | 2017-01-26 | Panasonic IP Management Co., Ltd. | Identification medium recognition apparatus and identification medium recognition method |
- 2018-05-10: JP JP2018091596A (published as JP2019197426A), active, status: Pending
- 2019-04-10: WO PCT/JP2019/015539 (published as WO2019216091A1), active, status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2019197426A (en) | 2019-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019216091A1 (en) | Face authentication device, face authentication method, and face authentication system | |
KR102655949B1 (en) | Face verifying method and apparatus based on 3d image | |
KR100885366B1 (en) | Authentication apparatus, authentication method, authentication program, and computer readable recording media | |
WO2018192448A1 (en) | People-credentials comparison authentication method, system and camera | |
CN109002786B (en) | Face detection method, face detection equipment and computer-readable storage medium | |
JP6148064B2 (en) | Face recognition system | |
KR101810190B1 (en) | User authentication method and apparatus using face identification | |
KR101724971B1 (en) | System for recognizing face using wide angle camera and method for recognizing face thereof | |
US11995916B2 (en) | Electronic device capable of identifying ineligible object | |
CN111144277B (en) | Face verification method and system with living body detection function | |
KR101821144B1 (en) | Access Control System using Depth Information based Face Recognition | |
JP2002507035A (en) | How to authenticate the validity of an image recorded for personal identification | |
WO2016172923A1 (en) | Video detection method, video detection system, and computer program product | |
WO2017113286A1 (en) | Authentication method and apparatus | |
JP2013069155A (en) | Face authentication database construction method, face authentication device, and face authentication program | |
KR20150069799A (en) | Method for certifying face and apparatus thereof | |
JP7318833B2 (en) | Image processing device, image processing method, and program | |
KR20130133676A (en) | Method and apparatus for user authentication using face recognition througth camera | |
WO2018179723A1 (en) | Facial authentication processing apparatus, facial authentication processing method, and facial authentication processing system | |
JP2014086042A (en) | Face authentication device | |
CN113705428B (en) | Living body detection method and device, electronic equipment and computer readable storage medium | |
JP2011215896A (en) | Facial image collation device | |
JP6679291B2 (en) | Applicant authentication device, authentication method, and security authentication system using the method | |
TW201721509A (en) | Composite recognition system of human face and biological posture eliminates error judgement generated by pure facial recognition to achieve better accuracy, safety and economy convenience | |
KR101718244B1 (en) | Apparatus and method of processing wide angle image for recognizing face |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19799623; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19799623; Country of ref document: EP; Kind code of ref document: A1