WO2021060256A1 - Facial authentication device, facial authentication method, and computer-readable recording medium - Google Patents

Facial authentication device, facial authentication method, and computer-readable recording medium

Info

Publication number
WO2021060256A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature amount
collation
master
face
image
Prior art date
Application number
PCT/JP2020/035737
Other languages
French (fr)
Japanese (ja)
Inventor
直紀 徳永
享 半田
雄吾 西山
Original Assignee
NEC Solution Innovators, Ltd. (Necソリューションイノベータ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Solution Innovators, Ltd. (Necソリューションイノベータ株式会社)
Priority to JP2021548920A priority Critical patent/JP7248348B2/en
Publication of WO2021060256A1 publication Critical patent/WO2021060256A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit

Definitions

  • The present invention relates to a face recognition device and a face recognition method for face recognition, and further relates to a computer-readable recording medium on which a program for realizing these is recorded.
  • Walk-through face recognition is known as an authentication method for verifying identity in entrance/exit management.
  • In walk-through face recognition, identity is confirmed using the face image of a user moving toward the entrance/exit gate and a face image captured in advance.
  • Patent Document 1 discloses a face recognition system that authenticates a user passing through an authentication area and determines whether or not the user may pass. According to the face recognition system, the user's face image extracted from an input image is collated with a registered face image. Then, when the authentication succeeds and the size of the region representing the authenticated user in the input image is larger than a predetermined size, the user is allowed to pass.
  • In such a system, face images of the person moving toward the entrance/exit gate are captured, the captured face images are collated with the registered face image, and passage through the entrance/exit gate is permitted if collation succeeds for any one of the plurality of face images. Spoofing is therefore possible.
  • That is, if collation succeeds on a face image of a different person captured near the gate, passage through the entrance/exit gate is permitted, so that other person can pass through the entrance/exit gate by impersonating the registered user.
  • An example of an object of the present invention is to provide a face recognition device, a face recognition method, and a computer-readable recording medium for preventing spoofing.
  • To achieve the above object, the face recognition device in one aspect of the present invention includes a detection unit that detects a face image corresponding to a face from an image of a user captured in the shooting area, an extraction unit that extracts a feature amount using the detected face image, and a first collation unit that, when identification information identifying the user is acquired, performs a first collation using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount associated with the identification information registered in advance in the master storage unit.
  • Further, to achieve the above object, the face authentication method in one aspect of the present invention includes a detection step of detecting a face image corresponding to a face from an image of a user captured in the shooting area, an extraction step of extracting a feature amount using the detected face image, and a first collation step of, when identification information identifying the user is acquired, performing a first collation using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount associated with the identification information registered in advance in the master storage unit.
  • Furthermore, to achieve the above object, a computer-readable recording medium in one aspect of the present invention records a program including instructions that cause a computer to execute the detection step, the extraction step, and the first collation step described above.
  • FIG. 1 is a diagram for explaining an example of a face recognition device.
  • FIG. 2 is a diagram for explaining an example of a system having a face recognition device.
  • FIG. 3 is a diagram for explaining walk-through face recognition.
  • FIG. 4 is a diagram for explaining collation (first collation).
  • FIG. 5 is a diagram for explaining collation (second collation).
  • FIG. 6 is a diagram for explaining the determination of the provisional master feature amount.
  • FIG. 7 is a diagram for explaining collation (third collation).
  • FIG. 8 is a diagram for explaining an example of the operation of the face recognition device.
  • FIG. 9 is a block diagram showing an example of a computer that realizes a face recognition device.
  • The face recognition device shown in FIG. 1 is a device that prevents spoofing. As shown in FIG. 1, the face recognition device 1 has a detection unit 2, an extraction unit 3, and a collation unit 4 (first collation unit).
  • The detection unit 2 detects a face image corresponding to a face from an image of a user captured in the shooting area.
  • The extraction unit 3 extracts a feature amount using the detected face image.
  • When the collation unit 4 acquires identification information identifying the user, it performs collation (first collation) using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount associated with the identification information registered in advance in the master storage unit.
  • The shooting area is, for example, an area for capturing a user moving toward the entrance/exit gate in walk-through face recognition using one or more imaging devices.
  • The shooting area may be adjusted using, for example, the distance between the eyes in the face image.
  • The identification information is, for example, information identifying the user that is read from an ID card possessed by the user by using an ID reader.
  • The query feature amount is, for example, a feature amount extracted from one face image captured immediately before the face recognition device 1 acquires the identification information, or from one face image captured a preset time before the time at which the identification information is acquired.
  • The preset time is preferably within one second, for example. The query feature amount may also be extracted from one face image captured immediately after the identification information is acquired.
  • The master storage unit is a storage device that stores information in which the master feature amount extracted from a user's face image registered in advance is associated with the user's identification information. Such information is registered in the master storage unit in advance, for example, when the user purchases a ticket.
  • In the collation, a collation score is calculated using the query feature amount and the master feature amount, and the collation score is compared with a threshold value stored in advance.
  • The threshold value is determined by, for example, an experiment or a simulation.
  • The collation score is, for example, a similarity between the compared feature amounts measured by a machine-learning classifier.
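The disclosure does not fix a particular similarity measure or threshold. As a minimal sketch, assuming cosine similarity mapped to [0, 1] (the function names, the mapping, and the 0.50 default are illustrative, not from the patent):

```python
import math

def collation_score(query_fv, master_fv):
    """Cosine similarity between two feature vectors, mapped to [0, 1].
    Illustrative only: the patent leaves the similarity measure open."""
    dot = sum(q * m for q, m in zip(query_fv, master_fv))
    nq = math.sqrt(sum(q * q for q in query_fv))
    nm = math.sqrt(sum(m * m for m in master_fv))
    if nq == 0.0 or nm == 0.0:
        return 0.0
    return (dot / (nq * nm) + 1.0) / 2.0  # map [-1, 1] onto [0, 1]

def collate(query_fv, master_fv, threshold=0.50):
    """Collation succeeds when the score is at or above the threshold."""
    return collation_score(query_fv, master_fv) >= threshold
```

Any other learned similarity (e.g. a classifier score) could be substituted for `collation_score` without changing the threshold comparison.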
  • In this way, since the collation uses the query feature amount acquired before the time at which the identification information is acquired, a face image of another person captured while moving toward the gate after that time is not collated. Spoofing can therefore be prevented.
  • FIG. 2 is a diagram for explaining an example of a system having a face recognition device.
  • FIG. 3 is a diagram for explaining walk-through face recognition.
  • The system 20 in the present embodiment includes, in addition to the face recognition device 1, one or more imaging devices 21 (21a, 21b), an identification device 22, a storage device 23, and a passage permission device 24.
  • The face recognition device 1 has a collation unit 5 (second collation unit), a determination unit 6, and a collation unit 7 (third collation unit) in addition to the detection unit 2, the extraction unit 3, and the collation unit 4.
  • The face recognition device 1 may be realized using an information processing device such as a server computer or a personal computer. The face recognition device 1 may also be provided inside the identification device 22 or the passage permission device 24.
  • The image pickup device 21 transmits the captured image to the face recognition device 1. Specifically, the image pickup device 21 images a subject in a preset shooting area. In the example of FIG. 3, the person 30 is imaged at preset intervals in the shooting areas (areas A1 and A2).
  • Each of the image pickup devices 21a and 21b captures the person 30 and transmits the captured image to the face recognition device 1.
  • The image pickup device 21 is, for example, a camera.
  • In the example of FIG. 3, the person 30 causes the identification device 22 to read the identification information attached to the ID card 31 by holding the ID card 31 up to the identification device 22.
  • In this example the ID card 31 is used, but identification information displayed on a smartphone or the like may be read instead.
  • The area A1 shown in FIG. 3 is an area for acquiring an image of the person 30 immediately before or immediately after the person 30 causes the identification device 22 to read the identification information, or an area for acquiring an image of the person 30 a preset time before the time at which the identification device 22 reads the identification information.
  • The area A2 shown in FIG. 3 is an area for collecting images of the person 30.
  • The identification device 22 is, for example, an ID reader that reads identification information identifying a user from the ID card 31 or the like possessed by the person 30.
  • The ID card 31 may be, for example, a ticket or a terminal device such as a smartphone.
  • The identification device 22 reads the identification information from a machine-readable display (for example, a two-dimensional code) shown on the ticket, smartphone, or the like. The identification information may also be read from an IC chip provided on the ID card 31.
  • The storage device 23 stores information in which the master feature amount extracted from a user's face image registered in advance is associated with that user's identification information.
  • The storage device 23 is, for example, a database.
  • The storage device 23 may be provided inside the face recognition device 1 or outside it.
  • The passage permission device 24 is a device that permits the person 30 to pass. Specifically, the passage permission device 24 determines whether or not to allow the person 30 to pass based on the content of the passage information received from the face recognition device 1. When the passage permission device 24 is a gate device, it opens a door or the like provided in the gate device when the person 30 is allowed to pass.
  • The passage permission device 24 may notify the person 30 that passage is permitted by using voice, an image, or the like. The passage permission device 24 may also be provided inside the face recognition device 1.
  • The detection unit 2 detects a region including a face image from the captured image. Specifically, the detection unit 2 first acquires a plurality of images of the person 30 captured by the imaging device 21 in the shooting area. Subsequently, the detection unit 2 detects a face image having a face region corresponding to the face from each of the plurality of captured images.
  • In face detection by pattern recognition, rectangles are cut out in order from the edge of the captured image, and it is determined whether or not each rectangle contains a face. Pattern-recognition techniques are used for this face/non-face determination; examples include support vector machines, neural networks, and generalized learning vector quantization.
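As a rough illustration of the sliding-window scheme just described, the sketch below scans fixed-size rectangles across an image and keeps those a face/non-face classifier accepts. The window size, stride, and the `classifier` callback are assumptions; any of the methods named above (SVM, neural network, generalized learning vector quantization) could stand behind the callback:

```python
def detect_faces(image, classifier, window=24, stride=8):
    """Slide a fixed-size window over a 2-D image (list of pixel rows)
    and return (top, left, height, width) for every window the
    face/non-face classifier accepts.  Parameters are illustrative."""
    h, w = len(image), len(image[0])
    faces = []
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            patch = [row[left:left + window] for row in image[top:top + window]]
            if classifier(patch):  # face / non-face determination
                faces.append((top, left, window, window))
    return faces
```

A production detector would additionally scan at multiple scales and merge overlapping hits; this sketch shows only the core scan-and-classify loop.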
  • The extraction unit 3 extracts facial feature amounts using the detected face images. Specifically, the extraction unit 3 first acquires a plurality of face images from the detection unit 2. Subsequently, the extraction unit 3 extracts a facial feature amount for each face image.
  • For example, feature-point information such as the eyes, nose, and mouth corners is extracted from the detected face image.
  • Common methods include histograms of oriented gradients, support vector machines, neural networks, and optimization or regression using face shape models.
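One hedged illustration of turning such landmark points into a feature amount: pairwise distances between landmarks, normalized by the inter-eye distance so the vector is invariant to image scale. The landmark names and the normalization are placeholders; the patent does not prescribe a specific descriptor:

```python
import math

def extract_features(landmarks):
    """Build a feature vector from landmark points (eyes, nose, mouth
    corners): all pairwise distances divided by the inter-eye distance.
    A stand-in for the descriptors the text mentions (HOG, CNN, etc.)."""
    eye_dist = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    keys = sorted(landmarks)  # fixed order so vectors are comparable
    features = []
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            features.append(math.dist(landmarks[a], landmarks[b]) / eye_dist)
    return features
```

Because every distance is divided by the inter-eye distance, scaling the whole face up or down leaves the vector unchanged, which is the property the collation steps below rely on.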
  • When the collation unit 4 acquires identification information identifying the user, it performs collation (first collation) using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount associated with the identification information registered in advance in the storage device 23.
  • Specifically, the collation unit 4 first acquires, as the query feature amount, one feature amount corresponding to a face image captured in the area A1. For example, the feature amount corresponding to the face image captured in area A1 immediately before the identification information is acquired, at a preset time before the time at which the identification information is acquired, or immediately after the identification information is acquired is used as the query feature amount.
  • Subsequently, the collation unit 4 acquires the master feature amount from the storage device 23 based on the identification information. The collation unit 4 then generates a collation score (first collation score) using the acquired query feature amount and the master feature amount, compares the collation score with the threshold value (first collation threshold value), makes a collation determination (first collation determination), and acquires the collation result (the result of face collation in face authentication). For example, if the collation score is equal to or higher than the threshold value, the collation succeeds.
  • When the collation succeeds, the collation unit 4 transmits passage information indicating that passage is permitted to the passage permission device 24.
  • When the collation fails, the collation unit 4 transmits passage information indicating that passage is not permitted to the collation unit 5.
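The first collation might be sketched as follows. Here `master_store`, `score_fn`, and the 0.50 default threshold are assumptions, since the patent leaves the score function and threshold to experiment or simulation:

```python
def first_collation(query_fv, id_info, master_store, score_fn, threshold=0.50):
    """First collation: look up the master feature amount registered for
    the acquired identification information and compare it with the
    query feature amount acquired before the ID was read."""
    master_fv = master_store.get(id_info)
    if master_fv is None:
        return False  # no master registered for this ID: collation fails
    return score_fn(query_fv, master_fv) >= threshold
```

On `True` the caller would send "passage permitted" to the passage permission device 24; on `False` it would hand off to the second collation.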
  • FIG. 4 is a diagram for explaining collation (first collation).
  • In the example of FIG. 4, when the collation score is equal to or higher than the threshold value (first collation threshold value), the collation succeeds and passage information indicating that passage is permitted is transmitted to the passage permission device 24.
  • When the collation score is 0.40 and the threshold value is 0.50, the collation fails, and passage information indicating that passage is not permitted is transmitted to the collation unit 5.
  • When the first collation fails, the collation unit 5 performs collation (second collation) using the master feature amount and the provisional master candidate feature amounts corresponding to one or more images captured before the time at which the image corresponding to the query feature amount was captured.
  • Specifically, the collation unit 5 first acquires passage information indicating that the collation has failed from the collation unit 4. Subsequently, the collation unit 5 acquires the provisional master candidate feature amounts, which are feature amounts extracted by the extraction unit 3 using the images of the person 30 captured in the area A2.
  • Subsequently, the collation unit 5 generates a collation score (second collation score) for each provisional master candidate feature amount using the acquired provisional master candidate feature amounts and the master feature amount. The collation unit 5 then compares the collation score calculated for each provisional master candidate feature amount with the threshold value (second collation threshold value), makes a collation determination (second collation determination), and acquires the collation result.
  • The second collation threshold is determined by, for example, an experiment, a simulation, or the like.
  • FIG. 5 is a diagram for explaining collation (second collation).
  • In the example of FIG. 5, the collation scores (second collation scores) between the provisional master candidate feature amounts and the master feature amount are 0.75, 0.40, 0.30, and 0.50, respectively. Each score is compared with the threshold value (second collation threshold value) of 0.50, and the collation scores equal to or higher than the threshold are detected.
  • The determination unit 6 determines the provisional master feature amount from the provisional master candidate feature amounts based on the collation (second collation) result of the collation unit 5. Specifically, the determination unit 6 first acquires the collation result from the collation unit 5. Subsequently, the determination unit 6 selects the provisional master candidate feature amounts having a collation score equal to or higher than the threshold value, determines them as the provisional master feature amounts, and stores them.
  • FIG. 6 is a diagram for explaining the determination of the provisional master feature amount.
  • In the example of FIG. 6, the feature amounts FV1 and FV4, whose collation scores (second collation scores) are equal to or higher than the threshold value of 0.50, are selected from the provisional master candidate feature amounts and determined as the provisional master feature amounts.
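The second collation and the determination of provisional master feature amounts (FIGS. 5 and 6) can be sketched together: score each candidate against the master feature amount and keep those at or above the second collation threshold. The feature-amount representation and score function below are placeholders; the test mirrors the 0.75/0.40/0.30/0.50 scores of FIG. 5:

```python
def determine_provisional_masters(candidates, master_fv, score_fn, threshold=0.50):
    """Second collation + determination: keep every provisional master
    candidate feature amount whose score against the master feature
    amount is at or above the second collation threshold."""
    return [fv for fv in candidates if score_fn(fv, master_fv) >= threshold]
```

With FIG. 5's scores and a 0.50 threshold this keeps FV1 and FV4, matching FIG. 6.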
  • When the provisional master feature amount is determined, the collation unit 7 performs collation (third collation) using the query feature amount and the provisional master feature amount. Specifically, the collation unit 7 first generates a collation score (third collation score) using the query feature amount and the provisional master feature amount. Subsequently, the collation unit 7 compares the collation score with the threshold value (third collation threshold value), makes a collation determination (third collation determination), and acquires the collation result (the result of face collation in face recognition).
  • The third collation threshold is determined by, for example, an experiment, a simulation, or the like.
  • After that, if the collation succeeds, the collation unit 7 transmits passage information indicating that passage is permitted to the passage permission device 24. On the other hand, when the collation fails, the collation unit 7 transmits passage information indicating that passage is not permitted to the passage permission device 24.
  • FIG. 7 is a diagram for explaining collation (third collation).
  • In the example of FIG. 7, collation scores (third collation scores) of 0.85 and 0.75 are generated between the query feature amount QFV and the provisional master feature amounts FV1 and FV4. Since the generated collation scores 0.85 and 0.75 are equal to or higher than the threshold value of 0.50, it is determined that the face collation of face recognition is successful. After that, the collation unit 7 transmits passage information indicating that passage is permitted to the passage permission device 24.
  • FIG. 8 is a diagram for explaining an example of the operation of the face recognition device.
  • In the following, FIGS. 2 to 7 will be referred to as appropriate.
  • In the present embodiment, the face authentication method is implemented by operating the face recognition device. Therefore, the following description of the operation of the face recognition device also serves as the description of the face authentication method in the present embodiment.
  • First, the detection unit 2 detects a region including a face image from the captured image (step A1). Specifically, in step A1, the detection unit 2 first acquires a plurality of images of the person 30 captured by the imaging device 21 in the shooting area (areas A1 and A2). Subsequently, in step A1, the detection unit 2 detects a face image having a face region corresponding to the face from each of the plurality of captured images.
  • Next, the extraction unit 3 extracts facial feature amounts using the detected face images. Specifically, the extraction unit 3 first acquires a plurality of face images from the detection unit 2. Subsequently, the extraction unit 3 extracts a facial feature amount for each face image (step A2).
  • When the collation unit 4 acquires identification information identifying the user (step A3: Yes), it performs collation (first collation) using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount associated with the identification information registered in advance in the storage device 23 (step A4). If the identification information has not been acquired (step A3: No), the process returns to step A1 and continues.
  • Specifically, the collation unit 4 first acquires, as the query feature amount, one feature amount corresponding to a face image captured in the area A1. For example, the feature amount corresponding to the face image captured in area A1 immediately before the identification information is acquired, at a preset time before the time at which the identification information is acquired, or immediately after the identification information is acquired is used as the query feature amount.
  • In step A4, the collation unit 4 acquires the master feature amount from the storage device 23 based on the identification information. Subsequently, in step A4, the collation unit 4 generates a collation score (first collation score) using the acquired query feature amount and master feature amount. The collation unit 4 then compares the collation score with the threshold value (first collation threshold value), makes a collation determination (first collation determination), and acquires the collation result (the result of face collation in face recognition). For example, if the collation score is equal to or higher than the threshold value, the collation succeeds.
  • When the collation succeeds (step A5: No), the collation unit 4 transmits passage information indicating that passage is permitted to the passage permission device 24 (step A10).
  • When the collation fails (step A5: Yes), the collation unit 4 transmits passage information indicating that passage is not permitted to the collation unit 5.
  • When the collation (first collation) fails, the collation unit 5 performs collation (second collation) using the master feature amount and the provisional master candidate feature amounts corresponding to one or more images captured before the time at which the image corresponding to the query feature amount was captured (step A6).
  • Specifically, in step A6, the collation unit 5 first acquires passage information indicating that the collation has failed from the collation unit 4. Subsequently, in step A6, the collation unit 5 acquires the provisional master candidate feature amounts, which are feature amounts extracted by the extraction unit 3 using the images of the person 30 captured in the area A2.
  • Subsequently, in step A6, the collation unit 5 generates a collation score (second collation score) for each provisional master candidate feature amount using the acquired provisional master candidate feature amounts and the master feature amount. In step A6, the collation unit 5 then compares the collation score calculated for each provisional master candidate feature amount with the threshold value (second collation threshold value), makes a collation determination (second collation determination), and acquires the collation result.
  • The second collation threshold is determined by, for example, an experiment, a simulation, or the like.
  • Next, the determination unit 6 determines the provisional master feature amount from the provisional master candidate feature amounts based on the collation (second collation) result of the collation unit 5 (step A7). Specifically, in step A7, the determination unit 6 first acquires the collation result from the collation unit 5. Subsequently, in step A7, the determination unit 6 selects the provisional master candidate feature amounts having a collation score equal to or higher than the threshold value, determines them as the provisional master feature amounts, and stores them.
  • Next, the collation unit 7 performs collation (third collation) using the query feature amount and the provisional master feature amount (step A8). Specifically, in step A8, the collation unit 7 first generates a collation score (third collation score) using the query feature amount and the provisional master feature amount. Subsequently, in step A8, the collation unit 7 compares the collation score with the threshold value (third collation threshold value), makes a collation determination (third collation determination), and acquires the collation result (the result of face collation in face recognition).
  • The third collation threshold is determined by, for example, an experiment, a simulation, or the like.
  • When the collation succeeds (step A9: Yes), the collation unit 7 transmits passage information indicating that passage is permitted to the passage permission device 24 (step A10). On the other hand, when the collation fails (step A9: No), the collation unit 7 transmits passage information indicating that passage is not permitted to the passage permission device 24 (step A11).
  • Walk-through face recognition can be realized by repeating the processes of steps A1 to A11 described above.
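Steps A4 through A11 can be condensed into one hedged sketch: attempt the first collation; on failure, derive provisional master feature amounts via the second collation and retry with the third. All names, the score function, and the scalar "feature amounts" in the test are illustrative simplifications, not the patent's implementation:

```python
def walk_through_authentication(query_fv, id_info, candidate_fvs,
                                master_store, score_fn, threshold=0.50):
    """Condensed sketch of steps A4-A11.  Returns True when passage is
    permitted.  candidate_fvs are the area-A2 feature amounts captured
    before the image corresponding to the query feature amount."""
    master_fv = master_store.get(id_info)
    if master_fv is None:
        return False  # no master registered for this identification information
    # First collation (steps A4-A5): query vs. registered master.
    if score_fn(query_fv, master_fv) >= threshold:
        return True
    # Second collation (step A6) and determination (step A7):
    # keep candidates scoring at or above the threshold against the master.
    provisional = [fv for fv in candidate_fvs
                   if score_fn(fv, master_fv) >= threshold]
    # Third collation (steps A8-A9): query vs. each provisional master.
    return any(score_fn(query_fv, fv) >= threshold for fv in provisional)
```

A single threshold is used here for brevity; the patent allows the first, second, and third collation thresholds to be set independently.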
  • As described above, according to the present embodiment, the provisional master feature amount is generated and the third collation is performed using the query feature amount and the generated provisional master feature amount. This makes it less likely that the genuine person 30 is rejected because the first collation happened to fail.
  • The program according to the embodiment of the present invention may be any program that causes a computer to execute steps A1 to A11 shown in FIG. 8. By installing this program on a computer and executing it, the face recognition device and the face authentication method according to the present embodiment can be realized.
  • In that case, the processor of the computer functions as the detection unit 2, the extraction unit 3, the collation units 4, 5, and 7, and the determination unit 6 to perform processing.
  • Alternatively, each computer may function as any one of the detection unit 2, the extraction unit 3, the collation units 4, 5, and 7, and the determination unit 6.
  • FIG. 9 is a block diagram showing an example of a computer that realizes the face recognition device according to the embodiment of the present invention.
  • The computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These parts are connected to one another via a bus 121 so as to be capable of data communication.
  • The computer 110 may also include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to or in place of the CPU 111.
  • The CPU 111 expands the programs (codes) of the present embodiment stored in the storage device 113 into the main memory 112 and executes them in a predetermined order to perform various operations.
  • The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
  • The program according to the present embodiment is provided in a state of being stored in a computer-readable recording medium 120.
  • The program in the present embodiment may also be distributed over the Internet, connected via the communication interface 117.
  • Specific examples of the storage device 113 include, in addition to a hard disk drive, a semiconductor storage device such as a flash memory.
  • The input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and mouse.
  • The display controller 115 is connected to the display device 119 and controls display on the display device 119.
  • The data reader/writer 116 mediates data transmission between the CPU 111 and the recording medium 120, reads the program from the recording medium 120, and writes processing results in the computer 110 to the recording medium 120.
  • The communication interface 117 mediates data transmission between the CPU 111 and another computer.
  • Specific examples of the recording medium 120 include a general-purpose semiconductor storage device such as CF (CompactFlash (registered trademark)) or SD (Secure Digital), a magnetic recording medium such as a flexible disk, and an optical recording medium such as a CD-ROM (Compact Disk Read Only Memory).
  • The face recognition device 1 in the present embodiment can also be realized by using hardware corresponding to each part instead of a computer on which the program is installed. Furthermore, the face recognition device 1 may be realized partly by a program and partly by hardware.
  • Appendix 1 A face recognition device including a detection unit that detects a face image corresponding to a face from an image of a user captured in the shooting area, an extraction unit that extracts a feature amount using the detected face image, and a first collation unit that, when identification information identifying the user is acquired, performs a first collation using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount associated with the identification information registered in advance in the master storage unit.
  • Appendix 2 The face recognition device described in Appendix 1, further including a second collation unit that, when the first collation fails, performs a second collation using the master feature amount and the provisional master candidate feature amounts corresponding to one or more images captured before the time at which the image corresponding to the query feature amount was captured, and a determination unit that determines the provisional master feature amount from the provisional master candidate feature amounts based on the result of the second collation.
  • Appendix 3 The face recognition device described in Appendix 2, further including a third collation unit that performs a third collation using the query feature amount and the provisional master feature amount when the provisional master feature amount is determined.
	• (Appendix 4) The face recognition device according to Appendix 3, wherein, when the third collation succeeds, the third collation unit transmits passage information indicating that the user is permitted to pass to a passage permission device.
	• (Appendix 5) A face recognition method comprising: a detection step of detecting, from an image of a user captured in a shooting area, a face image corresponding to a face; an extraction step of extracting a feature amount using the detected face image; and a first collation step of, when identification information identifying the user is acquired, performing a first collation using a query feature amount acquired before the time the identification information was acquired and a master feature amount registered in advance in a master storage unit in association with the identification information.
	• (Appendix 6) The face recognition method according to Appendix 5, further comprising: a second collation step of, when the first collation fails, performing a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the time the image corresponding to the query feature amount was captured; and a determination step of determining a provisional master feature amount from the provisional master candidate feature amounts based on the result of the second collation.
	• (Appendix 7) The face recognition method according to Appendix 6, further comprising a third collation step of, when the provisional master feature amount is determined, performing a third collation using the query feature amount and the provisional master feature amount.
	• (Appendix 8) The face recognition method according to Appendix 7, wherein, in the third collation step, when the third collation succeeds, passage information indicating that the user is permitted to pass is transmitted to a passage permission device.
	• (Appendix 9) A computer-readable recording medium recording a program including instructions that cause a computer to execute: a detection step of detecting, from an image of a user captured in a shooting area, a face image corresponding to a face; an extraction step of extracting a feature amount using the detected face image; and a first collation step of, when identification information identifying the user is acquired, performing a first collation using a query feature amount acquired before the time the identification information was acquired and a master feature amount registered in advance in a master storage unit in association with the identification information.
	• (Appendix 10) The computer-readable recording medium according to Appendix 9, wherein the program further includes instructions that cause the computer to execute: a second collation step of, when the first collation fails, performing a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the time the image corresponding to the query feature amount was captured; and a determination step of determining a provisional master feature amount from the provisional master candidate feature amounts based on the result of the second collation.
	• (Appendix 11) The computer-readable recording medium according to Appendix 10, wherein the program further includes instructions that cause the computer to execute a third collation step of, when the provisional master feature amount is determined, performing a third collation using the query feature amount and the provisional master feature amount.
	• (Appendix 12) The computer-readable recording medium according to Appendix 11, wherein, in the third collation step, when the third collation succeeds, passage information indicating that the user is permitted to pass is transmitted to a passage permission device.
	• The present invention is useful in fields where walk-through face recognition is required.

Abstract

A facial authentication device 1 for preventing acts of impersonation includes: a detecting unit 2 which uses an image, obtained by imaging a user in an image capture area, to detect from the image a facial image corresponding to a face; an extracting unit 3 which extracts a feature quantity using the detected facial image; and a first comparing unit 4 which, if identification information identifying the user has been acquired, performs a first comparison using a query feature quantity acquired before the time at which the identification information was acquired, and a master feature quantity that is recorded in advance in a master storage unit and that is associated with the identification information.

Description

Face recognition device, face recognition method, and computer-readable recording medium
The present invention relates to a face recognition device and a face recognition method for performing face authentication, and further to a computer-readable recording medium on which a program for realizing these is recorded.
Walk-through face recognition is known as an authentication method for verifying identity in entrance/exit management. In walk-through face recognition, identity is verified using face images of a user moving toward an entrance/exit gate together with a face image captured in advance.
As a related technique, Patent Document 1 discloses a face recognition system that performs face authentication on a user passing through an authentication area and determines whether to permit the user to pass. According to that face recognition system, a face image of the user extracted from an input image is collated with a registered face image. Then, when the authentication succeeds and the size of the region representing the authenticated user in the input image is equal to or larger than a predetermined size, the user is permitted to pass.
Japanese Unexamined Patent Publication No. 2015-001790
However, in conventional walk-through face recognition, face images of a person moving toward the entrance/exit gate are accumulated in the shooting area, the accumulated face images are collated with a registered face image, and if even one of the face images is successfully collated, passage through the entrance/exit gate is permitted. Spoofing is therefore possible.
For example, when two users are in the shooting area, if one of them succeeds in face authentication, passage through the entrance/exit gate is permitted, so the other person can pass through the gate by impersonating the first.
An example of an object of the present invention is to provide a face recognition device, a face recognition method, and a computer-readable recording medium that prevent spoofing.
In order to achieve the above object, a face recognition device according to one aspect of the present invention includes:
a detection unit that detects, from an image of a user captured in a shooting area, a face image corresponding to a face;
an extraction unit that extracts a feature amount using the detected face image; and
a first collation unit that, when identification information identifying the user is acquired, performs a first collation using a query feature amount acquired before the time the identification information was acquired and a master feature amount registered in advance in a master storage unit in association with the identification information.
Further, in order to achieve the above object, a face recognition method according to one aspect of the present invention includes:
a detection step of detecting, from an image of a user captured in a shooting area, a face image corresponding to a face;
an extraction step of extracting a feature amount using the detected face image; and
a first collation step of, when identification information identifying the user is acquired, performing a first collation using a query feature amount acquired before the time the identification information was acquired and a master feature amount registered in advance in a master storage unit in association with the identification information.
Furthermore, in order to achieve the above object, a computer-readable recording medium according to one aspect of the present invention records a program including instructions that cause a computer to execute:
a detection step of detecting, from an image of a user captured in a shooting area, a face image corresponding to a face;
an extraction step of extracting a feature amount using the detected face image; and
a first collation step of, when identification information identifying the user is acquired, performing a first collation using a query feature amount acquired before the time the identification information was acquired and a master feature amount registered in advance in a master storage unit in association with the identification information.
As described above, according to the present invention, spoofing can be prevented.
FIG. 1 is a diagram for explaining an example of the face recognition device.
FIG. 2 is a diagram for explaining an example of a system having the face recognition device.
FIG. 3 is a diagram for explaining walk-through face recognition.
FIG. 4 is a diagram for explaining the collation (first collation).
FIG. 5 is a diagram for explaining the collation (second collation).
FIG. 6 is a diagram for explaining the determination of the provisional master feature amount.
FIG. 7 is a diagram for explaining the collation (third collation).
FIG. 8 is a diagram for explaining an example of the operation of the face recognition device.
FIG. 9 is a block diagram showing an example of a computer that realizes the face recognition device.
(Embodiment)
Hereinafter, an embodiment of the present invention will be described with reference to FIGS. 1 to 9.
[Device configuration]
First, the configuration of the face recognition device 1 according to the present embodiment will be described with reference to FIG. 1.
The face recognition device shown in FIG. 1 is a device that prevents spoofing. As shown in FIG. 1, the face recognition device 1 has a detection unit 2, an extraction unit 3, and a collation unit 4 (first collation unit).
Of these, the detection unit 2 detects a face image corresponding to a face from an image of a user captured in the shooting area. The extraction unit 3 extracts a feature amount using the detected face image. When the collation unit 4 acquires identification information identifying the user, it performs a collation (first collation) using a query feature amount acquired before the time the identification information was acquired and a master feature amount registered in advance in the master storage unit in association with the identification information.
The shooting area is, for example, an area in which one or more imaging devices capture a user moving toward the entrance/exit gate in walk-through face recognition. The shooting area is adjusted using the distance between the eyes in the face image. The identification information is, for example, information for identifying the user, read by an ID reader from an ID card the user carries.
The query feature amount is, for example, a feature amount extracted from the face image captured immediately before the face recognition device 1 acquired the identification information, or from a single face image captured within a preset time before the identification information was acquired. The preset time is desirably within one second, for example. The query feature amount may also be extracted from a single face image captured immediately after the identification information was acquired.
The master storage unit is a storage device that stores information in which a master feature amount extracted from a user's face image registered in advance is associated with the user's identification information. This information is registered in the master storage unit beforehand, for example, when the user purchases a ticket.
In the collation (first collation), for example, a collation score is calculated from the query feature amount and the master feature amount, and the collation score is compared with a threshold stored in advance. The threshold is determined by, for example, experiments or simulations. The collation score is the similarity between the compared feature amounts, measured by a machine-learning classifier.
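The scoring above can be sketched as follows. This is a minimal illustration under two assumptions not stated in the embodiment: feature amounts are fixed-length numeric vectors, and cosine similarity (mapped onto [0, 1]) stands in for the machine-learning classifier's similarity measure.

```python
import math

def collation_score(query_fv, master_fv):
    """Cosine similarity between two feature vectors, mapped onto [0, 1].
    A stand-in for the classifier-measured similarity in the embodiment."""
    dot = sum(q * m for q, m in zip(query_fv, master_fv))
    norm_q = math.sqrt(sum(q * q for q in query_fv))
    norm_m = math.sqrt(sum(m * m for m in master_fv))
    return (dot / (norm_q * norm_m) + 1.0) / 2.0

def first_collation(query_fv, master_fv, threshold=0.50):
    """First collation: succeeds when the score meets the stored threshold."""
    return collation_score(query_fv, master_fv) >= threshold
```

With the threshold of 0.50 used in the figures, identical feature vectors collate successfully while opposed ones fail.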
As described above, in the present embodiment, face images of a person moving toward the gate are not accumulated for authentication as in the conventional method; the collation uses a query feature amount acquired before the time the identification information was acquired, so spoofing can be prevented.
[System configuration]
Subsequently, the configuration of the face recognition device 1 according to the present embodiment will be described more specifically with reference to FIG. 2. FIG. 2 is a diagram for explaining an example of a system having the face recognition device. FIG. 3 is a diagram for explaining walk-through face recognition.
As shown in FIG. 2, the system 20 in the present embodiment has, in addition to the face recognition device 1, one or more imaging devices 21 (21a, 21b), an identification device 22, a storage device 23, and a passage permission device 24. The face recognition device 1 has, in addition to the detection unit 2, the extraction unit 3, and the collation unit 4, a collation unit 5 (second collation unit), a determination unit 6, and a collation unit 7 (third collation unit).
The face recognition device 1 may be realized using an information processing device such as a server computer or a personal computer. The face recognition device 1 may also be provided inside the identification device 22 or the passage permission device 24.
The imaging device 21 transmits captured images to the face recognition device 1. Specifically, the imaging device 21 captures a subject in a preset shooting area. In the example of FIG. 3, the person 30 is imaged at preset intervals in the shooting areas (areas A1 and A2).
In the example of FIG. 3, each of the imaging devices 21a and 21b images the person 30 and transmits the image to the face recognition device 1. The imaging device 21 is, for example, a camera.
In the example of FIG. 3, after the person 30 carrying the ID card 31 has moved from area A2 to area A1, the person 30 has the identification device 22 read a display from which the identification information attached to the ID card 31 can be read. Although the ID card 31 is used in the example of FIG. 3, identification information displayed on a smartphone or the like may be read instead.
Area A1 shown in FIG. 3 is an area for acquiring an image of the person 30 immediately before or immediately after the person 30 has the identification device 22 read the identification information, or at a preset time before the identification information is read. Area A2 shown in FIG. 3 is an area for accumulating images of the person 30.
The identification device 22 is, for example, an ID reader that reads identification information for identifying a user from the ID card 31 carried by the person 30. The ID card 31 may be, for example, a ticket or a terminal device such as a smartphone. The identification device 22 reads the identification information from a machine-readable display (for example, a two-dimensional code) shown on the ticket, smartphone, or the like. The identification information may also be read from an IC chip provided in the ID card 31.
The storage device 23 stores the master feature amount extracted from a user's face image in association with the user's identification information. Specifically, the storage device 23 stores information in which the master feature amount extracted from the user's face image registered in advance is associated with the user's identification information. The storage device 23 is, for example, a database. The storage device 23 may be provided inside or outside the face recognition device 1.
The passage permission device 24 is a device that permits the person 30 to pass. Specifically, the passage permission device 24 decides whether to let the person 30 pass based on the content of the passage information received from the face recognition device 1. When the passage permission device 24 is a gate device, it opens a door or the like provided in the gate device when permitting the person 30 to pass.
When a speaker, a monitor, or the like is connected, the passage permission device 24 may notify the person 30 that passage is permitted, using sound, images, or the like. The passage permission device 24 may also be provided inside the face recognition device 1.
The face recognition device will now be described in detail.
The detection unit 2 detects a region including a face image from a captured image. Specifically, the detection unit 2 first acquires a plurality of images of the person 30 captured by the imaging device 21 in the shooting area. The detection unit 2 then detects, from each of the captured images, a face image having a face region corresponding to a face.
In face detection, rectangles are cut out in order from the edge of the captured image, and it is determined whether each rectangle contains a face. Pattern recognition techniques are used for the face/non-face determination, such as support vector machines, neural networks, and generalized learning vector quantization.
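The rectangle scan described above can be sketched as follows. The `is_face` callable is a stand-in for the actual face/non-face classifier (SVM, neural network, etc.); the window size, stride, and function names are illustrative assumptions, not part of the embodiment.

```python
def detect_faces(image, window=24, stride=8, is_face=None):
    """Slide a square window across the image and keep the rectangles that
    the face/non-face classifier accepts. `image` is a 2-D list (H x W)."""
    h, w = len(image), len(image[0])
    hits = []
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            # cut the rectangle out of the captured image
            patch = [row[left:left + window] for row in image[top:top + window]]
            if is_face(patch):
                hits.append((top, left, window, window))
    return hits
```

A production detector would also scan at multiple scales and merge overlapping hits; this sketch shows only the single-scale scan-and-classify loop.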
The extraction unit 3 extracts facial feature amounts using the detected face images. Specifically, the extraction unit 3 first acquires the face images from the detection unit 2 and then extracts a facial feature amount for each face image.
In feature extraction, feature point information such as the pupils, nose, and corners of the mouth is extracted from the detected face image. Common methods include gradient histograms, support vector machines, neural networks, and optimization or regression using face shape models.
When the collation unit 4 acquires identification information identifying the user, it performs the collation (first collation) using a query feature amount acquired before the time the identification information was acquired and the master feature amount registered in advance in the storage device 23 in association with the identification information.
Specifically, the collation unit 4 first acquires, as the query feature amount, the single feature amount corresponding to the face image captured in area A1. For example, the query feature amount is the feature amount corresponding to the face image captured immediately before the identification information was acquired, the face image captured within a preset time before the identification information was acquired, or the face image captured immediately after the identification information was acquired.
Subsequently, the collation unit 4 acquires the master feature amount from the storage device 23 based on the identification information, generates a collation score (first collation score) from the acquired query feature amount and master feature amount, compares the collation score with a threshold (first collation threshold) to make the collation determination (first collation determination), and obtains the collation result (the face-collation result of the face authentication). For example, the collation succeeds if the collation score is equal to or greater than the threshold.
When the collation succeeds, the collation unit 4 transmits passage information indicating that passage is permitted to the passage permission device 24. When the collation fails, the collation unit 4 transmits passage information indicating that passage is not permitted to the collation unit 5.
FIG. 4 is a diagram for explaining the collation (first collation). In the example of FIG. 4A, the collation score is 0.75 and the threshold (first collation threshold) is 0.50, so the collation succeeds and passage information indicating that passage is permitted is transmitted to the passage permission device 24. In the example of FIG. 4B, the collation score is 0.40 and the threshold is 0.50, so the collation fails and passage information indicating that passage is not permitted is transmitted to the collation unit 5.
When the collation (first collation) fails, the collation unit 5 performs a collation (second collation) using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the time the image corresponding to the query feature amount was captured.
Specifically, the collation unit 5 first acquires from the collation unit 4 the passage information indicating that the collation failed. The collation unit 5 then acquires the provisional master candidate feature amounts, which are the feature amounts extracted by the extraction unit 3 from the images of the person 30 accumulated in area A2.
Subsequently, the collation unit 5 generates a collation score (second collation score) for each provisional master candidate feature amount using the master feature amount, compares each score with a threshold (second collation threshold) to make the collation determination (second collation determination), and obtains the collation result. The second collation threshold is determined by, for example, experiments or simulations.
FIG. 5 is a diagram for explaining the collation (second collation). In the example of FIG. 5, the collation scores (second collation scores) between the provisional master candidate feature amounts and the master feature amount, 0.75, 0.40, 0.30, and 0.50, are each compared with the threshold (second collation threshold) of 0.50, and the scores equal to or greater than the threshold are detected.
The determination unit 6 determines the provisional master feature amounts from the provisional master candidate feature amounts based on the result of the second collation by the collation unit 5. Specifically, the determination unit 6 first acquires the collation result from the collation unit 5, then selects the provisional master candidate feature amounts whose collation scores are equal to or greater than the threshold, and determines and stores them as the provisional master feature amounts.
FIG. 6 is a diagram for explaining the determination of the provisional master feature amounts. In the example of FIG. 6, the feature amounts FV1 and FV4, whose collation scores (second collation scores) are equal to or greater than the threshold of 0.50, are selected as the provisional master feature amounts.
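The selection shown in FIGS. 5 and 6 can be sketched as follows. The candidate names FV1 to FV4 and the scores come from the figures; the function name and the (name, score) pair representation are illustrative assumptions.

```python
def determine_provisional_masters(candidates, threshold=0.50):
    """Keep every provisional master candidate whose second collation score
    against the master feature amount is at or above the threshold.
    `candidates` is a list of (name, second_collation_score) pairs."""
    return [name for name, score in candidates if score >= threshold]
```

Applied to the scores in FIG. 5 (0.75, 0.40, 0.30, 0.50) with threshold 0.50, this keeps FV1 and FV4, matching FIG. 6.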
When the provisional master feature amounts are determined, the collation unit 7 performs a collation (third collation) using the query feature amount and the provisional master feature amounts. Specifically, the collation unit 7 first generates a collation score (third collation score) from the query feature amount and each provisional master feature amount. It then compares the collation scores with a threshold (third collation threshold), makes the collation determination (third collation determination), and obtains the collation result (the face-collation result of the face authentication).
For example, the collation succeeds if the collation scores are equal to or greater than the threshold. The third collation threshold is determined by, for example, experiments or simulations.
When the collation succeeds, the collation unit 7 transmits passage information indicating that passage is permitted to the passage permission device 24. When the collation fails, the collation unit 7 transmits passage information indicating that passage is not permitted to the passage permission device 24.
FIG. 7 is a diagram for explaining the collation (third collation). In the example of FIG. 7, the collation scores (third collation scores) between the query feature amount QFV and the provisional master feature amounts FV1 and FV4 are 0.85 and 0.75. Since both generated collation scores are equal to or greater than the threshold of 0.50, the face collation of the face authentication is determined to have succeeded. The collation unit 7 then transmits passage information indicating that passage is permitted to the passage permission device 24.
Although FIG. 7 shows an example in which the collation succeeds when all the collation scores are equal to or greater than the threshold, the collation may instead be considered successful when one or more scores, or a preset number of scores, are equal to or greater than the threshold.
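The decision policies above can be sketched as follows. By default every third collation score must meet the threshold, as in FIG. 7; the `required` parameter (a name chosen for this sketch) expresses the alternative "at least N scores" policy mentioned in the text.

```python
def third_collation(scores, threshold=0.50, required=None):
    """`scores`: third collation scores between the query feature amount and
    each provisional master feature amount. Returns True when the collation
    succeeds (passage permitted) under the chosen policy."""
    hits = sum(1 for s in scores if s >= threshold)
    if required is None:
        # FIG. 7 policy: every score must meet the threshold
        return len(scores) > 0 and hits == len(scores)
    # alternative policy: at least `required` scores meet the threshold
    return hits >= required
```

With the FIG. 7 scores [0.85, 0.75] and threshold 0.50, the default policy succeeds; if one score dropped to 0.40, the default policy would fail while `required=1` would still succeed.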
[装置動作]
 次に、本発明の実施の形態における顔認証装置の動作について説明する。図8は、顔認証装置の動作の一例を説明するための図である。以下の説明においては、適宜図2から図7を参照する。また、本実施の形態では、顔認証装置を動作させることによって、顔認証方法が実施される。よって、本実施の形態における顔認証方法の説明は、以下の顔認証装置の動作説明に代える。
[Device operation]
Next, the operation of the face recognition device according to the embodiment of the present invention will be described. FIG. 8 is a diagram for explaining an example of the operation of the face recognition device. In the following description, FIGS. 2 to 7 will be referred to as appropriate. Further, in the present embodiment, the face authentication method is implemented by operating the face authentication device. Therefore, the description of the face recognition method in the present embodiment is replaced with the following description of the operation of the face recognition device.
 図8に示すように、最初に、検出部2は、撮像した画像から顔画像を含む領域を検出する(ステップA1)。具体的には、ステップA1において、検出部2は、まず、撮影エリア(エリアA1、A2)において撮像装置21が撮像した、人物30の複数の画像を取得する。続いて、ステップA1において、検出部2は、撮像した複数の画像それぞれから、顔に対応する顔領域を有する顔画像を検出する。 As shown in FIG. 8, first, the detection unit 2 detects a region including a face image from the captured image (step A1). Specifically, in step A1, the detection unit 2 first acquires a plurality of images of the person 30 captured by the imaging device 21 in the photographing area (areas A1 and A2). Subsequently, in step A1, the detection unit 2 detects a face image having a face region corresponding to the face from each of the plurality of captured images.
 次に、抽出部3は、検出された顔画像を用いて顔の特徴量を抽出する。具体的には、抽出部3は、まず、検出部2から複数の顔画像を取得する。続いて、抽出部3は、顔画像それぞれに対して顔の特徴量を抽出する(ステップA2)。 Next, the extraction unit 3 extracts facial features using the detected facial image. Specifically, the extraction unit 3 first acquires a plurality of face images from the detection unit 2. Subsequently, the extraction unit 3 extracts facial features for each facial image (step A2).
 次に、照合部4は、利用者を識別する識別情報を取得した場合(ステップA3:Yes)、識別情報を取得した時点より前に取得したクエリ特徴量と、あらかじめ記憶装置23に登録されている識別情報に関連付けられたマスタ特徴量とを用いて照合(第一の照合)をする(ステップA4)。なお、識別情報を取得していない場合(ステップA3:No)、ステップA1に移行して、処理を継続する。 Next, when the collation unit 4 acquires identification information that identifies the user (step A3: Yes), it performs collation (first collation) using the query feature amount acquired before the time when the identification information was acquired and the master feature amount associated with the identification information registered in advance in the storage device 23 (step A4). If the identification information has not been acquired (step A3: No), the process returns to step A1 and continues.
 具体的には、ステップA4において、照合部4は、まず、エリアA1で撮像された顔画像に対応する一つの特徴量をクエリ特徴量として取得する。例えば、エリアA1において、識別情報を取得した直前に撮像された顔画像、又は、識別情報を取得した時点より前にあらかじめ設定された時間において撮像された顔画像、又は、識別情報を取得した直後に撮像された顔画像、に対応する特徴量をクエリ特徴量とする。 Specifically, in step A4, the collation unit 4 first acquires, as the query feature amount, one feature amount corresponding to a face image captured in the area A1. For example, the query feature amount is the feature amount corresponding to the face image captured immediately before the identification information was acquired, to a face image captured within a preset time before the identification information was acquired, or to the face image captured immediately after the identification information was acquired.
 続いて、ステップA4において、照合部4は、識別情報に基づいて、記憶装置23からマスタ特徴量を取得する。続いて、ステップA4において、照合部4は、取得したクエリ特徴量とマスタ特徴量を用いて、照合スコア(第一の照合スコア)を生成する。続いて、ステップA4において、照合部4は、照合スコアと閾値(第一の照合閾値)とを比較して、照合判定(第一の照合判定)をし、照合結果(顔認証のうち顔照合の結果)を取得する。例えば、照合スコアが閾値以上であれば照合に成功したとする。 Subsequently, in step A4, the collation unit 4 acquires the master feature amount from the storage device 23 based on the identification information. Subsequently, in step A4, the collation unit 4 generates a collation score (first collation score) using the acquired query feature amount and master feature amount. Subsequently, in step A4, the collation unit 4 compares the collation score with a threshold value (first collation threshold value), makes a collation determination (first collation determination), and obtains the collation result (the result of face collation in face authentication). For example, the collation is considered successful if the collation score is equal to or higher than the threshold value.
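The embodiment does not fix how a collation score is computed from two feature amounts; cosine similarity between feature vectors is one common choice in face recognition. A minimal sketch of the score-and-threshold check in step A4, where the metric, function names, and the threshold value are all illustrative assumptions, not part of the disclosure:

```python
import math

def cosine_score(query, master):
    # Collation score as the cosine similarity of two feature vectors
    # (an assumed metric; the disclosure does not specify one).
    dot = sum(q * m for q, m in zip(query, master))
    nq = math.sqrt(sum(q * q for q in query))
    nm = math.sqrt(sum(m * m for m in master))
    return dot / (nq * nm)

def first_collation(query_fv, master_fv, threshold=0.5):
    # Step A4: succeed when the first collation score reaches the
    # first collation threshold.
    return cosine_score(query_fv, master_fv) >= threshold

print(first_collation([0.9, 0.1, 0.3], [0.8, 0.2, 0.4]))  # similar vectors -> True
```

In practice the threshold would be tuned by experiment or simulation, as the text states for the other collation thresholds.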
 その後、照合部4は、照合に成功した場合(ステップA5:No)、通行を許可することを表す通行情報を、通行許可装置24へ送信する(ステップA10)。照合部4は、照合に失敗した場合(ステップA5:Yes)、通行を許可しないことを表す通行情報を、照合部5へ送信する。 After that, when the collation is successful (step A5: No), the collation unit 4 transmits the passage information indicating that the passage is permitted to the passage permission device 24 (step A10). When the collation fails (step A5: Yes), the collation unit 4 transmits the passage information indicating that the passage is not permitted to the collation unit 5.
 次に、照合部5は、照合(第一の照合)に失敗した場合、クエリ特徴量に対応する画像を撮像した時点より前に撮像された一つ以上の画像に対応する仮マスタ候補特徴量と、マスタ特徴量とを用いて照合(第二の照合)する(ステップA6)。 Next, when the first collation fails, the collation unit 5 performs collation (second collation) using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the image corresponding to the query feature amount was captured (step A6).
 具体的には、ステップA6において、照合部5は、まず、照合に失敗したことを表す通行情報を照合部4から取得する。続いて、ステップA6において、照合部5は、エリアA2で撮像された人物30の画像(撮り溜めた画像)を用いて、抽出部3により抽出された特徴量である仮マスタ候補特徴量を取得する。 Specifically, in step A6, the collation unit 5 first acquires, from the collation unit 4, the passage information indicating that the collation has failed. Subsequently, in step A6, the collation unit 5 acquires the provisional master candidate feature amounts, which are the feature amounts extracted by the extraction unit 3 from the images of the person 30 captured and accumulated in the area A2.
 続いて、ステップA6において、照合部5は、取得した仮マスタ候補特徴量とマスタ特徴量とを用いて、仮マスタ候補特徴量ごとに照合スコア(第二の照合スコア)を生成する。続いて、ステップA6において、照合部5は、仮マスタ候補特徴量ごとに算出した照合スコアと閾値(第二の照合閾値)とを比較して、照合判定(第二の照合判定)をし、照合結果を取得する。第二の照合閾値は、例えば、実験、シミュレーションなどにより決定する。 Subsequently, in step A6, the collation unit 5 generates a collation score (second collation score) for each provisional master candidate feature amount using the acquired provisional master candidate feature amounts and the master feature amount. Subsequently, in step A6, the collation unit 5 compares the collation score calculated for each provisional master candidate feature amount with a threshold value (second collation threshold value), makes a collation determination (second collation determination), and obtains the collation result. The second collation threshold is determined by, for example, experiments or simulations.
 次に、決定部6は、照合部5の照合(第二の照合)結果に基づいて、仮マスタ候補特徴量から仮マスタ特徴量を決定する(ステップA7)。具体的には、ステップA7において、決定部6は、まず、照合部5から照合結果を取得する。続いて、ステップA7において、決定部6は、照合スコアが閾値以上の仮マスタ候補特徴量を選択して、仮マスタ特徴量を決定して記憶する。 Next, the determination unit 6 determines the temporary master feature amount from the temporary master candidate feature amount based on the collation (second collation) result of the collation unit 5 (step A7). Specifically, in step A7, the determination unit 6 first acquires the collation result from the collation unit 5. Subsequently, in step A7, the determination unit 6 selects a provisional master candidate feature amount having a collation score equal to or higher than the threshold value, determines the provisional master feature amount, and stores the provisional master feature amount.
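Steps A6 and A7 — scoring each provisional master candidate against the registered master feature and keeping those at or above the second collation threshold — can be sketched as follows. The scoring function, names, and values are hypothetical stand-ins for illustration only.

```python
def select_provisional_masters(candidates, master_fv, score_fn, threshold=0.5):
    # Step A6: score each candidate against the master feature amount;
    # step A7: keep candidates whose score reaches the threshold.
    scored = {name: score_fn(fv, master_fv) for name, fv in candidates.items()}
    return [name for name, s in scored.items() if s >= threshold]

def toy_score(a, b):
    # Toy scoring function: 1 minus mean absolute difference
    # (purely illustrative; the disclosure does not fix the metric).
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

candidates = {"FV1": [0.8, 0.2], "FV2": [0.1, 0.9], "FV4": [0.7, 0.3]}
master = [0.75, 0.25]
print(select_provisional_masters(candidates, master, toy_score))  # → ['FV1', 'FV4']
```

Only the surviving candidates (here FV1 and FV4, matching the Fig. 7 naming) become provisional master feature amounts used in the third collation.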
 次に、照合部7は、仮マスタ特徴量が決定した場合、クエリ特徴量と仮マスタ特徴量とを用いて照合(第三の照合)をする(ステップA8)。具体的には、ステップA8において、照合部7は、まず、クエリ特徴量と仮マスタ特徴量とを用いて、照合スコア(第三の照合スコア)を生成する。続いて、ステップA8において、照合部7は、照合スコアと閾値(第三の照合閾値)とを比較して、照合判定(第三の照合判定)をし、照合結果(顔認証のうち顔照合の結果)を取得する。 Next, when the provisional master feature amount is determined, the collation unit 7 performs collation (third collation) using the query feature amount and the provisional master feature amount (step A8). Specifically, in step A8, the collation unit 7 first generates a collation score (third collation score) using the query feature amount and the provisional master feature amount. Subsequently, in step A8, the collation unit 7 compares the collation score with a threshold value (third collation threshold value), makes a collation determination (third collation determination), and obtains the collation result (the result of face collation in face authentication).
 例えば、照合スコアが閾値以上であれば照合に成功したとする。第三の照合閾値は、例えば、実験、シミュレーションなどにより決定する。 For example, the collation is considered successful if the collation score is equal to or higher than the threshold value. The third collation threshold is determined by, for example, experiments or simulations.
 その後、ステップA8において、照合部7は、照合に成功した場合(ステップA9:Yes)、通行を許可することを表す通行情報を、通行許可装置24へ送信する(ステップA10)。対して、照合部7は、照合に失敗した場合(ステップA9:No)、通行を許可しないことを表す通行情報を、通行許可装置24へ送信する(ステップA11)。 After that, in step A8, when the collation is successful (step A9: Yes), the collation unit 7 transmits the passage information indicating that the passage is permitted to the passage permission device 24 (step A10). On the other hand, when the collation fails (step A9: No), the collation unit 7 transmits the passage information indicating that the passage is not permitted to the passage permission device 24 (step A11).
 上述したステップA1からA11の処理を繰り返すことでウォークスルー顔認証を実現することができる。 Walk-through face recognition can be realized by repeating the processes of steps A1 to A11 described above.
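The overall flow of steps A1 to A11 can be condensed into a single control-flow sketch. Every helper below is a hypothetical stand-in for the corresponding unit (detection unit 2, extraction unit 3, collation units 4, 5, and 7, determination unit 6), and the scalar "features" are purely illustrative; this is a sketch of the described flow, not the actual implementation.

```python
def walkthrough_authentication(frames, id_info, master_fv, helpers, thr=0.5):
    # A1-A2: detect faces in the captured frames and extract features.
    features = [helpers["extract"](f) for f in helpers["detect"](frames)]
    if id_info is None:
        return None                          # A3: No -> keep capturing
    query_fv = features[-1]                  # feature acquired around the ID read
    # A4-A5: first collation against the registered master feature amount.
    if helpers["score"](query_fv, master_fv) >= thr:
        return "permit"                      # A10
    # A6-A7: second collation selects provisional master feature amounts
    # from features of earlier images.
    provisional = [fv for fv in features[:-1]
                   if helpers["score"](fv, master_fv) >= thr]
    # A8-A9: third collation of the query against the provisional masters.
    if provisional and all(helpers["score"](query_fv, fv) >= thr
                           for fv in provisional):
        return "permit"                      # A10
    return "deny"                            # A11
```

Calling this once per ID-read event, inside the capture loop, corresponds to repeating steps A1 to A11 for walk-through face authentication.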
[本実施の形態の効果]
 以上のように本実施の形態によれば、従来のように、ゲートに移動する人物の顔画像を撮り溜めせず、識別情報を取得した時点より前に取得したクエリ特徴量を用いて照合するので、なりすまし行為を防止することができる。
[Effect of this embodiment]
 As described above, according to the present embodiment, collation is performed using the query feature amount acquired before the time when the identification information was acquired, without accumulating captured face images of a person moving toward the gate as in conventional methods, so spoofing can be prevented.
 また、本実施の形態によれば、第一の照合に失敗した場合、仮マスタ特徴量を生成して、クエリ特徴量と生成した仮マスタ特徴量とを用いて第三の照合をするので、人物30本人であるにもかかわらず照合に失敗する本人拒否が発生しにくくなる。 Further, according to the present embodiment, when the first collation fails, provisional master feature amounts are generated and a third collation is performed using the query feature amount and the generated provisional master feature amounts. This makes false rejection, in which collation fails even though the person 30 is the genuine registered user, less likely to occur.
[プログラム]
 本発明の実施の形態におけるプログラムは、コンピュータに、図8に示すステップA1からA11を実行させるプログラムであればよい。このプログラムをコンピュータにインストールし、実行することによって、本実施の形態における顔認証装置と顔認証方法とを実現することができる。この場合、コンピュータのプロセッサは、検出部2、抽出部3、照合部4、5、7、決定部6として機能し、処理を行なう。
[program]
 The program according to the embodiment of the present invention may be any program that causes a computer to execute steps A1 to A11 shown in FIG. 8. By installing this program on a computer and executing it, the face authentication device and the face authentication method according to the present embodiment can be realized. In this case, the processor of the computer functions as the detection unit 2, the extraction unit 3, the collation units 4, 5, and 7, and the determination unit 6, and performs the processing.
 また、本実施の形態におけるプログラムは、複数のコンピュータによって構築されたコンピュータシステムによって実行されてもよい。この場合は、例えば、各コンピュータが、それぞれ、検出部2、抽出部3、照合部4、5、7、決定部6のいずれかとして機能してもよい。 Further, the program in the present embodiment may be executed by a computer system constructed from a plurality of computers. In this case, for example, each computer may function as any one of the detection unit 2, the extraction unit 3, the collation units 4, 5, and 7, and the determination unit 6.
[物理構成]
 ここで、実施の形態におけるプログラムを実行することによって、顔認証装置を実現するコンピュータについて図9を用いて説明する。図9は、本発明の実施の形態における顔認証装置を実現するコンピュータの一例を示すブロック図である。
[Physical configuration]
Here, a computer that realizes a face recognition device by executing the program according to the embodiment will be described with reference to FIG. FIG. 9 is a block diagram showing an example of a computer that realizes the face recognition device according to the embodiment of the present invention.
 図9に示すように、コンピュータ110は、CPU(Central Processing Unit)111と、メインメモリ112と、記憶装置113と、入力インターフェイス114と、表示コントローラ115と、データリーダ/ライタ116と、通信インターフェイス117とを備える。これらの各部は、バス121を介して、互いにデータ通信可能に接続される。なお、コンピュータ110は、CPU111に加えて、又はCPU111に代えて、GPU(Graphics Processing Unit)、又はFPGA(Field-Programmable Gate Array)を備えていてもよい。 As shown in FIG. 9, the computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected to each other via a bus 121 so as to be capable of data communication. The computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to, or in place of, the CPU 111.
 CPU111は、記憶装置113に格納された、本実施の形態におけるプログラム(コード)をメインメモリ112に展開し、これらを所定順序で実行することにより、各種の演算を実施する。メインメモリ112は、典型的には、DRAM(Dynamic Random Access Memory)などの揮発性の記憶装置である。また、本実施の形態におけるプログラムは、コンピュータ読み取り可能な記録媒体120に格納された状態で提供される。なお、本実施の形態におけるプログラムは、通信インターフェイス117を介して接続されたインターネット上で流通するものであってもよい。 The CPU 111 expands the programs (codes) of the present embodiment stored in the storage device 113 into the main memory 112 and executes them in a predetermined order to perform various operations. The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory). Further, the program according to the present embodiment is provided in a state of being stored in a computer-readable recording medium 120. The program in the present embodiment may be distributed on the Internet connected via the communication interface 117.
 また、記憶装置113の具体例としては、ハードディスクドライブの他、フラッシュメモリなどの半導体記憶装置があげられる。入力インターフェイス114は、CPU111と、キーボード及びマウスといった入力機器118との間のデータ伝送を仲介する。表示コントローラ115は、ディスプレイ装置119と接続され、ディスプレイ装置119での表示を制御する。 Further, as a specific example of the storage device 113, in addition to a hard disk drive, a semiconductor storage device such as a flash memory can be mentioned. The input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and mouse. The display controller 115 is connected to the display device 119 and controls the display on the display device 119.
 データリーダ/ライタ116は、CPU111と記録媒体120との間のデータ伝送を仲介し、記録媒体120からのプログラムの読み出し、及びコンピュータ110における処理結果の記録媒体120への書き込みを実行する。通信インターフェイス117は、CPU111と、他のコンピュータとの間のデータ伝送を仲介する。 The data reader / writer 116 mediates the data transmission between the CPU 111 and the recording medium 120, reads the program from the recording medium 120, and writes the processing result in the computer 110 to the recording medium 120. The communication interface 117 mediates data transmission between the CPU 111 and another computer.
 また、記録媒体120の具体例としては、CF(Compact Flash(登録商標))及びSD(Secure Digital)などの汎用的な半導体記憶デバイス、フレキシブルディスク(Flexible Disk)などの磁気記録媒体、又はCD-ROM(Compact Disk Read Only Memory)などの光学記録媒体があげられる。 Specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (Compact Flash (registered trademark)) and SD (Secure Digital) cards, magnetic recording media such as flexible disks, and optical recording media such as CD-ROMs (Compact Disk Read Only Memory).
 なお、本実施の形態における顔認証装置1は、プログラムがインストールされたコンピュータではなく、各部に対応したハードウェアを用いることによっても実現可能である。さらに顔認証装置1は、一部がプログラムで実現され、残りの部分がハードウェアで実現されていてもよい。 The face recognition device 1 in the present embodiment can also be realized by using hardware corresponding to each part instead of the computer on which the program is installed. Further, the face recognition device 1 may be partially realized by a program and the rest may be realized by hardware.
[付記]
 以上の実施の形態に関し、更に以下の付記を開示する。上述した実施の形態の一部又は全部は、以下に記載する(付記1)から(付記12)により表現することができるが、以下の記載に限定されるものではない。
[Additional Notes]
The following additional notes will be further disclosed with respect to the above embodiments. A part or all of the above-described embodiments can be expressed by the following descriptions (Appendix 1) to (Appendix 12), but the present invention is not limited to the following description.
(付記1)
 撮影エリアにいる利用者を撮像した画像を用いて、前記画像から顔に対応する顔画像を検出する、検出部と、
 検出した前記顔画像を用いて特徴量を抽出する、抽出部と、
 利用者を識別する識別情報を取得した場合、前記識別情報を取得した時点より前に取得したクエリ特徴量と、あらかじめマスタ記憶部に登録されている前記識別情報に関連付けられたマスタ特徴量とを用いて第一の照合をする、第一の照合部と、
 を有することを特徴とする顔認証装置。
(Appendix 1)
 A detection unit that detects, using an image of a user captured in a photographing area, a face image corresponding to a face from the image;
 an extraction unit that extracts a feature amount using the detected face image; and
 a first collation unit that, when identification information identifying a user is acquired, performs a first collation using a query feature amount acquired before the time when the identification information was acquired and a master feature amount associated with the identification information registered in advance in a master storage unit.
 A face authentication device characterized by comprising the above units.
(付記2)
 付記1に記載の顔認証装置であって、
 前記第一の照合に失敗した場合、前記クエリ特徴量に対応する前記画像を撮像した時点より前に撮像された一つ以上の画像に対応する仮マスタ候補特徴量と、前記マスタ特徴量とを用いて第二の照合をする、第二の照合部と、
 前記第二の照合の結果に基づいて、前記仮マスタ候補特徴量から仮マスタ特徴量を決定する、決定部と
 を有することを特徴とする顔認証装置。
(Appendix 2)
 The face authentication device according to Appendix 1, further comprising:
 a second collation unit that, when the first collation fails, performs a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the image corresponding to the query feature amount was captured; and
 a determination unit that determines a provisional master feature amount from the provisional master candidate feature amounts based on a result of the second collation.
(付記3)
 付記2に記載の顔認証装置であって、
 前記仮マスタ特徴量が決定した場合、前記クエリ特徴量と前記仮マスタ特徴量とを用いて第三の照合をする、第三の照合部と
 を有することを特徴とする顔認証装置。
(Appendix 3)
The face recognition device described in Appendix 2,
A face recognition device having a third collation unit that performs a third collation using the query feature amount and the tentative master feature amount when the tentative master feature amount is determined.
(付記4)
 付記3に記載の顔認証装置であって、
 前記第三の照合部は、前記第三の照合に成功した場合、前記利用者に通行を許可することを表す通行情報を、通行許可装置へ送信する
 ことを特徴とする顔認証装置。
(Appendix 4)
The face recognition device described in Appendix 3,
The third collation unit is a face recognition device, characterized in that, when the third collation is successful, the third collation unit transmits traffic information indicating that the user is permitted to pass to the pass permission device.
(付記5)
 撮影エリアにいる利用者を撮像した画像を用いて、前記画像から顔に対応する顔画像を検出する、検出ステップと、
 検出した前記顔画像を用いて特徴量を抽出する、抽出ステップと、
 利用者を識別する識別情報を取得した場合、前記識別情報を取得した時点より前に取得したクエリ特徴量と、あらかじめマスタ記憶部に登録されている前記識別情報に関連付けられたマスタ特徴量とを用いて第一の照合をする、第一の照合ステップと、
 を有することを特徴とする顔認証方法。
(Appendix 5)
A detection step of detecting a face image corresponding to a face from the image using an image of a user in the shooting area.
An extraction step of extracting a feature amount using the detected face image, and
 A first collation step of, when identification information identifying a user is acquired, performing a first collation using a query feature amount acquired before the time when the identification information was acquired and a master feature amount associated with the identification information registered in advance in a master storage unit, and
A face authentication method characterized by having.
(付記6)
 付記5に記載の顔認証方法であって、
 前記第一の照合に失敗した場合、前記クエリ特徴量に対応する前記画像を撮像した時点より前に撮像された一つ以上の画像に対応する仮マスタ候補特徴量と、前記マスタ特徴量とを用いて第二の照合をする、第二の照合ステップと、
 前記第二の照合の結果に基づいて、前記仮マスタ候補特徴量から仮マスタ特徴量を決定する、決定ステップと、
 を有することを特徴とする顔認証方法。
(Appendix 6)
The face recognition method described in Appendix 5
 A second collation step of, when the first collation fails, performing a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the image corresponding to the query feature amount was captured, and
A determination step in which the temporary master feature amount is determined from the temporary master candidate feature amount based on the result of the second collation, and
A face authentication method characterized by having.
(付記7)
 付記6に記載の顔認証方法であって、
 前記仮マスタ特徴量が決定した場合、前記クエリ特徴量と前記仮マスタ特徴量とを用いて第三の照合をする、第三の照合ステップと、
 を有することを特徴とする顔認証方法。
(Appendix 7)
The face recognition method described in Appendix 6
When the tentative master feature amount is determined, a third collation step of performing a third collation using the query feature amount and the tentative master feature amount, and
A face authentication method characterized by having.
(付記8)
 付記7に記載の顔認証方法であって、
 前記第三の照合ステップにおいて、前記第三の照合に成功した場合、前記利用者に通行を許可することを表す通行情報を、通行許可装置へ送信する
 を有することを特徴とする顔認証方法。
(Appendix 8)
The face recognition method described in Appendix 7
 A face authentication method characterized in that, in the third collation step, when the third collation is successful, passage information indicating that the user is permitted to pass is transmitted to the passage permission device.
(付記9)
 コンピュータに
 撮影エリアにいる利用者を撮像した画像を用いて、前記画像から顔に対応する顔画像を検出する、検出ステップと、
 検出した前記顔画像を用いて特徴量を抽出する、抽出ステップと、
 利用者を識別する識別情報を取得した場合、前記識別情報を取得した時点より前に取得したクエリ特徴量と、あらかじめマスタ記憶部に登録されている前記識別情報に関連付けられたマスタ特徴量とを用いて第一の照合をする、第一の照合ステップと、
 を実行させる命令を含む、プログラムを記録したコンピュータ読み取り可能な記録媒体。
(Appendix 9)
A detection step of detecting a face image corresponding to a face from the image using an image of a user in the shooting area on a computer.
An extraction step of extracting a feature amount using the detected face image, and
 A first collation step of, when identification information identifying a user is acquired, performing a first collation using a query feature amount acquired before the time when the identification information was acquired and a master feature amount associated with the identification information registered in advance in a master storage unit,
A computer-readable recording medium on which a program is recorded, including instructions to execute.
(付記10)
 付記9に記載のコンピュータ読み取り可能な記録媒体であって、
 前記プログラムが、前記コンピュータに
 前記第一の照合に失敗した場合、前記クエリ特徴量に対応する前記画像を撮像した時点より前に撮像された一つ以上の画像に対応する仮マスタ候補特徴量と、前記マスタ特徴量とを用いて第二の照合をする、第二の照合ステップと、
 前記第二の照合の結果に基づいて、前記仮マスタ候補特徴量から仮マスタ特徴量を決定する、決定ステップと、
 を実行させる命令を更に含む、プログラムを記録しているコンピュータ読み取り可能な記録媒体。
(Appendix 10)
The computer-readable recording medium according to Appendix 9, which is a computer-readable recording medium.
 The program further includes instructions that cause the computer to execute:
 a second collation step of, when the first collation fails, performing a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the image corresponding to the query feature amount was captured; and
 a determination step of determining a provisional master feature amount from the provisional master candidate feature amounts based on a result of the second collation.
 A computer-readable recording medium recording the program.
(付記11)
 付記10に記載のコンピュータ読み取り可能な記録媒体であって、
 前記プログラムが、前記コンピュータに
 前記仮マスタ特徴量が決定した場合、前記クエリ特徴量と前記仮マスタ特徴量とを用いて第三の照合をする、第三の照合ステップと、
 を実行させる命令を更に含む、プログラムを記録しているコンピュータ読み取り可能な記録媒体。
(Appendix 11)
The computer-readable recording medium according to Appendix 10.
 The program further includes instructions that cause the computer to execute a third collation step of, when the provisional master feature amount is determined, performing a third collation using the query feature amount and the provisional master feature amount.
 A computer-readable recording medium recording the program.
(付記12)
 付記11に記載のコンピュータ読み取り可能な記録媒体であって、
前記第三の照合ステップにおいて、前記第三の照合に成功した場合、前記利用者に通行の許可を通知することを表す通行情報を、通行許可装置へ送信する
 ことを特徴とするコンピュータ読み取り可能な記録媒体。
(Appendix 12)
 The computer-readable recording medium according to Appendix 11,
 wherein, in the third collation step, when the third collation is successful, passage information indicating that the user is notified of permission to pass is transmitted to the passage permission device.
 以上、実施の形態を参照して本願発明を説明したが、本願発明は上記実施の形態に限定されるものではない。本願発明の構成や詳細には、本願発明のスコープ内で当業者が理解し得る様々な変更をすることができる。 Although the invention of the present application has been described above with reference to the embodiment, the invention of the present application is not limited to the above embodiment. Various changes that can be understood by those skilled in the art can be made within the scope of the present invention in terms of the structure and details of the present invention.
 この出願は、2019年9月24日に出願された日本出願特願2019-173385を基礎とする優先権を主張し、その開示の全てをここに取り込む。 This application claims priority based on Japanese application Japanese Patent Application No. 2019-173385 filed on September 24, 2019, and incorporates all of its disclosures herein.
 以上のように本発明によれば、なりすまし行為を防止することができる。本発明は、ウォークスルー顔認証が必要な分野において有用である。 As described above, according to the present invention, spoofing can be prevented. The present invention is useful in fields where walk-through face recognition is required.
  1 顔認証装置
  2 検出部
  3 抽出部
  4 照合部(第一の照合部)
  5 照合部(第二の照合部)
  6 決定部
  7 照合部(第三の照合部)
 21、21a、21b 撮像装置
 22 識別装置
 23 記憶装置
 24 通行許可装置
110 コンピュータ
111 CPU
112 メインメモリ
113 記憶装置
114 入力インターフェイス
115 表示コントローラ
116 データリーダ/ライタ
117 通信インターフェイス
118 入力機器
119 ディスプレイ装置
120 記録媒体
121 バス 
1 Face authentication device
2 Detection unit
3 Extraction unit
4 Collation unit (first collation unit)
5 Collation unit (second collation unit)
6 Determination unit
7 Collation unit (third collation unit)
21, 21a, 21b Imaging device
22 Identification device
23 Storage device
24 Passage permission device
110 Computer
111 CPU
112 Main memory
113 Storage device
114 Input interface
115 Display controller
116 Data reader/writer
117 Communication interface
118 Input device
119 Display device
120 Recording medium
121 Bus

Claims (12)

  1.  撮影エリアにいる利用者を撮像した画像を用いて、前記画像から顔に対応する顔画像を検出する、検出手段と、
     検出した前記顔画像を用いて特徴量を抽出する、抽出手段と、
     利用者を識別する識別情報を取得した場合、前記識別情報を取得した時点より前に取得したクエリ特徴量と、あらかじめマスタ記憶部に登録されている前記識別情報に関連付けられたマスタ特徴量とを用いて第一の照合をする、第一の照合手段と、
     を有することを特徴とする顔認証装置。
    A detection means that detects, using an image of a user captured in a photographing area, a face image corresponding to a face from the image;
    an extraction means that extracts a feature amount using the detected face image; and
    a first collation means that, when identification information identifying a user is acquired, performs a first collation using a query feature amount acquired before the time when the identification information was acquired and a master feature amount associated with the identification information registered in advance in a master storage unit.
    A face authentication device characterized by comprising the above means.
  2.  請求項1に記載の顔認証装置であって、
     前記第一の照合に失敗した場合、前記クエリ特徴量に対応する前記画像を撮像した時点より前に撮像された一つ以上の画像に対応する仮マスタ候補特徴量と、前記マスタ特徴量とを用いて第二の照合をする、第二の照合手段と、
     前記第二の照合の結果に基づいて、前記仮マスタ候補特徴量から仮マスタ特徴量を決定する、決定手段と
     を有することを特徴とする顔認証装置。
    The face recognition device according to claim 1.
    A second collation means that, when the first collation fails, performs a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the image corresponding to the query feature amount was captured, and
    A face recognition device having a determination means for determining a temporary master feature amount from the temporary master candidate feature amount based on the result of the second collation.
  3.  請求項2に記載の顔認証装置であって、
     前記仮マスタ特徴量が決定した場合、前記クエリ特徴量と前記仮マスタ特徴量とを用いて第三の照合をする、第三の照合手段と
     を有することを特徴とする顔認証装置。
    The face recognition device according to claim 2.
    A face recognition device having a third collation means that performs a third collation using the query feature amount and the tentative master feature amount when the tentative master feature amount is determined.
  4.  請求項3に記載の顔認証装置であって、
     前記第三の照合手段は、前記第三の照合に成功した場合、前記利用者に通行を許可することを表す通行情報を、通行許可装置へ送信する
     ことを特徴とする顔認証装置。
    The face recognition device according to claim 3.
    The third collation means is a face recognition device, which, when the third collation is successful, transmits traffic information indicating that the user is permitted to pass to the pass permission device.
  5.  撮影エリアにいる利用者を撮像した画像を用いて、前記画像から顔に対応する顔画像を検出し、
     検出した前記顔画像を用いて特徴量を抽出し、
     利用者を識別する識別情報を取得した場合、前記識別情報を取得した時点より前に取得したクエリ特徴量と、あらかじめマスタ記憶部に登録されている前記識別情報に関連付けられたマスタ特徴量とを用いて第一の照合をする
     ことを特徴とする顔認証方法。
    Using the image of the user in the shooting area, the face image corresponding to the face is detected from the image, and the face image is detected.
    The feature amount is extracted using the detected face image, and the feature amount is extracted.
    When identification information identifying a user is acquired, a first collation is performed using a query feature amount acquired before the time when the identification information was acquired and a master feature amount associated with the identification information registered in advance in a master storage unit. A face authentication method characterized by performing the first collation in this manner.
  6.  請求項5に記載の顔認証方法であって、
     前記第一の照合に失敗した場合、前記クエリ特徴量に対応する前記画像を撮像した時点より前に撮像された一つ以上の画像に対応する仮マスタ候補特徴量と、前記マスタ特徴量とを用いて第二の照合をし、
     前記第二の照合の結果に基づいて、前記仮マスタ候補特徴量から仮マスタ特徴量を決定する
     ことを特徴とする顔認証方法。
    The face recognition method according to claim 5.
    When the first collation fails, a second collation is performed using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the image corresponding to the query feature amount was captured, and
    A face authentication method characterized in that a temporary master feature amount is determined from the temporary master candidate feature amount based on the result of the second collation.
  7.  請求項6に記載の顔認証方法であって、
     前記仮マスタ特徴量が決定した場合、前記クエリ特徴量と前記仮マスタ特徴量とを用いて第三の照合をする
     ことを特徴とする顔認証方法。
    The face recognition method according to claim 6.
    A face recognition method characterized in that when the temporary master feature amount is determined, a third collation is performed using the query feature amount and the temporary master feature amount.
  8.  請求項7に記載の顔認証方法であって、
    前記第三の照合に成功した場合、前記利用者に通行を許可することを表す通行情報を、通行許可装置へ送信する
     ことを特徴とする顔認証方法。
    The face recognition method according to claim 7.
    A face recognition method characterized in that, when the third collation is successful, the traffic information indicating that the user is permitted to pass is transmitted to the passage permission device.
  9.  コンピュータに
     撮影エリアにいる利用者を撮像した画像を用いて、前記画像から顔に対応する顔画像を検出し、
     検出した前記顔画像を用いて特徴量を抽出し、
     利用者を識別する識別情報を取得した場合、前記識別情報を取得した時点より前に取得したクエリ特徴量と、あらかじめマスタ記憶部に登録されている前記識別情報に関連付けられたマスタ特徴量とを用いて第一の照合をする
     処理を実行させる命令を含む、プログラムを記録したコンピュータ読み取り可能な記録媒体。
    Using an image of a user in the shooting area on a computer, a face image corresponding to the face is detected from the image, and the face image is detected.
    The feature amount is extracted using the detected face image, and the feature amount is extracted.
    When identification information identifying a user is acquired, a first collation is performed using a query feature amount acquired before the time when the identification information was acquired and a master feature amount associated with the identification information registered in advance in a master storage unit. A computer-readable recording medium recording a program including instructions for causing the computer to execute the above processing.
  10.  請求項9に記載のコンピュータ読み取り可能な記録媒体であって、
     前記プログラムが、前記コンピュータに
     前記第一の照合に失敗した場合、前記クエリ特徴量に対応する前記画像を撮像した時点より前に撮像された一つ以上の画像に対応する仮マスタ候補特徴量と、前記マスタ特徴量とを用いて第二の照合をし、
     前記第二の照合の結果に基づいて、前記仮マスタ候補特徴量から仮マスタ特徴量を決定する
     処理を実行させる命令を含む、プログラムを記録したコンピュータ読み取り可能な記録媒体。
    The computer-readable recording medium according to claim 9.
    wherein, when the first collation fails, the program causes the computer to perform a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the image corresponding to the query feature amount was captured, and
    A computer-readable recording medium on which a program is recorded, which includes an instruction for executing a process of determining a temporary master feature amount from the temporary master candidate feature amount based on the result of the second collation.
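The fallback step in claim 10 can be sketched as below. Again this is only an illustrative sketch under assumptions: the similarity metric, the threshold, and the rule of selecting the single best-scoring candidate are choices made here, not taken from the patent.

```python
import math


def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


def second_collation(candidate_features, master_feature, threshold=0.6):
    # Collate each temporary master candidate feature amount (extracted from
    # images captured before the query image) against the master feature
    # amount, and return the best-matching candidate that clears the
    # threshold as the temporary master feature amount, or None otherwise.
    temporary_master = None
    best_score = threshold
    for candidate in candidate_features:
        score = cosine_similarity(candidate, master_feature)
        if score >= best_score:
            temporary_master = candidate
            best_score = score
    return temporary_master
```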
  11.  The computer-readable recording medium according to claim 10, wherein the program further includes instructions that cause the computer to execute processing of, when the temporary master feature amount is determined, performing a third collation using the query feature amount and the temporary master feature amount.
  12.  The computer-readable recording medium according to claim 11, wherein the program further includes instructions that cause the computer to execute processing of, when the third collation succeeds, transmitting passage information indicating that the user is permitted to pass to a passage permission device.
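The final step in claims 11 and 12 can be sketched as follows. This is a hedged illustration only: the similarity metric, the threshold, the callback-based transmission to the passage permission device, and the payload format are all assumptions made for the sketch.

```python
import math


def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


def third_collation(query_feature, temporary_master, send_passage_info, threshold=0.6):
    # When a temporary master feature amount has been determined, collate it
    # against the query feature amount; on success, transmit passage
    # information to the passage permission device via the supplied callback.
    if temporary_master is None:
        return False
    if cosine_similarity(query_feature, temporary_master) >= threshold:
        send_passage_info({"permitted": True})
        return True
    return False
```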
PCT/JP2020/035737 2019-09-24 2020-09-23 Facial authentication device, facial authentication method, and computer-readable recording medium WO2021060256A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021548920A JP7248348B2 (en) 2019-09-24 2020-09-23 Face authentication device, face authentication method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019173385 2019-09-24
JP2019-173385 2019-09-24

Publications (1)

Publication Number Publication Date
WO2021060256A1 true WO2021060256A1 (en) 2021-04-01

Family

ID=75166150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/035737 WO2021060256A1 (en) 2019-09-24 2020-09-23 Facial authentication device, facial authentication method, and computer-readable recording medium

Country Status (2)

Country Link
JP (1) JP7248348B2 (en)
WO (1) WO2021060256A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006236260A (en) * 2005-02-28 2006-09-07 Toshiba Corp Face authentication device, face authentication method, and entrance/exit management device
JP2008052549A (en) * 2006-08-25 2008-03-06 Hitachi Kokusai Electric Inc Image processing system
JP2009104599A (en) * 2007-10-04 2009-05-14 Toshiba Corp Face authenticating apparatus, face authenticating method and face authenticating system
JP2013210824A (en) * 2012-03-30 2013-10-10 Secom Co Ltd Face image authentication device
JP2018128970A (en) * 2017-02-10 2018-08-16 株式会社テイパーズ Non-stop face authentication system


Also Published As

Publication number Publication date
JP7248348B2 (en) 2023-03-29
JPWO2021060256A1 (en) 2021-04-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20867385; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021548920; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20867385; Country of ref document: EP; Kind code of ref document: A1)