WO2021060256A1 - Face authentication device, face authentication method, and computer-readable recording medium - Google Patents

Face authentication device, face authentication method, and computer-readable recording medium

Info

Publication number
WO2021060256A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature amount
collation
master
face
image
Prior art date
Application number
PCT/JP2020/035737
Other languages
English (en)
Japanese (ja)
Inventor
直紀 徳永
享 半田
雄吾 西山
Original Assignee
NEC Solution Innovators, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Solution Innovators, Ltd.
Priority to JP2021548920A (JP7248348B2)
Publication of WO2021060256A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 - Individual registration on entry or exit

Definitions

  • The present invention relates to a face recognition device and a face recognition method for face recognition, and further to a computer-readable recording medium on which a program for realizing these is recorded.
  • Walk-through face recognition is known as an authentication method for verifying identity in entrance/exit management.
  • In walk-through face recognition, identity is confirmed by comparing the face image of a user moving toward the entrance/exit gate with a face image captured in advance.
  • Patent Document 1 discloses a face recognition system that authenticates a user passing through an authentication area and determines whether or not the user may pass. In that system, the user's face image extracted from an input image is collated with a registered face image. When authentication succeeds and the area representing the authenticated user in the input image is larger than a predetermined size, the user is allowed to pass.
  • In such a system, however, face images of the person moving toward the entrance/exit gate are captured, the captured face images are collated with the registered face image, and passage is permitted if collation succeeds for any one of the images. Spoofing is therefore possible: because the gate opens whenever any single captured image matches, another person can impersonate the registered user and pass through the entrance/exit gate.
  • An example of an object of the present invention is to provide a face recognition device, a face recognition method, and a computer-readable recording medium for preventing spoofing.
  • The face recognition device in one aspect of the present invention includes: a detection unit that detects a face image corresponding to a face from an image of a user captured in a shooting area; an extraction unit that extracts a feature amount using the detected face image; and a first collation unit that, when identification information identifying the user is acquired, performs a first collation using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount registered in advance in a master storage unit in association with the identification information.
  • Similarly, the face authentication method in one aspect of the present invention includes: a detection step of detecting a face image corresponding to a face from an image of a user captured in the shooting area; an extraction step of extracting a feature amount using the detected face image; and a first collation step of, when identification information identifying the user is acquired, performing a first collation using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount registered in advance in the master storage unit in association with the identification information.
  • Further, a computer-readable recording medium in one aspect of the present invention records a program including instructions that cause a computer to execute the detection step, the extraction step, and the first collation step described above.
  • FIG. 1 is a diagram for explaining an example of a face recognition device.
  • FIG. 2 is a diagram for explaining an example of a system having a face recognition device.
  • FIG. 3 is a diagram for explaining walk-through face recognition.
  • FIG. 4 is a diagram for explaining collation (first collation).
  • FIG. 5 is a diagram for explaining collation (second collation).
  • FIG. 6 is a diagram for explaining the determination of the provisional master feature amount.
  • FIG. 7 is a diagram for explaining collation (third collation).
  • FIG. 8 is a diagram for explaining an example of the operation of the face recognition device.
  • FIG. 9 is a block diagram showing an example of a computer that realizes a face recognition device.
  • The face recognition device shown in FIG. 1 is a device that prevents spoofing. As shown in FIG. 1, the face recognition device 1 has a detection unit 2, an extraction unit 3, and a collation unit 4 (first collation unit).
  • The detection unit 2 detects a face image corresponding to the face from an image of the user captured in the shooting area.
  • The extraction unit 3 extracts a feature amount using the detected face image.
  • When the collation unit 4 acquires identification information identifying the user, it performs collation (first collation) using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount registered in advance in the master storage unit in association with the identification information.
  • The shooting area is, for example, an area in which one or more imaging devices capture a user moving toward the entrance/exit gate in walk-through face recognition.
  • The shooting area is adjusted using, for example, the distance between the eyes in the face image.
  • The identification information is, for example, information identifying the user that is read by an ID reader from an ID card possessed by the user.
  • The query feature amount is, for example, a feature amount extracted from one face image captured immediately before the face recognition device 1 acquires the identification information, or from one face image captured at a preset time before the time at which the identification information is acquired.
  • The set time is preferably within one second, for example. The query feature amount may instead be extracted from one face image captured immediately after the identification information is acquired.
  • The master storage unit is a storage device that stores information in which the master feature amount extracted from the user's face image registered in advance is associated with the user's identification information. Such information is registered in the master storage unit in advance, for example, when the user purchases a ticket.
  • In the collation, a collation score is calculated using the query feature amount and the master feature amount, and the collation score is compared with a threshold value stored in advance.
  • The threshold value is determined by, for example, an experiment or a simulation.
  • The collation score is a measure of the similarity of the compared feature amounts, computed, for example, by a machine-learning classifier.
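As an illustrative sketch only (the patent does not specify a particular score function), the collation described above can be expressed as a similarity score between two feature vectors compared with a threshold; cosine similarity mapped to [0, 1] is assumed here:

```python
import numpy as np

def collation_score(query_fv, master_fv):
    # Similarity between two feature vectors, mapped from [-1, 1] to [0, 1].
    cos = float(np.dot(query_fv, master_fv)
                / (np.linalg.norm(query_fv) * np.linalg.norm(master_fv)))
    return (cos + 1.0) / 2.0

def collate(query_fv, master_fv, threshold=0.50):
    # Collation succeeds when the score is equal to or higher than the threshold.
    return collation_score(query_fv, master_fv) >= threshold

query = np.array([0.9, 0.1, 0.4])
master = np.array([0.8, 0.2, 0.5])
print(collate(query, master))  # similar vectors -> True
```

The same score-versus-threshold comparison is reused by the second and third collations described later, only with different inputs and threshold values.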
  • In this way, collation uses the query feature amount acquired before the time at which the identification information was acquired, rather than whatever face images happen to be captured as a person approaches the gate. Spoofing can therefore be prevented.
  • FIG. 2 is a diagram for explaining an example of a system having a face recognition device.
  • FIG. 3 is a diagram for explaining walk-through face recognition.
  • The system 20 in the present embodiment includes, in addition to the face recognition device 1, one or more imaging devices 21 (21a, 21b), an identification device 22, a storage device 23, and a passage permission device 24.
  • The face recognition device 1 has a collation unit 5 (second collation unit), a determination unit 6, and a collation unit 7 (third collation unit) in addition to the detection unit 2, the extraction unit 3, and the collation unit 4.
  • The face recognition device 1 may be realized using an information processing device such as a server computer or a personal computer. The face recognition device 1 may also be provided inside the identification device 22 or the passage permission device 24.
  • The image pickup device 21 transmits captured images to the face recognition device 1. Specifically, the image pickup apparatus 21 images a subject in a preset shooting area. In the example of FIG. 3, the person 30 is imaged at preset intervals in the shooting areas (areas A1 and A2).
  • Each of the image pickup devices 21a and 21b captures the person 30 and transmits the captured image to the face recognition device 1.
  • The image pickup device 21 may be, for example, a camera.
  • The person 30 holds the ID card 31 over the identification device 22 so that the identification information attached to the ID card 31 can be read.
  • In this example the ID card 31 is used, but identification information displayed on a smartphone or the like may be read instead.
  • The area A1 shown in FIG. 3 is an area for acquiring an image of the person 30 immediately before or immediately after the person 30 causes the identification device 22 to read the identification information, or at a preset time before the time at which the identification device 22 reads the identification information.
  • The area A2 shown in FIG. 3 is an area for collecting images of the person 30.
  • The identification device 22 is, for example, an ID reader that reads identification information for identifying a user from the ID card 31 possessed by the person 30.
  • The ID card 31 may be, for example, a ticket or a terminal device such as a smartphone.
  • The identification device 22 reads the identification information from a machine-readable display (for example, a two-dimensional code) shown on the ticket, the smartphone, or the like. The identification information may also be read from an IC chip provided on the ID card 31.
  • The storage device 23 stores the master feature amount extracted from the user's face image in association with the user's identification information. Specifically, the storage device 23 stores information in which the master feature amount extracted from the user's face image registered in advance is associated with the user's identification information.
  • The storage device 23 is, for example, a database.
  • The storage device 23 may be provided inside or outside the face recognition device 1.
  • The passage permission device 24 is a device that permits the person 30 to pass. Specifically, the passage permission device 24 determines whether or not to allow the person 30 to pass based on the content of the passage information received from the face recognition device 1. When the passage permission device 24 is a gate device, it opens a door or the like provided in the gate device when the person 30 is allowed to pass.
  • The passage permission device 24 may notify the person 30 that passage is permitted using voice, an image, or the like. The passage permission device 24 may also be provided inside the face recognition device 1.
  • The detection unit 2 detects a region including a face image from the captured image. Specifically, the detection unit 2 first acquires a plurality of images of the person 30 captured by the imaging device 21 in the shooting area. Subsequently, the detection unit 2 detects a face image having a face region corresponding to the face from each of the plurality of captured images.
  • In face detection, rectangular regions are cut out in order from the edge of the captured image, and pattern recognition is used to determine whether each rectangle contains a face. Pattern-recognition methods include support vector machines, neural networks, and generalized learning vector quantization.
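The rectangle enumeration described above can be sketched as a simple sliding window (illustrative only; the window size, stride, and the face/non-face classifier itself are assumptions and are omitted here):

```python
def sliding_windows(image_w, image_h, win, stride):
    # Enumerate origins of square rectangles cut out in order from the
    # edge of the image, row by row, as in the face detection above.
    return [(x, y)
            for y in range(0, image_h - win + 1, stride)
            for x in range(0, image_w - win + 1, stride)]

# Each rectangle would then be passed to a face/non-face classifier
# (e.g. an SVM or a neural network).
print(len(sliding_windows(8, 8, 4, 2)))  # 9 windows on a toy 8x8 image
```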
  • The extraction unit 3 extracts facial feature amounts using the detected face images. Specifically, the extraction unit 3 first acquires a plurality of face images from the detection unit 2. Subsequently, the extraction unit 3 extracts a facial feature amount for each face image.
  • Feature point information such as the eyes, nose, and corners of the mouth is extracted from each detected face image.
  • Common methods include gradient histograms, support vector machines, neural networks, and optimization and regression using face shape models.
  • When the collation unit 4 acquires identification information identifying the user, it performs collation (first collation) using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount registered in advance in the storage device 23 in association with the identification information.
  • Specifically, the collation unit 4 first acquires, as the query feature amount, one feature amount corresponding to a face image captured in area A1: for example, the face image captured immediately before the identification information is acquired, a face image captured at a preset time before the identification information is acquired, or the face image captured immediately after the identification information is acquired.
  • Next, the collation unit 4 acquires the master feature amount from the storage device 23 based on the identification information. The collation unit 4 then generates a collation score (first collation score) using the acquired query feature amount and master feature amount, compares the collation score with the threshold value (first collation threshold), makes a collation determination (first collation determination), and obtains the collation result (the result of face verification in face recognition). For example, if the collation score is equal to or higher than the threshold value, the collation is successful.
  • If the collation is successful, the collation unit 4 transmits passage information indicating that passage is permitted to the passage permission device 24.
  • If the collation fails, the collation unit 4 transmits passage information indicating that passage is not permitted to the collation unit 5.
  • FIG. 4 is a diagram for explaining collation (first collation).
  • In the example of FIG. 4, if the collation score is equal to or higher than the first collation threshold, the collation is successful, and passage information indicating that passage is permitted is transmitted to the passage permission device 24. In FIG. 4, the collation score is 0.40 and the threshold value is 0.50, so the collation has failed, and passage information indicating that passage is not permitted is transmitted to the collation unit 5.
  • When the first collation fails, the collation unit 5 performs collation (second collation) using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the time at which the image corresponding to the query feature amount was captured.
  • Specifically, the collation unit 5 first acquires, from the collation unit 4, passage information indicating that the collation has failed. Subsequently, the collation unit 5 acquires the provisional master candidate feature amounts, which are feature amounts extracted by the extraction unit 3 using the images of the person 30 captured in area A2.
  • Next, the collation unit 5 generates a collation score (second collation score) for each provisional master candidate feature amount using that feature amount and the master feature amount. Subsequently, the collation unit 5 compares the collation score calculated for each provisional master candidate feature amount with the threshold value (second collation threshold), makes a collation determination (second collation determination), and acquires the collation result.
  • The second collation threshold is determined by, for example, an experiment or a simulation.
  • FIG. 5 is a diagram for explaining collation (second collation).
  • In the example of FIG. 5, the collation scores (second collation scores) between the provisional master candidate feature amounts and the master feature amount are 0.75, 0.40, 0.30, and 0.50, respectively. Each score is compared with the threshold value (second collation threshold) of 0.50, and the collation scores equal to or higher than the threshold are detected.
  • The determination unit 6 determines the provisional master feature amounts from the provisional master candidate feature amounts based on the result of the collation (second collation) by the collation unit 5. Specifically, the determination unit 6 first acquires the collation result from the collation unit 5. Subsequently, the determination unit 6 selects the provisional master candidate feature amounts whose collation scores are equal to or higher than the threshold, determines them as provisional master feature amounts, and stores them.
  • FIG. 6 is a diagram for explaining the determination of the provisional master feature amount.
  • In the example of FIG. 6, the feature amounts FV1 and FV4, whose collation scores (second collation scores) are equal to or higher than the threshold of 0.50, are selected from the provisional master candidate feature amounts and determined as provisional master feature amounts.
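The selection in FIG. 6 can be sketched as follows (an illustrative example, not the patent's implementation; the candidate labels and the scores are taken from the FIG. 5/6 example):

```python
def select_provisional_masters(candidates, scores, threshold=0.50):
    # Keep every provisional master candidate whose second collation score
    # against the master feature amount is equal to or higher than the threshold.
    return [fv for fv, s in zip(candidates, scores) if s >= threshold]

# Scores from the FIG. 5 example: candidates FV1..FV4 vs the master feature amount.
candidates = ["FV1", "FV2", "FV3", "FV4"]
scores = [0.75, 0.40, 0.30, 0.50]
print(select_provisional_masters(candidates, scores))  # ['FV1', 'FV4']
```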
  • The collation unit 7 performs collation (third collation) using the query feature amount and the provisional master feature amounts. Specifically, the collation unit 7 first generates a collation score (third collation score) using the query feature amount and each provisional master feature amount. Subsequently, the collation unit 7 compares the collation score with the threshold value (third collation threshold), makes a collation determination (third collation determination), and obtains the collation result (the result of face verification in face recognition).
  • The third collation threshold is determined by, for example, an experiment or a simulation.
  • If the collation is successful, the collation unit 7 transmits passage information indicating that passage is permitted to the passage permission device 24. If the collation fails, the collation unit 7 transmits passage information indicating that passage is not permitted to the passage permission device 24.
  • FIG. 7 is a diagram for explaining collation (third collation).
  • In the example of FIG. 7, the collation scores (third collation scores) between the query feature amount QFV and the provisional master feature amounts FV1 and FV4 are 0.85 and 0.75. Since both generated scores are equal to or higher than the threshold of 0.50, it is determined that face verification has succeeded. The collation unit 7 then transmits passage information indicating that passage is permitted to the passage permission device 24.
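A minimal sketch of the third collation decision, using the scores from the FIG. 7 example (illustrative only; how the scores are computed is left to the similarity function assumed earlier):

```python
def third_collation(query_scores, threshold=0.50):
    # Face verification succeeds if the query feature amount matches any
    # provisional master feature amount at or above the threshold.
    return any(s >= threshold for s in query_scores)

# FIG. 7 example: scores of query QFV against provisional masters FV1 and FV4.
print(third_collation([0.85, 0.75]))  # True -> passage permitted
```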
  • FIG. 8 is a diagram for explaining an example of the operation of the face recognition device.
  • In the following description, FIGS. 2 to 7 are referred to as appropriate.
  • In the present embodiment, the face recognition method is implemented by operating the face recognition device. Therefore, the description of the face recognition method in the present embodiment is replaced by the following description of the operation of the face recognition device.
  • First, the detection unit 2 detects a region including a face image from the captured image (step A1). Specifically, in step A1, the detection unit 2 first acquires a plurality of images of the person 30 captured by the imaging device 21 in the shooting area (areas A1 and A2). Subsequently, in step A1, the detection unit 2 detects a face image having a face region corresponding to the face from each of the plurality of captured images.
  • Next, the extraction unit 3 extracts a facial feature amount for each detected face image (step A2). Specifically, the extraction unit 3 first acquires a plurality of face images from the detection unit 2 and then extracts a facial feature amount for each face image.
  • When the collation unit 4 acquires identification information identifying the user (step A3: Yes), it performs collation (first collation) using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount registered in advance in the storage device 23 in association with the identification information (step A4). If the identification information has not been acquired (step A3: No), the process returns to step A1 and continues.
  • In step A4, the collation unit 4 first acquires, as the query feature amount, one feature amount corresponding to a face image captured in area A1: for example, the face image captured immediately before the identification information is acquired, a face image captured at a preset time before the identification information is acquired, or the face image captured immediately after the identification information is acquired.
  • Next, in step A4, the collation unit 4 acquires the master feature amount from the storage device 23 based on the identification information, generates a collation score (first collation score) using the acquired query feature amount and master feature amount, compares the collation score with the threshold value (first collation threshold), makes a collation determination (first collation determination), and obtains the collation result (the result of face verification in face recognition). For example, if the collation score is equal to or higher than the threshold value, the collation is successful.
  • When the collation is successful (step A5: No), the collation unit 4 transmits passage information indicating that passage is permitted to the passage permission device 24 (step A10).
  • When the collation fails (step A5: Yes), the collation unit 4 transmits passage information indicating that passage is not permitted to the collation unit 5.
  • When the collation (first collation) fails, the collation unit 5 performs collation (second collation) using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the time at which the image corresponding to the query feature amount was captured (step A6).
  • In step A6, the collation unit 5 first acquires, from the collation unit 4, passage information indicating that the collation has failed. Subsequently, in step A6, the collation unit 5 acquires the provisional master candidate feature amounts, which are feature amounts extracted by the extraction unit 3 using the images of the person 30 captured in area A2.
  • Next, in step A6, the collation unit 5 generates a collation score (second collation score) for each provisional master candidate feature amount using that feature amount and the master feature amount, compares each collation score with the threshold value (second collation threshold), makes a collation determination (second collation determination), and obtains the collation result.
  • The second collation threshold is determined by, for example, an experiment or a simulation.
  • Next, the determination unit 6 determines the provisional master feature amounts from the provisional master candidate feature amounts based on the result of the collation (second collation) by the collation unit 5 (step A7). Specifically, in step A7, the determination unit 6 first acquires the collation result from the collation unit 5, then selects the provisional master candidate feature amounts whose collation scores are equal to or higher than the threshold, determines them as provisional master feature amounts, and stores them.
  • Next, the collation unit 7 performs collation (third collation) using the query feature amount and the provisional master feature amounts (step A8). Specifically, in step A8, the collation unit 7 first generates a collation score (third collation score) using the query feature amount and each provisional master feature amount, then compares the collation score with the threshold value (third collation threshold), makes a collation determination (third collation determination), and obtains the collation result (the result of face verification in face recognition).
  • The third collation threshold is determined by, for example, an experiment or a simulation.
  • When the collation in step A8 is successful (step A9: Yes), the collation unit 7 transmits passage information indicating that passage is permitted to the passage permission device 24 (step A10). When the collation fails (step A9: No), the collation unit 7 transmits passage information indicating that passage is not permitted to the passage permission device 24 (step A11).
  • Walk-through face recognition is realized by repeating steps A1 to A11 described above.
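The overall flow of steps A1 to A11 can be sketched as follows; the function and variable names are illustrative, not from the patent, and `score` stands in for whatever similarity function an implementation uses:

```python
def walk_through_authentication(query_fv, master_fv, area_a2_fvs, score,
                                threshold=0.50):
    # Steps A4/A5: first collation between the query and the registered master.
    if score(query_fv, master_fv) >= threshold:
        return "pass"                                   # step A10
    # Steps A6/A7: second collation builds provisional masters from area-A2 images.
    provisional = [fv for fv in area_a2_fvs
                   if score(fv, master_fv) >= threshold]
    # Steps A8/A9: third collation of the query against the provisional masters.
    if any(score(query_fv, fv) >= threshold for fv in provisional):
        return "pass"                                   # step A10
    return "deny"                                       # step A11

# Toy 1-D similarity for demonstration only.
score = lambda a, b: 1.0 - abs(a - b)
print(walk_through_authentication(0.0, 1.0, [0.5], score))  # 'pass' via third collation
```

Note how the third collation can rescue a genuine user whose first collation failed, which is exactly the false-rejection reduction described below.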
  • In the present embodiment, provisional master feature amounts are generated, and a third collation is performed using the query feature amount and the generated provisional master feature amounts. This makes it less likely that the person 30 is rejected because the first collation fails even though the person is the registered user.
  • The program according to the embodiment of the present invention may be any program that causes a computer to execute steps A1 to A11 shown in FIG. 8. By installing this program on a computer and executing it, the face recognition device and the face recognition method according to the present embodiment can be realized.
  • In that case, the processor of the computer functions as the detection unit 2, the extraction unit 3, the collation units 4, 5, and 7, and the determination unit 6, and performs the processing.
  • The program may also be executed by a plurality of computers, each of which may function as one of the detection unit 2, the extraction unit 3, the collation units 4, 5, and 7, and the determination unit 6.
  • FIG. 9 is a block diagram showing an example of a computer that realizes the face recognition device according to the embodiment of the present invention.
  • The computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These parts are connected to each other via a bus 121 so as to be capable of data communication.
  • The computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to, or in place of, the CPU 111.
  • The CPU 111 expands the programs (codes) of the present embodiment stored in the storage device 113 into the main memory 112 and executes them in a predetermined order to perform various operations.
  • The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
  • The program according to the present embodiment is provided stored on a computer-readable recording medium 120.
  • The program in the present embodiment may also be distributed over the Internet via the communication interface 117.
  • Examples of the storage device 113 include, in addition to a hard disk drive, a semiconductor storage device such as a flash memory.
  • The input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and mouse.
  • The display controller 115 is connected to the display device 119 and controls the display on the display device 119.
  • The data reader/writer 116 mediates data transmission between the CPU 111 and the recording medium 120, reads the program from the recording medium 120, and writes processing results of the computer 110 to the recording medium 120.
  • The communication interface 117 mediates data transmission between the CPU 111 and other computers.
  • Specific examples of the recording medium 120 include a general-purpose semiconductor storage device such as CF (CompactFlash (registered trademark)) or SD (Secure Digital), a magnetic recording medium such as a flexible disk, and an optical recording medium such as a CD-ROM (Compact Disk Read Only Memory).
  • The face recognition device 1 in the present embodiment can also be realized using hardware corresponding to each part instead of a computer on which the program is installed. Further, the face recognition device 1 may be realized partly by a program and partly by hardware.
  • (Appendix 1) A face recognition device comprising: a detection unit that detects a face image corresponding to a face from an image of a user captured in a shooting area; an extraction unit that extracts a feature amount using the detected face image; and a first collation unit that, when identification information identifying the user is acquired, performs a first collation using the query feature amount acquired before the time at which the identification information was acquired and the master feature amount registered in advance in a master storage unit in association with the identification information.
  • (Appendix 2) The face recognition device described in Appendix 1, further comprising: a second collation unit that, when the first collation fails, performs a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the time at which the image corresponding to the query feature amount was captured; and a determination unit that determines a provisional master feature amount from the provisional master candidate feature amounts based on the result of the second collation.
  • (Appendix 3) The face recognition device described in Appendix 2, further comprising a third collation unit that, when the provisional master feature amount is determined, performs a third collation using the query feature amount and the provisional master feature amount.
  • (Appendix 4) The face authentication device according to Appendix 3, wherein, when the third collation succeeds, the third collation unit transmits passage information indicating that the user is permitted to pass to a passage permission device.
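The three-stage collation described in Appendices 1 through 4 can be sketched in code. The following is a minimal illustration only, not the patented implementation: the similarity measure (cosine similarity), the threshold value, and all function names are assumptions introduced for this sketch.

```python
import math

SIMILARITY_THRESHOLD = 0.6  # hypothetical threshold; the publication does not specify one


def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def collate(feature, master, threshold=SIMILARITY_THRESHOLD):
    """Judge whether two feature amounts belong to the same person."""
    return cosine_similarity(feature, master) >= threshold


def authenticate(query_feature, candidate_features, master_feature):
    """Sketch of the three-stage collation (Appendices 1-4).

    query_feature      -- feature amount acquired before the ID was read
    candidate_features -- provisional master candidates from earlier frames
    master_feature     -- feature amount registered for the presented ID
    Returns True when passage should be permitted.
    """
    # First collation: query feature amount vs. registered master feature amount.
    if collate(query_feature, master_feature):
        return True

    # Second collation: among features from earlier frames, keep those that
    # match the master, and determine the provisional master feature amount.
    matches = [f for f in candidate_features if collate(f, master_feature)]
    if not matches:
        return False
    provisional_master = max(matches, key=lambda f: cosine_similarity(f, master_feature))

    # Third collation: query feature amount vs. provisional master feature amount.
    return collate(query_feature, provisional_master)
```

Under these assumptions, a query frame that fails direct collation against the registered master can still be matched through a better earlier frame, which is the fallback path of Appendices 2 and 3.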
  • and a first collation step of performing the first collation. A face authentication method characterized by comprising these steps.
  • (Appendix 6) The face authentication method according to Appendix 5, further comprising: a second collation step of, when the first collation fails, performing a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the time at which the image corresponding to the query feature amount was captured; and a determination step of determining a provisional master feature amount from among the provisional master candidate feature amounts based on the result of the second collation.
  • (Appendix 7) The face authentication method according to Appendix 6, further comprising a third collation step of performing a third collation using the query feature amount and the provisional master feature amount when the provisional master feature amount is determined.
  • (Appendix 8) The face authentication method according to Appendix 7, wherein, in the third collation step, when the third collation succeeds, passage information indicating that the user is permitted to pass is transmitted to a passage permission device.
  • and a first collation step of performing the first collation. A computer-readable recording medium recording a program including instructions for causing a computer to execute these steps.
  • (Appendix 10) The computer-readable recording medium according to Appendix 9, wherein the program further includes instructions for causing the computer to execute: a second collation step of, when the first collation fails, performing a second collation using the master feature amount and provisional master candidate feature amounts corresponding to one or more images captured before the time at which the image corresponding to the query feature amount was captured; and a determination step of determining a provisional master feature amount from among the provisional master candidate feature amounts based on the result of the second collation.
  • (Appendix 11) The computer-readable recording medium according to Appendix 10, wherein the program further includes instructions for causing the computer to execute a third collation step of performing a third collation using the query feature amount and the provisional master feature amount when the provisional master feature amount is determined.
  • (Appendix 12) The computer-readable recording medium according to Appendix 11, wherein, in the third collation step, when the third collation succeeds, passage information indicating that the user is permitted to pass is transmitted to a passage permission device.
  • As described above, the present invention is useful in fields where walk-through face authentication is required.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention concerns a face authentication device for preventing acts of impersonation, comprising: a detection unit (2) that uses an image, obtained by imaging a user in a shooting area, to detect from the image a face image corresponding to a face; an extraction unit (3) that extracts a feature amount using the detected face image; and a first collation unit (4) that, when identification information identifying the user has been acquired, performs a first collation using a query feature amount acquired before the time at which the identification information was acquired and a master feature amount registered in advance in a master storage unit and associated with the identification information.
PCT/JP2020/035737 2019-09-24 2020-09-23 Face authentication device, face authentication method, and computer-readable recording medium WO2021060256A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021548920A JP7248348B2 (ja) 2019-09-24 2020-09-23 Face authentication device, face authentication method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-173385 2019-09-24
JP2019173385 2019-09-24

Publications (1)

Publication Number Publication Date
WO2021060256A1 true WO2021060256A1 (fr) 2021-04-01

Family

ID=75166150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/035737 WO2021060256A1 (fr) 2019-09-24 2020-09-23 Face authentication device, face authentication method, and computer-readable recording medium

Country Status (2)

Country Link
JP (1) JP7248348B2 (fr)
WO (1) WO2021060256A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006236260A * 2005-02-28 2006-09-07 Toshiba Corp Face authentication device, face authentication method, and entrance/exit management device
JP2008052549A * 2006-08-25 2008-03-06 Hitachi Kokusai Electric Inc Image processing system
JP2009104599A * 2007-10-04 2009-05-14 Toshiba Corp Face authentication device, face authentication method, and face authentication system
JP2013210824A * 2012-03-30 2013-10-10 Secom Co Ltd Face image authentication device
JP2018128970A * 2017-02-10 2018-08-16 株式会社テイパーズ Non-stop face authentication system

Also Published As

Publication number Publication date
JP7248348B2 (ja) 2023-03-29
JPWO2021060256A1 (fr) 2021-04-01

Similar Documents

Publication Publication Date Title
KR102455633B1 (ko) Liveness test method and apparatus
JP6483485B2 (ja) Person authentication method
US20150169943A1 (en) System, method and apparatus for biometric liveness detection
US11503021B2 (en) Mobile enrollment using a known biometric
US11682236B2 (en) Iris authentication device, iris authentication method and recording medium
US11756338B2 (en) Authentication device, authentication method, and recording medium
US9292752B2 (en) Image processing device and image processing method
JP2020524860A (ja) Identity authentication method and apparatus, electronic device, computer program, and storage medium
JP4899552B2 (ja) Authentication device, authentication method, authentication program, and computer-readable recording medium recording the same
WO2020070821A1 (fr) Biometric identification device, biometric identification method, and biometric identification program
JP2006085268A (ja) Biometric authentication system and biometric authentication method
JP6311237B2 (ja) Collation device, collation method, collation system, and computer program
JP2003233816A (ja) Access control system
JP2007272775A (ja) Biometric collation system
JP6432634B2 (ja) Authentication device, authentication method, and program
WO2021060256A1 (fr) Face authentication device, face authentication method, and computer-readable recording medium
JP2003099780A (ja) Access control system
JP7504478B2 (ja) User authentication device, user authentication method, and computer-readable recording medium
JP6438693B2 (ja) Authentication device, authentication method, and program
JP2006085265A (ja) Authentication device, method, and program
US20230206686A1 (en) Face authentication method, storage medium, and face authentication device
WO2023175781A1 (fr) Authentication device, authentication method, and program
JP7415640B2 (ja) Authentication method, information processing device, and authentication program
KR102583982B1 (ko) Contactless access control method and access control system performing the same
WO2023188332A1 (fr) Person identification device, person identification method, and person identification program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20867385

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021548920

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20867385

Country of ref document: EP

Kind code of ref document: A1