CN108573203B - Identity authentication method and device and storage medium


Info

Publication number
CN108573203B
CN108573203B
Authority
CN
China
Prior art keywords
authenticated
person
face
image
certificate
Prior art date
Legal status
Active
Application number
CN201710218218.8A
Other languages
Chinese (zh)
Other versions
CN108573203A (en)
Inventor
范浩强 (Fan Haoqiang)
陈可卿 (Chen Keqing)
Current Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd, Beijing Megvii Technology Co Ltd
Priority to CN202011596232.XA (CN112651348B)
Publication of CN108573203A
Application granted
Publication of CN108573203B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861 Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention provide an identity authentication method, an identity authentication device, and a storage medium. The method comprises the following steps: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information to obtain an information authentication result; performing living body detection on the person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result. Because the method and the device combine the authenticated-information judgment with living body detection to determine whether the identity of the person to be authenticated is legal, the authentication result is more accurate than that of conventional authentication based solely on passwords or certificates, the security of user authentication is improved, and the rights and interests of the user are effectively protected.

Description

Identity authentication method and device and storage medium
Technical Field
The present invention relates to the field of image processing, and more particularly, to an identity authentication method and apparatus, and a storage medium.
Background
Technology products have become a pervasive part of modern social life: food, clothing, housing, and transportation are all closely tied to science and technology, and technology products have gradually reached every aspect of social life, becoming an indispensable part of modern daily life. However, while people enjoy the benefits of technology products, they also face the negative problems these products bring, such as information security.
At present, many fields involve information security problems, which are especially prominent in technical fields such as electronic commerce, mobile payment, and bank account opening. Specifically, in the above fields, user interactive authentication (also referred to as identity authentication) is currently mostly performed with passwords, and sometimes by swiping a certificate. Both approaches have drawbacks. The former requires the user to memorize a password, and entering it every time is cumbersome; once the password is stolen by an unauthorized person, the user is likely to suffer a loss of privacy or property. As for the latter, certificates are easy to forge or to use fraudulently, so the security is low. Therefore, it is necessary to provide a convenient and secure identity authentication method or system for application in technical fields such as e-commerce, mobile payment, and bank account opening.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides an identity authentication method and device and a storage medium.
According to an aspect of the present invention, there is provided an identity authentication method. The method comprises the following steps: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
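As a reading aid only, the following minimal Python sketch shows how the four claimed steps could be wired together; the two callables stand in for the concrete embodiments described below and are not part of the patent.

```python
from typing import Callable

def authenticate(personal_info: dict,
                 info_check: Callable[[dict], bool],
                 liveness_check: Callable[[], bool]) -> bool:
    """Hypothetical sketch of the claimed flow: acquire personal identification
    information, judge whether it is authenticated information, perform living body
    detection, and decide legality from (at least) the two results."""
    info_result = info_check(personal_info)      # information authentication result
    liveness_result = liveness_check()           # living body detection result
    # simplest combination: the identity is legal only when both results are positive
    return info_result and liveness_result
```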
Illustratively, the personal identification information is one or more of a certificate number, a name, a certificate face, a live-captured face (a face captured on site), a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live-captured face.
Before acquiring the personal identification information of the person to be authenticated, the identity authentication method further comprises: outputting indication information for instructing the person to be authenticated to provide a predetermined type of person information, wherein the personal identification information is the person information provided by the person to be authenticated or is obtained based on that person information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live-captured face, and acquiring the personal identification information of the person to be authenticated comprises: acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face, and a live-captured face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
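The patent does not fix the predetermined algorithm; as an illustration only, the transformed value could be a salted cryptographic digest of the initial information, for example:

```python
import hashlib

def transform_value(initial_info: str, salt: str = "example-salt") -> str:
    """Hypothetical 'predetermined algorithm': a salted SHA-256 digest of the initial
    information (e.g. a certificate number or a name). The salt and the choice of
    SHA-256 are illustrative assumptions, not taken from the patent."""
    return hashlib.sha256((salt + initial_info).encode("utf-8")).hexdigest()
```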
Illustratively, acquiring the personal identification information of the person to be authenticated includes: acquiring a certificate image of a person to be authenticated, wherein the information authentication result is a certificate authentication result; the method for performing the living body detection on the person to be authenticated to obtain the living body detection result comprises the following steps: acquiring a face image of a person to be authenticated; and performing living body detection by using the face image to obtain a living body detection result.
Illustratively, before determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, the identity authentication method further comprises the following steps: performing additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result comprises: and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judgment operation includes a certificate authenticity judgment operation and/or a face consistency judgment operation, and the additional judgment result includes a certificate authenticity judgment result and/or a face consistency judgment result, and the certificate authenticity judgment operation includes: judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result; the human face consistency judging operation comprises the following steps: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judgment result.
Illustratively, acquiring the face of the certificate of the person to be authenticated according to the certificate image comprises the following steps: and detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, acquiring the face of the certificate of the person to be authenticated according to the certificate image comprises the following steps: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from the authenticated certificate information database based on the character information in the certificate image; and determining the searched and matched certificate face in the certificate information as the certificate face of the person to be authenticated.
Illustratively, judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result comprises: extracting image characteristics of the certificate image; inputting the image characteristics into the trained certificate classifier to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
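The feature extraction and the classifier are left open in the text; the sketch below assumes a scikit-learn-style classifier exposing predict_proba and uses a toy intensity histogram as the image feature, purely to illustrate the "extract features, classify, return a confidence" structure.

```python
import numpy as np

def certificate_authenticity_confidence(certificate_image: np.ndarray, classifier) -> float:
    """Hypothetical sketch: extract image features from the certificate image and feed
    them to a trained certificate classifier; the returned value is the confidence
    that the certificate in the image is a real certificate. The histogram feature
    below only stands in for whatever features the real classifier was trained on."""
    # toy feature: a normalised intensity histogram of the image
    hist, _ = np.histogram(certificate_image, bins=64, range=(0, 255), density=True)
    features = hist.reshape(1, -1)
    return float(classifier.predict_proba(features)[0, 1])   # P(real certificate)
```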
Illustratively, judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result comprises: identifying an image block containing certificate identification information from a certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Illustratively, in the process of determining whether the identity of the person to be authenticated is legitimate according to the certificate authentication result, the living body detection result and the additional judgment result, each of the certificate authentication result, the living body detection result and the additional judgment result has a respective weight coefficient.
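The patent only states that each result carries its own weight coefficient; one natural reading is a weighted sum compared against a threshold, sketched below with illustrative weights and threshold.

```python
def identity_is_legal(certificate_result: float,
                      liveness_result: float,
                      additional_result: float,
                      weights: tuple = (0.4, 0.4, 0.2),
                      threshold: float = 0.8) -> bool:
    """Hypothetical weighted combination of the certificate authentication result,
    the living body detection result and the additional judgment result (each a
    confidence in [0, 1]). The weights and threshold are assumptions for illustration."""
    w1, w2, w3 = weights
    score = w1 * certificate_result + w2 * liveness_result + w3 * additional_result
    return score >= threshold
```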
Illustratively, acquiring a certificate image of the person to be authenticated includes: acquiring a pre-shot image collected in real time of the certificate of the person to be authenticated under the current shooting condition; evaluating an image attribute of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold, generating pre-shooting prompt information according to the image attribute of the pre-shot image, wherein the pre-shooting prompt information is used for prompting the person to be authenticated to adjust the shooting condition of the certificate; and saving the pre-shot image as the certificate image when the evaluation value of the image attribute of the pre-shot image is equal to or greater than the preset evaluation value threshold.
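The evaluated image attribute is not named in the text; a minimal sketch, assuming sharpness (variance of the Laplacian) as the attribute and OpenCV for the computation, where the accepted frame is kept as the certificate image:

```python
import cv2
import numpy as np

def evaluate_pre_shot(frame: np.ndarray, threshold: float = 100.0):
    """Hypothetical image-attribute evaluation for a pre-shot certificate frame.
    Returns (certificate_image, None) when the evaluation value reaches the preset
    threshold, or (None, prompt) when the user should adjust the shooting condition."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # evaluation value
    if sharpness < threshold:
        return None, "Image is blurry: hold the certificate steady and move it into focus"
    return frame, None
```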
Illustratively, the determining whether the personal identification information is authenticated information to obtain the information authentication result includes: performing character recognition on the certificate image to obtain character information in the certificate image; searching in the authenticated certificate information database based on the character information in the certificate image to obtain a certificate authentication result; and the certificate authentication result is the confidence level that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the performing text recognition on the certificate image to obtain text information in the certificate image comprises: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
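A minimal sketch of the "recognise the text, then search the authenticated certificate information database" step, assuming pytesseract as the OCR engine and an SQLite table named authenticated_certificates with a certificate_number column; both are illustrative choices, not specified by the patent.

```python
import sqlite3
import pytesseract           # one possible OCR engine; the patent does not name one
from PIL import Image

def certificate_authentication(cert_image_path: str, db_path: str) -> float:
    """Hypothetical certificate authentication: OCR the certificate image, look up the
    recognised number in the authenticated certificate database, and treat the result
    as a confidence (1.0 if a matching record exists, 0.0 otherwise)."""
    text = pytesseract.image_to_string(Image.open(cert_image_path), lang="chi_sim")
    # illustrative heuristic: take the first purely numeric line as the certificate number
    certificate_number = next((ln.strip() for ln in text.splitlines() if ln.strip().isdigit()), "")
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT 1 FROM authenticated_certificates WHERE certificate_number = ?",
            (certificate_number,),
        ).fetchone()
    return 1.0 if row else 0.0
```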
Illustratively, before recognizing the characters in the image block containing the characters, the identity authentication method further comprises: rectifying the image block containing the characters to a horizontal orientation.
Exemplarily, after the characters in the image block containing the characters are recognized to obtain the character information in the certificate image, performing character recognition on the certificate image further includes: outputting the character information in the certificate image for the user to check; receiving character correction information input by the user; comparing the characters to be corrected indicated by the character correction information with the corresponding characters in the character information in the certificate image; and if the difference between the characters to be corrected indicated by the character correction information and the corresponding characters in the character information in the certificate image is smaller than a preset difference threshold, updating the character information in the certificate image with the character correction information.
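The difference measure is not specified; one plausible sketch uses the complement of difflib's similarity ratio as the difference and an assumed threshold of 0.3:

```python
import difflib

def apply_user_correction(recognized: str, corrected: str, max_difference: float = 0.3) -> str:
    """Hypothetical handling of user text correction: accept the correction only when it
    differs from the recognised text by less than a preset difference threshold."""
    difference = 1.0 - difflib.SequenceMatcher(None, recognized, corrected).ratio()
    return corrected if difference < max_difference else recognized
```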
Illustratively, performing the living body detection on the person to be authenticated to obtain the living body detection result comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction or not so as to obtain a living body detection result.
Illustratively, performing the living body detection on the person to be authenticated to obtain the living body detection result comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; capturing a skin area image before a person to be authenticated executes a living body action and a skin area image after the person to be authenticated executes the living body action from the acquired face image; and inputting the skin area image before the living body action is executed by the person to be authenticated and the skin area image after the living body action is executed into the skin elasticity classifier to obtain a living body detection result.
Illustratively, the identity authentication method comprises: acquiring a face image of a real person before the living body action is performed and a face image of the real person after the living body action is performed, and acquiring a face image of a fake person before the living body action is performed and a face image of the fake person after the living body action is performed; extracting, from the face images of the real person before and after the living body action, a skin area image before the living body action and a skin area image after the living body action as positive sample images; extracting, from the face images of the fake person before and after the living body action, a skin area image before the living body action and a skin area image after the living body action as negative sample images; and training a classifier model with the positive sample images and the negative sample images to obtain the skin elasticity classifier.
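A minimal training sketch under stated assumptions: each sample is a (skin-region-before, skin-region-after) pair of equally sized grayscale patches, and a scikit-learn SVM stands in for the unspecified classifier model.

```python
import numpy as np
from sklearn.svm import SVC

def train_skin_elasticity_classifier(pos_pairs, neg_pairs):
    """Hypothetical training of the skin elasticity classifier: real-person pairs are
    positive samples, fake-person pairs (photos, masks, screens) are negative samples.
    An SVM over the concatenated, flattened pair is used purely as an illustration."""
    def to_feature(pair):
        before, after = pair
        return np.concatenate([before.ravel(), after.ravel()]).astype(np.float32) / 255.0

    X = np.array([to_feature(p) for p in pos_pairs + neg_pairs])
    y = np.array([1] * len(pos_pairs) + [0] * len(neg_pairs))
    clf = SVC(probability=True).fit(X, y)
    return clf   # clf.predict_proba(...) then yields a living body detection confidence
```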
Illustratively, capturing, from the acquired face images, a skin area image before the person to be authenticated performs the living body action and a skin area image after the living body action is performed includes: selecting, from the collected face images, a face image before the person to be authenticated performs the living body action and a face image after the living body action is performed; locating, with the face detection model, the face in the face image before the living body action and the face in the face image after the living body action; locating, with the face key point positioning model, the key points of the face in the face image before the living body action and of the face in the face image after the living body action; and dividing the faces in the two face images into regions according to the located face positions and key point positions, so as to obtain the skin area image before the living body action and the skin area image after the living body action.
Exemplarily, the identity authentication method further comprises: acquiring a sample face image, wherein the position of a face and the position of a key point of the face in the sample face image are marked; and carrying out neural network training by using the sample face image to obtain a face detection model and a face key point positioning model.
Illustratively, performing the living body detection on the person to be authenticated to obtain the living body detection result comprises: acquiring a face image acquired by a binocular camera aiming at the face of a person to be authenticated under structured light irradiation; and determining whether the face of the person to be authenticated belongs to the living body according to the face image so as to obtain a living body detection result.
Illustratively, performing the living body detection on the person to be authenticated to obtain the living body detection result comprises: step S110: acquiring one or more illumination images acquired aiming at the face of a person to be authenticated under the irradiation of detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in one or more illumination images so as to obtain an illumination living body verification result; and step S130: determining whether the authenticated person passes the living body verification based on at least the illumination living body verification result to obtain a living body detection result.
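One simple way to use the light reflection characteristics in steps S110 and S120 is to check whether the facial brightness tracks the emitted detection-light pattern; the correlation measure and its threshold below are assumptions, not taken from the patent.

```python
import numpy as np

def illumination_liveness(illumination_frames, emitted_intensities, min_corr=0.6) -> bool:
    """Hypothetical illumination-based liveness check (steps S110-S120): a real face
    reflects the changing detection light, so the average facial brightness should
    follow the emitted light pattern; a correlation below the threshold suggests a
    screen, photo or other non-living surface."""
    observed = np.array([frame.mean() for frame in illumination_frames], dtype=np.float64)
    emitted = np.array(emitted_intensities, dtype=np.float64)
    corr = np.corrcoef(observed, emitted)[0, 1]
    return bool(corr >= min_corr)   # illumination living body verification result
```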
Illustratively, the pattern of the detection light is changed at least once during the illumination of the face of the person to be authenticated.
Illustratively, the pattern of detected light changes between every two consecutive moments in time during the illumination of the face of the person to be authenticated.
For example, in the process of irradiating the face of the person to be authenticated, the pattern of the detection light is changed randomly or set in advance.
Illustratively, performing the living body detection on the person to be authenticated to obtain the living body detection result comprises: step S140: outputting an action instruction, wherein the action instruction is used for instructing an authenticated person to execute a corresponding action; step S150: acquiring a plurality of action images acquired aiming at the face of a person to be authenticated; step S160: detecting an action performed by a person to be authenticated based on the plurality of action images; and step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction so as to obtain an action living body verification result; the step S130 includes: and determining whether the person to be authenticated passes the living body verification or not based on the illumination living body verification result and the action living body verification result so as to obtain a living body detection result.
Exemplarily, the step S170 includes: if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is detected based on a plurality of action images acquired within a period of time not longer than a first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is not detected based on a plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
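A minimal sketch of step S170 under stated assumptions: detect_action is a hypothetical callable that returns the action currently seen in the latest action images (or None), and the first preset time defaults to 10 seconds for illustration.

```python
import time

def action_liveness(detect_action, expected_action: str, first_preset_time: float = 10.0) -> bool:
    """Hypothetical action living body verification: the face is judged to belong to a
    living body only if the instructed action is detected within the first preset time."""
    deadline = time.monotonic() + first_preset_time
    while time.monotonic() < deadline:
        if detect_action() == expected_action:   # e.g. "blink", "open_mouth", "turn_head"
            return True                           # action living body verification passed
        time.sleep(0.1)
    return False                                  # not detected within the first preset time
```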
Exemplarily, the identity authentication method further comprises: counting once each time steps S140 to S170 are executed, so as to obtain the number of action verifications; after step S170, the identity authentication method further comprises: if the action living body verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting first error information and judging whether the number of action verifications reaches a first count threshold; if the number of action verifications reaches the first count threshold, proceeding to step S130; and if it does not, returning to step S140, or to step S110 in the case that step S110 is executed before step S140, wherein the first error information is used for prompting that the living body verification for the person to be authenticated has failed.
Exemplarily, before the step S110, the identity authentication method further includes: step S108: judging whether the image acquisition condition of the person to be authenticated meets a preset requirement, if so, turning to the step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle between the person to be authenticated and the image acquisition device.
Illustratively, before or simultaneously with step S108, the identity authentication method further comprises: step S106: and outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to enable the face to be over against the image acquisition device and to be close to the image acquisition device.
Exemplarily, step S106 includes: the first prompt information is output in one or more of a voice form, an image form and a text form.
Exemplarily, step S108 includes: acquiring a real-time image acquired by a face of a person to be authenticated; outputting a preset area for calibrating an image acquisition condition and a human face area in a real-time image in real time for displaying; and judging whether the image acquisition condition meets a preset requirement or not according to the face area, if the face area is located in the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the real-time image is not greater than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
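A minimal sketch of this variant of step S108, assuming axis-aligned (x, y, w, h) boxes for the face region and the preset region and an illustrative first preset proportion of 0.2:

```python
def acquisition_condition_met(face_box, preset_box, frame_area, first_preset_ratio=0.2) -> bool:
    """Hypothetical check: the face region must lie inside the preset region and occupy
    more than the first preset proportion of the real-time image."""
    fx, fy, fw, fh = face_box
    px, py, pw, ph = preset_box
    inside = (fx >= px and fy >= py and fx + fw <= px + pw and fy + fh <= py + ph)
    ratio_ok = (fw * fh) / float(frame_area) > first_preset_ratio
    return inside and ratio_ok
```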
Exemplarily, the identity authentication method further comprises: and if the proportion of the face area in the real-time image is not more than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Exemplarily, step S108 includes: acquiring a real-time image acquired by a face of a person to be authenticated; outputting a preset area for calibrating an image acquisition condition and a human face area in a real-time image in real time for displaying; and judging whether the image acquisition condition meets a preset requirement or not according to the face area, if the face area is located in the preset area and the proportion of the face area in the preset area is greater than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the preset area is not greater than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
Exemplarily, the identity authentication method further comprises: and if the proportion of the face area in the preset area is not more than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Exemplarily, the identity authentication method further comprises: judging the relative position relation between the face area and a preset area in real time; and outputting third acquisition prompt information in real time based on the relative position relationship between the face region and the preset region to prompt that the relative position relationship between the person to be authenticated and the image acquisition device is changed so that the face region is close to the preset region.
Exemplarily, step S108 includes: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state or not according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, and otherwise, determining that the image acquisition condition does not meet the preset requirement.
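A minimal sketch of the posture-based variant of step S108, assuming the posture information is a gravity/accelerometer reading in the device frame and that "vertically placed" tolerates an assumed 15-degree tilt:

```python
import math

def device_is_upright(accel_xyz, max_tilt_deg: float = 15.0) -> bool:
    """Hypothetical verticality check for the image acquisition device based on its
    gravity vector; when the device is upright, gravity lies essentially along the
    device's y axis. The tolerance is an illustrative assumption."""
    ax, ay, az = accel_xyz                      # gravity vector in the device frame (m/s^2)
    g = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    tilt = math.degrees(math.acos(min(1.0, abs(ay) / g)))
    return tilt <= max_tilt_deg
```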
Exemplarily, the identity authentication method further comprises: counting once each time steps S110 to S120 are executed, so as to obtain the number of illumination verifications; after step S120, the identity authentication method further comprises: if the illumination living body verification result shows that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the number of illumination verifications reaches a second count threshold; if the number of illumination verifications reaches the second count threshold, proceeding to step S130; and if it does not, returning to step S108 or to step S110, wherein the second error information is used for prompting that the living body verification for the person to be authenticated has failed.
Illustratively, before step S110 or during the process of performing step S110 and step S120, the identity authentication method further includes: and outputting second prompt information, wherein the second prompt information is used for prompting that the person to be authenticated keeps still within a second preset time.
Illustratively, the second prompt information is a countdown message corresponding to the second preset time.
The detection light is obtained, for example, by dynamically changing the color and/or position of the light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of light emitted by the display screen is dynamically changed by changing the content displayed on the display screen to emit detection light to the face of the person to be authenticated.
According to another aspect of the present invention, an identity authentication apparatus is provided. The device includes: the information acquisition module is used for acquiring personal identification information of a person to be authenticated; the authenticated information judging module is used for judging whether the personal identification information is authenticated information or not so as to obtain an information authentication result; the living body detection module is used for carrying out living body detection on the personnel to be authenticated so as to obtain a living body detection result; and the identity determining module is used for determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the personal identification information is one or more of a certificate number, a name, a certificate face, a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live-captured face.
Exemplarily, the identity authentication apparatus further comprises: the indication information output module is used for outputting indication information for indicating the person to be authenticated to provide the person information of the preset type before the information acquisition module acquires the personal identification information of the person to be authenticated; the personal identification information is personnel information provided by a person to be authenticated or obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live-captured face, and the information acquisition module includes: an initial information acquisition submodule for acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face, and a live-captured face; and a transformation submodule for transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
Illustratively, the information acquisition module includes: the certificate image acquisition submodule is used for acquiring a certificate image of a person to be authenticated, wherein the information authentication result is a certificate authentication result; the living body detecting module includes: the face image acquisition sub-module is used for acquiring a face image of a person to be authenticated; and the detection submodule is used for carrying out living body detection by utilizing the face image so as to obtain a living body detection result.
Exemplarily, the identity authentication apparatus further comprises: the additional judgment module is used for executing additional judgment operation by utilizing the certificate image and/or the face image to obtain an additional judgment result before the identity determination module determines whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result; the identity determination module comprises: and the identity determining submodule is used for determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judging module comprises a certificate authenticity judging sub-module and/or a face consistency judging sub-module, the certificate authenticity judging sub-module is used for executing certificate authenticity judging operation, the face consistency judging sub-module is used for executing face consistency judging operation, and the additional judging result comprises a certificate authenticity judging result and/or a face consistency judging result,
the certificate authenticity judging sub-module comprises: a certificate authenticity judging unit for judging whether the certificate in the certificate image is a real certificate, so as to obtain the certificate authenticity judgment result; and the face consistency judging sub-module comprises: a certificate face acquisition unit for acquiring the certificate face of the person to be authenticated according to the certificate image; and a face comparison unit for comparing the certificate face of the person to be authenticated with the face in the face image to obtain the face consistency judgment result.
Illustratively, the certificate face acquisition unit includes: and the certificate face detection subunit is used for detecting the face from the certificate image so as to obtain the certificate face of the person to be authenticated.
Illustratively, the certificate face acquisition unit includes: a character recognition subunit for performing character recognition on the certificate image to obtain the character information in the certificate image; a searching subunit for searching for matched certificate information from the authenticated certificate information database based on the character information in the certificate image; and a certificate face determining subunit for determining the certificate face in the matched certificate information found by the search as the certificate face of the person to be authenticated.
Illustratively, the certificate authenticity judging unit includes: the characteristic extraction subunit is used for extracting the image characteristics of the certificate image; the input subunit is used for inputting the image characteristics into the trained certificate classifier so as to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Illustratively, the certificate authenticity judging unit includes: the image block identification subunit is used for identifying the image block containing the certificate identification information from the certificate image; the identification information identification subunit is used for identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Illustratively, in the process that the identity determining submodule determines whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result, each of the certificate authentication result, the living body detection result and the additional judgment result has a respective weight coefficient.
Illustratively, the certificate image acquisition sub-module includes: a collecting unit for acquiring a pre-shot image collected in real time of the certificate of the person to be authenticated under the current shooting condition; an evaluation unit for evaluating an image attribute of the pre-shot image in real time; a prompting unit for generating pre-shooting prompt information according to the image attribute of the pre-shot image when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold, the pre-shooting prompt information being used for prompting the person to be authenticated to adjust the shooting condition of the certificate; and a saving unit for saving the pre-shot image as the certificate image when the evaluation value of the image attribute of the pre-shot image is equal to or greater than the preset evaluation value threshold.
Illustratively, the authenticated information determination module includes: the character recognition submodule is used for carrying out character recognition on the certificate image so as to obtain character information in the certificate image; the search submodule is used for searching in the authenticated certificate information database based on the text information in the certificate image so as to obtain a certificate authentication result; and the certificate authentication result is the confidence level that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the word recognition sub-module includes: the character positioning unit is used for positioning characters in the certificate image to obtain an image block containing the characters; and the character recognition unit is used for recognizing characters in the image block containing the characters so as to obtain character information in the certificate image.
Exemplarily, the identity authentication apparatus further comprises: and the character correction module is used for correcting the image block containing the characters into a horizontal state before the character recognition unit recognizes the characters in the image block containing the characters.
Illustratively, the word recognition sub-module further comprises: the text output unit is used for outputting text information in the certificate image for a user to check; a correction information receiving unit for receiving character correction information input by a user; the character comparison unit is used for comparing the characters to be corrected indicated by the character correction information with the corresponding characters in the character information in the certificate image; and the character updating unit is used for updating the character information in the certificate image by using the character correction information if the difference between the character to be corrected indicated by the character correction information and the corresponding character in the character information in the certificate image is smaller than a preset difference threshold value.
Illustratively, the liveness detection module includes: the instruction generation submodule is used for generating a living body action instruction, and the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; the real-time image acquisition sub-module is used for acquiring a face image of a person to be authenticated, which is acquired in real time; the face detection submodule is used for detecting the face in the face image; and the living body action execution judging submodule is used for judging whether the human face in the human face image executes the living body action indicated by the living body action instruction or not so as to obtain a living body detection result.
Illustratively, the liveness detection module includes: the instruction generation submodule is used for generating a living body action instruction, and the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; the real-time image acquisition sub-module is used for acquiring a face image of a person to be authenticated, which is acquired in real time; the skin area capturing submodule is used for capturing a skin area image of a person to be authenticated before the living body action is executed and a skin area image of the person after the living body action is executed from the acquired face image; and the input submodule is used for inputting the skin area image before the person to be authenticated performs the living body action and the skin area image after the person to be authenticated performs the living body action into the skin elasticity classifier so as to obtain a living body detection result.
Illustratively, the identity authentication apparatus includes: a real-and-fake image acquisition module for acquiring a face image of a real person before the living body action is performed and a face image of the real person after the living body action is performed, and a face image of a fake person before the living body action is performed and a face image of the fake person after the living body action is performed; a positive sample extraction submodule for extracting, from the face images of the real person before and after the living body action, a skin area image before the living body action and a skin area image after the living body action as positive sample images; a negative sample extraction submodule for extracting, from the face images of the fake person before and after the living body action, a skin area image before the living body action and a skin area image after the living body action as negative sample images; and a training sub-module for training the classifier model using the positive sample images and the negative sample images to obtain the skin elasticity classifier.
Illustratively, the skin region capture submodule includes: the image selection unit is used for selecting a face image before the person to be authenticated executes the living body action and a face image after the person to be authenticated executes the living body action from the collected face images; the human face positioning unit is used for positioning the human faces in the human face images before the human body to be authenticated executes the living body action and the human faces in the human face images after the human body action is executed by utilizing the human face detection model; the key point positioning unit is used for positioning the key points of the face in the face image before the person to be authenticated executes the living body action and the face image after the person to be authenticated executes the living body action by using the face key point positioning model; and the skin area obtaining unit is used for carrying out area division on the human faces in the human face image before the person to be authenticated executes the living body action and the human face in the human face image after the person to be authenticated executes the living body action according to the human face position and the key point position obtained through positioning so as to obtain the skin area image before the person to be authenticated executes the living body action and the skin area image after the person to be authenticated executes the living body action.
Exemplarily, the identity authentication apparatus further comprises: the system comprises a sample image acquisition module, a face recognition module and a face recognition module, wherein the sample image acquisition module is used for acquiring a sample face image, and the position of a face in the sample face image and the position of a key point of the face are marked; and the training module is used for carrying out neural network training by utilizing the sample face image so as to obtain a face detection model and a face key point positioning model.
Illustratively, the liveness detection module includes: the structured light image acquisition sub-module is used for acquiring a face image which is acquired by the binocular camera aiming at the face of the person to be authenticated under the structured light irradiation; and the living body determining submodule is used for determining whether the face of the person to be authenticated belongs to a living body according to the face image so as to obtain a living body detection result.
Illustratively, the liveness detection module includes: the illumination image acquisition sub-module is used for acquiring one or more illumination images acquired aiming at the face of a person to be authenticated under the irradiation of the detection light; the illumination living body verification sub-module is used for determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in one or more illumination images so as to obtain an illumination living body verification result; and a living body verification passing determination sub-module for determining whether the authenticated person passes the living body verification based on at least the illumination living body verification result to obtain a living body detection result.
Illustratively, the pattern of the detection light is changed at least once during the illumination of the face of the person to be authenticated.
Illustratively, the pattern of detected light changes between every two consecutive moments in time during the illumination of the face of the person to be authenticated.
For example, in the process of irradiating the face of the person to be authenticated, the pattern of the detection light is changed randomly or set in advance.
Illustratively, the liveness detection module includes: the instruction output submodule is used for outputting an action instruction, wherein the action instruction is used for indicating an authenticated person to execute a corresponding action; the action image acquisition sub-module is used for acquiring a plurality of action images acquired aiming at the face of a person to be authenticated; the action detection submodule is used for detecting actions performed by the person to be authenticated based on the action images; the action living body verification submodule is used for determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction so as to obtain an action living body verification result; the in-vivo verification pass determination sub-module includes: and a pass determination unit for determining whether the person to be authenticated passes the living body verification based on the illumination living body verification result and the action living body verification result to obtain a living body detection result.
Illustratively, the action liveness verification sub-module includes: a living body determination unit configured to determine that the face of the person to be authenticated belongs to the living body if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is detected based on the plurality of action images acquired within a period not longer than a first preset time, and determine that the face of the person to be authenticated does not belong to the living body if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is not detected based on the plurality of action images acquired within the first preset time.
Exemplarily, the identity authentication apparatus further comprises: a first counting module for counting once each time the process from the operation of the instruction output submodule to the operation of the action living body verification submodule is run, so as to obtain the number of action verifications; the identity authentication apparatus further includes: a first verification error execution module for outputting first error information if the action living body verification result indicates that the face of the person to be authenticated does not belong to a living body, and judging whether the number of action verifications reaches a first count threshold; if the number of action verifications reaches the first count threshold, the living body verification pass determination sub-module is started, and if it does not, the instruction output submodule is restarted, or the illumination image acquisition sub-module is restarted in the case that the illumination image acquisition sub-module runs before the instruction output submodule, wherein the first error information is used for prompting that the living body verification for the person to be authenticated has failed.
Exemplarily, the identity authentication apparatus further comprises: and the condition judgment module is used for judging whether the image acquisition condition of the person to be authenticated meets the preset requirement or not, and starting the illumination image acquisition sub-module if the image acquisition condition meets the preset requirement, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle between the person to be authenticated and the image acquisition device.
Exemplarily, the identity authentication apparatus further comprises: the first prompt information output module is used for outputting first prompt information before the condition judgment module judges whether the image acquisition condition of the person to be authenticated meets the preset requirement or simultaneously with the condition judgment module judging whether the image acquisition condition of the person to be authenticated meets the preset requirement, wherein the first prompt information is used for prompting the person to be authenticated to enable the face of the person to be authenticated to be over against the image acquisition device and to be close to the image acquisition device.
Illustratively, the first prompt information output module includes: and the first prompt information output sub-module is used for outputting the first prompt information in one or more of a voice form, an image form and a character form.
Illustratively, the condition judging module includes: the first real-time image acquisition sub-module is used for acquiring a real-time image acquired aiming at the face of a person to be authenticated; the first real-time output submodule is used for outputting a preset area used for calibrating an image acquisition condition and a face area in a real-time image in real time for display; and the first judgment submodule is used for judging whether the image acquisition condition meets the preset requirement or not according to the face area detected in the real-time image, if the face area is positioned in the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, the image acquisition condition is determined to meet the preset requirement, and if the face area is not positioned in the preset area or the proportion of the face area in the real-time image is not greater than the first preset proportion, the image acquisition condition is determined not to meet the preset requirement.
Exemplarily, the identity authentication apparatus further comprises: and the first acquisition prompt output module is used for outputting first acquisition prompt information in real time to prompt a person to be authenticated to approach the image acquisition device if the proportion of the face area in the real-time image is not greater than a first preset proportion.
Illustratively, the condition judging module includes: the second real-time image acquisition sub-module is used for acquiring a real-time image acquired aiming at the face of the person to be authenticated; the second real-time output submodule is used for outputting the preset area used for calibrating the image acquisition condition and the face area in the real-time image in real time for display; and the second judgment submodule is used for judging whether the image acquisition condition meets the preset requirement or not according to the face area, if the face area is positioned in the preset area and the proportion of the face area in the preset area is greater than a second preset proportion, the image acquisition condition is determined to meet the preset requirement, and if the face area is not positioned in the preset area or the proportion of the face area in the preset area is not greater than the second preset proportion, the image acquisition condition is determined not to meet the preset requirement.
Exemplarily, the identity authentication apparatus further comprises: and the second acquisition prompt output module is used for outputting second acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device if the proportion of the face area in the preset area is not greater than the second preset proportion.
Exemplarily, the identity authentication apparatus further comprises: the position relationship judging module is used for judging the relative position relationship between the face area and the preset area in real time; and the third acquisition prompt output module is used for outputting third acquisition prompt information in real time based on the relative position relationship between the face area and the preset area so as to prompt that the relative position relationship between the person to be authenticated and the image acquisition device is changed to enable the face area to be close to the preset area.
Illustratively, the condition judging module includes: the attitude information acquisition module is used for acquiring attitude information of the image acquisition device; and the third judgment submodule is used for judging whether the image acquisition device is in a vertical placement state or not according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, and otherwise, determining that the image acquisition condition does not meet the preset requirement.
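As one possible realization of the vertical-placement check, the sketch below assumes the posture information is a gravity vector reported by an accelerometer; the axis convention and the tolerance angle are illustrative assumptions:

```python
import math

def is_vertically_placed(gravity_xyz, tolerance_deg=15.0):
    """Treat the device as vertically placed when gravity is roughly aligned with
    the device's downward (-y) axis, i.e. the screen stands upright."""
    gx, gy, gz = gravity_xyz
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    if norm == 0:
        return False
    # Angle between the measured gravity direction and the device's -y axis.
    tilt = math.degrees(math.acos(min(1.0, max(-1.0, -gy / norm))))
    return tilt <= tolerance_deg
```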
Exemplarily, the identity authentication apparatus further comprises: the second counting module is used for counting once in the process from the operation of the illumination image acquisition module to the operation of the illumination living body verification module so as to obtain the illumination verification times; the identity authentication apparatus further comprises: and the second verification error execution module is used for outputting second error information and judging whether the illumination verification times reach a second time threshold value or not if the illumination living body verification result shows that the face of the person to be authenticated does not belong to the living body, starting the living body verification passing determination sub-module if the illumination verification times reach the second time threshold value, and restarting the condition judgment module or restarting the illumination image acquisition sub-module if the illumination verification times do not reach the second time threshold value, wherein the second error information is used for prompting that the living body verification aiming at the person to be authenticated fails.
Exemplarily, the identity authentication apparatus further comprises: and the second prompt information output module is used for outputting second prompt information before the illumination image acquisition sub-module acquires one or more illumination images acquired aiming at the face of the person to be authenticated under the irradiation of the detection light or in the process that the illumination image acquisition sub-module acquires one or more illumination images acquired aiming at the face of the person to be authenticated under the irradiation of the detection light and the illumination living body verification sub-module determines whether the face of the person to be authenticated belongs to the living body or not based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, wherein the second prompt information is used for prompting that the person to be authenticated is kept still within a second preset time.
Illustratively, the second prompt information is countdown information corresponding to the second preset time.
The detection light is obtained, for example, by dynamically changing the color and/or position of the light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of light emitted by the display screen is dynamically changed by changing the content displayed on the display screen to emit detection light to the face of the person to be authenticated.
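As a purely illustrative sketch of emitting detection light from a display screen by changing the displayed content, the snippet below cycles full-screen colours with OpenCV; the colour sequence and timing are assumptions for the example and are not the specific light pattern of this disclosure:

```python
import numpy as np
import cv2

def emit_detection_light(colors_bgr=((255, 0, 0), (0, 255, 0), (0, 0, 255)),
                         frame_ms=300, size=(480, 640)):
    """Show a sequence of solid-colour frames so that the screen illuminates the
    face with a changing light pattern while illumination images are captured."""
    for color in colors_bgr:
        frame = np.zeros((size[0], size[1], 3), dtype=np.uint8)
        frame[:] = color                      # fill the whole frame with one colour
        cv2.imshow("detection_light", frame)
        cv2.waitKey(frame_ms)                 # keep the colour on screen for frame_ms ms
    cv2.destroyWindow("detection_light")
```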
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to perform the steps of: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the personal identification information is one or more of a certificate number, a name, a certificate face, a live captured face, a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live captured face.
Illustratively, before the obtaining of the personal identification information of the person to be authenticated is performed, the computer program instructions are further configured to, when executed by the processor, perform the steps of: outputting indication information for indicating a person to be authenticated to provide predetermined types of person information; the personal identification information is personnel information provided by a person to be authenticated or obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live captured face, and the step, performed when the computer program instructions are executed by the processor, of obtaining the personal identification information of the person to be authenticated includes: acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a live captured face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
Illustratively, the step of obtaining personal identification information of the person to be authenticated performed by the computer program instructions when executed by the processor comprises: acquiring a certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result; the step, performed by the computer program instructions when executed by the processor, of performing living body detection on the person to be authenticated to obtain a living body detection result comprises: acquiring a face image of the person to be authenticated; and performing living body detection by using the face image to obtain a living body detection result.
Illustratively, before the step of determining whether the identity of the person to be authenticated is legitimate at least according to the information authentication result and the living body detection result, which is performed by the processor, the computer program instructions are further configured to perform the following steps when executed by the processor: performing additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; the step of determining whether the identity of the person to be authenticated is legitimate at least according to the information authentication result and the living body detection result, which is executed by the processor, of the computer program instructions comprises: and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judgment operation includes a certificate authenticity judgment operation and/or a face consistency judgment operation, and the additional judgment result includes a certificate authenticity judgment result and/or a face consistency judgment result, and the certificate authenticity judgment operation includes: judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result; the human face consistency judging operation comprises the following steps: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judgment result.
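The face consistency judgment may, for example, be realized by comparing feature embeddings of the two faces; the sketch below assumes a hypothetical `extract_face_embedding` function (any face recognition model could supply it) and an illustrative similarity threshold:

```python
import numpy as np

def face_consistency_score(cert_face_img, live_face_img, extract_face_embedding):
    """Return a similarity score in [0, 1] between the certificate face and the
    face captured on site; extract_face_embedding is assumed to map an image to
    a fixed-length feature vector."""
    a = np.asarray(extract_face_embedding(cert_face_img), dtype=np.float32)
    b = np.asarray(extract_face_embedding(live_face_img), dtype=np.float32)
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return (cos + 1.0) / 2.0   # map cosine similarity from [-1, 1] to [0, 1]

def faces_consistent(cert_face_img, live_face_img, extract_face_embedding,
                     threshold=0.8):   # illustrative threshold
    return face_consistency_score(cert_face_img, live_face_img,
                                  extract_face_embedding) >= threshold
```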
Illustratively, the step, performed when the computer program instructions are executed by the processor, of obtaining the certificate face of the person to be authenticated from the certificate image comprises: detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, the step, performed when the computer program instructions are executed by the processor, of acquiring the certificate face of the person to be authenticated from the certificate image comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from the authenticated certificate information database based on the character information in the certificate image; and determining the certificate face in the searched matched certificate information as the certificate face of the person to be authenticated.
Illustratively, the step, performed when the computer program instructions are executed by the processor, of judging whether the certificate in the certificate image is a real certificate to obtain the certificate authenticity judgment result comprises: extracting image characteristics of the certificate image; and inputting the image characteristics into the trained certificate classifier to obtain a certificate authenticity judgment result, wherein the certificate authenticity judgment result is the confidence that the certificate in the certificate image is a real certificate.
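A minimal sketch of this classifier-based authenticity judgment, assuming an already-trained binary classifier with a scikit-learn-style `predict_proba` interface and a hypothetical `extract_image_features` function:

```python
def certificate_authenticity_confidence(cert_image, extract_image_features, classifier):
    """Return the confidence that the certificate in the image is a real one.

    extract_image_features: maps the certificate image to a feature vector (assumed).
    classifier: trained binary classifier whose class 1 means "real certificate".
    """
    features = extract_image_features(cert_image)
    # predict_proba returns per-class probabilities for each input sample.
    return float(classifier.predict_proba([features])[0][1])
```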
Illustratively, the step, performed when the computer program instructions are executed by the processor, of judging whether the certificate in the certificate image is a real certificate to obtain the certificate authenticity judgment result comprises: identifying an image block containing certificate identification information from the certificate image; and identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result, wherein the certificate authenticity judgment result is the confidence that the certificate in the certificate image is a real certificate.
Illustratively, during execution of the step, performed when the computer program instructions are executed by the processor, of determining whether the identity of the person to be authenticated is legitimate according to the certificate authentication result, the living body detection result and the additional judgment result, each of the certificate authentication result, the living body detection result and the additional judgment result has a respective weight coefficient.
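As a simple illustration of combining the three results with weight coefficients, the sketch below uses a weighted sum; the weights and the decision threshold are assumptions for the example, not values prescribed by this disclosure:

```python
def identity_is_legal(cert_score, liveness_score, extra_score,
                      weights=(0.4, 0.4, 0.2), threshold=0.7):
    """Each score is a confidence in [0, 1]; the identity is judged legal when
    the weighted sum of the three results reaches the threshold."""
    w_cert, w_live, w_extra = weights
    total = w_cert * cert_score + w_live * liveness_score + w_extra * extra_score
    return total >= threshold
```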
Illustratively, the computer program instructions, when executed by the processor, perform the step of obtaining an image of a document of a person to be authenticated comprising: acquiring a pre-shot image acquired in real time aiming at a certificate of a person to be authenticated under a current shooting condition; evaluating the image attribute of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold value, generating pre-shot prompt information according to the image attribute of the pre-shot image, wherein the pre-shot prompt information is used for prompting a person to be authenticated to adjust the shooting condition of the certificate of the person to be authenticated; and saving the pre-shot image as the certificate image when the evaluation value of the image attribute of the pre-shot image is equal to or greater than a preset evaluation value threshold value.
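One possible image-attribute evaluation for the pre-shot loop is sketched below with a simple sharpness (Laplacian variance) and brightness check in OpenCV; the attribute set, thresholds and prompt texts are illustrative assumptions:

```python
import cv2

def evaluate_preshot(image_bgr, min_sharpness=100.0, brightness_range=(60, 200)):
    """Return (score, hint): score < 1.0 means the shooting condition should be
    adjusted, and hint is the pre-shot prompt information to show the user."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    brightness = float(gray.mean())
    if sharpness < min_sharpness:
        return 0.0, "Image is blurry, please hold the certificate steady"
    if brightness < brightness_range[0]:
        return 0.0, "Too dark, please move to a brighter place"
    if brightness > brightness_range[1]:
        return 0.0, "Too bright or reflective, please avoid glare"
    return 1.0, ""
```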
Illustratively, the step of determining whether the personal identification information is authenticated information to obtain the information authentication result, which is executed by the processor, comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching in the authenticated certificate information database based on the character information in the certificate image to obtain a certificate authentication result; and the certificate authentication result is the confidence level that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the step of text recognition of the certificate image to obtain text information in the certificate image performed by the processor when the computer program instructions are executed by the processor comprises: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
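A sketch of the two-stage character recognition (locating text blocks, then recognising the characters in each block) is given below, using pytesseract purely as an illustrative OCR backend; this disclosure does not prescribe a particular engine, and the morphological localization heuristic is an assumption for the example:

```python
import cv2
import pytesseract

def recognize_certificate_text(cert_image_bgr):
    """Locate candidate text blocks with a simple morphological heuristic and run
    OCR on each block; returns the recognised strings."""
    gray = cv2.cvtColor(cert_image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dilate horizontally so the characters of one line merge into a single block.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 3))
    merged = cv2.dilate(binary, kernel)
    contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    texts = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w < 40 or h < 10:      # skip blocks too small to contain text
            continue
        block = cert_image_bgr[y:y + h, x:x + w]
        text = pytesseract.image_to_string(block, lang="chi_sim+eng").strip()
        if text:
            texts.append(text)
    return texts
```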
Illustratively, the computer program instructions when executed by the processor are further operable to perform the steps, prior to the step of identifying a word in an image block containing the word being performed by the processor: the image block containing the text is corrected to the horizontal state.
Illustratively, after the step of recognizing the characters in the image block containing the characters to obtain the character information in the certificate image, which is performed by the computer program instructions when executed by the processor, the step of recognizing the characters in the certificate image to obtain the character information in the certificate image, which is performed by the computer program instructions when executed by the processor, further comprises: outputting the text information in the certificate image for a user to check; receiving character correction information input by a user; comparing the characters to be corrected indicated by the character correction information with the corresponding characters in the character information in the certificate image; and if the difference between the characters to be corrected and the corresponding characters in the character information in the certificate image, which is indicated by the character correction information, is smaller than a preset difference threshold value, updating the character information in the certificate image by using the character correction information.
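The correction check can, for instance, be realised with an edit-distance style comparison; in the minimal sketch below the distance measure and the difference threshold of 2 characters are illustrative assumptions:

```python
from difflib import SequenceMatcher

def apply_user_correction(recognized, corrected, max_diff=2):
    """Accept the user's corrected text only when it differs from the recognised
    text by fewer than max_diff characters, as a guard against arbitrary edits."""
    matcher = SequenceMatcher(None, recognized, corrected)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    diff = max(len(recognized), len(corrected)) - matched
    return corrected if diff < max_diff else recognized
```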
Illustratively, the step of performing a live detection of the person to be authenticated by the computer program instructions for execution when executed by the processor to obtain a live detection result comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction or not so as to obtain a living body detection result.
Illustratively, the step of performing a live body test on the person to be authenticated to obtain a live body test result, when the computer program instructions are executed by the processor, further comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; capturing a skin area image before a person to be authenticated executes a living body action and a skin area image after the person to be authenticated executes the living body action from the acquired face image; and inputting the skin area image before the living body action is executed by the person to be authenticated and the skin area image after the living body action is executed into the skin elasticity classifier to obtain a living body detection result.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: acquiring a face image before a real person performs a living body action and a face image after the real person performs the living body action, and acquiring a face image before a false person performs the living body action and a face image after the false person performs the living body action; extracting, from the face image before the real person performs the living body action and the face image after the real person performs the living body action, a skin area image before the living body action and a skin area image after the living body action as positive sample images; extracting, from the face image before the false person performs the living body action and the face image after the false person performs the living body action, a skin area image before the living body action and a skin area image after the living body action as negative sample images; and training a classifier model by using the positive sample images and the negative sample images to obtain a skin elasticity classifier.
Illustratively, the step, performed by the processor, of capturing, from the acquired face images, a skin area image of the person to be authenticated before the living body action is performed and a skin area image after the living body action is performed comprises: selecting a face image before the person to be authenticated performs the living body action and a face image after the person to be authenticated performs the living body action from the collected face images; positioning the human face in the face image before the living body action is performed and the human face in the face image after the living body action is performed by using the face detection model; positioning key points of the human face in the face image before the living body action is performed and of the human face in the face image after the living body action is performed by using the face key point positioning model; and performing region division on the human face in the face image before the living body action is performed and the human face in the face image after the living body action is performed according to the face positions and the key point positions obtained through positioning, so as to obtain a skin region image before the person to be authenticated performs the living body action and a skin region image after the living body action is performed.
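An end-to-end sketch of this skin-elasticity branch is given below, assuming hypothetical `face_detector`, `keypoint_locator` and `skin_elasticity_classifier` objects; the cheek-region cropping rule derived from the key points is an illustrative way of dividing the skin regions, not the specific division of this disclosure:

```python
def crop_skin_region(face_image, keypoints):
    """Crop an illustrative cheek patch below the eyes and above the mouth,
    using named key points {'left_eye', 'right_eye', 'mouth'} (assumed format)."""
    left_eye, right_eye, mouth = keypoints["left_eye"], keypoints["right_eye"], keypoints["mouth"]
    top = int(max(left_eye[1], right_eye[1]))
    bottom = int(mouth[1])
    left, right = int(left_eye[0]), int(right_eye[0])
    return face_image[top:bottom, left:right]

def skin_elasticity_liveness(frame_before, frame_after,
                             face_detector, keypoint_locator,
                             skin_elasticity_classifier):
    """Return a liveness confidence from the skin regions before and after the action."""
    patches = []
    for frame in (frame_before, frame_after):
        box = face_detector(frame)                 # assumed to return (x1, y1, x2, y2)
        face = frame[box[1]:box[3], box[0]:box[2]]
        keypoints = keypoint_locator(face)         # assumed to return a dict of named points
        patches.append(crop_skin_region(face, keypoints))
    # The classifier is assumed to take the pair of skin patches and output the
    # probability that they come from a live face.
    return float(skin_elasticity_classifier(patches[0], patches[1]))
```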
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: acquiring a sample face image, wherein the position of a face and the position of a key point of the face in the sample face image are marked; and carrying out neural network training by using the sample face image to obtain a face detection model and a face key point positioning model.
Illustratively, the step of performing a live detection of the person to be authenticated by the computer program instructions for execution when executed by the processor to obtain a live detection result comprises: acquiring a face image acquired by a binocular camera aiming at the face of a person to be authenticated under structured light irradiation; and determining whether the face of the person to be authenticated belongs to the living body according to the face image so as to obtain a living body detection result.
Illustratively, the step of performing a live detection of the person to be authenticated by the computer program instructions for execution when executed by the processor to obtain a live detection result comprises: step S110: acquiring one or more illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images so as to obtain an illumination living body verification result; and step S130: determining whether the person to be authenticated passes the living body verification based on at least the illumination living body verification result to obtain a living body detection result.
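A simplified sketch of checking the light-reflection characteristics in step S120: for a real, diffusely reflecting face, the mean face brightness across the illumination images should track the known detection-light pattern. The correlation test and the threshold below are illustrative assumptions; this disclosure does not fix a particular measure:

```python
import numpy as np

def illumination_liveness(face_regions_gray, light_intensity_pattern, min_corr=0.6):
    """face_regions_gray: list of grayscale face crops (NumPy arrays), one per
    illumination image; light_intensity_pattern: known relative intensity of the
    detection light at the moment each image was captured."""
    response = np.array([float(region.mean()) for region in face_regions_gray])
    pattern = np.asarray(light_intensity_pattern, dtype=np.float64)
    if response.std() < 1e-6 or pattern.std() < 1e-6:
        return False   # face brightness did not react to the changing light at all
    corr = float(np.corrcoef(response, pattern)[0, 1])
    return corr >= min_corr
```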
Illustratively, the pattern of the detection light is changed at least once during the illumination of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments in time during the illumination of the face of the person to be authenticated.
For example, in the process of irradiating the face of the person to be authenticated, the pattern of the detection light is changed randomly or in a preset manner.
Illustratively, the step of performing a live detection of the person to be authenticated by the computer program instructions for execution when executed by the processor to obtain a live detection result comprises: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to execute a corresponding action; step S150: acquiring a plurality of action images acquired aiming at the face of the person to be authenticated; step S160: detecting an action performed by the person to be authenticated based on the plurality of action images; and step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction so as to obtain an action living body verification result; step S130 for execution by the processor when the computer program instructions are executed by the processor comprises: and determining whether the person to be authenticated passes the living body verification or not based on the illumination living body verification result and the action living body verification result so as to obtain a living body detection result.
Illustratively, the step S170 for execution by the processor when the computer program instructions are executed comprises: if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is detected based on a plurality of action images acquired within a period of time not longer than a first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is not detected based on a plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
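A sketch of this time-bounded action verification is given below, assuming a hypothetical `detect_action(frames)` helper that returns the action it finds in the collected images and a `capture_frame()` camera callback; the polling interval and default time limit are illustrative:

```python
import time

def action_liveness(capture_frame, detect_action, instructed_action,
                    first_preset_time=10.0, poll_interval=0.2):
    """Collect action images until the instructed action is detected or the first
    preset time elapses; returns True only if the action is detected in time."""
    frames = []
    deadline = time.monotonic() + first_preset_time
    while time.monotonic() < deadline:
        frames.append(capture_frame())
        if detect_action(frames) == instructed_action:
            return True    # an action matching the instruction was detected in time
        time.sleep(poll_interval)
    return False           # no matching action within the first preset time
```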
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: counting once in the process of executing step S140 to step S170 each time to obtain the action verification times; after step S170, the computer program instructions when executed by the processor are further operable to perform the steps of: if the action living body verification result indicates that the face of the person to be authenticated does not belong to the living body, outputting first error information, and judging whether the action verification frequency reaches a first time threshold value, if the action verification frequency reaches the first time threshold value, turning to step S130, if the action verification frequency does not reach the first time threshold value, returning to step S140 or returning to step S110 in the case that step S110 is executed before step S140, wherein the first error information is used for prompting that the living body verification aiming at the person to be authenticated fails.
Illustratively, before the step S110 for execution by the processor, the computer program instructions are further for performing the following steps when executed by the processor: step S108: judging whether the image acquisition condition of the person to be authenticated meets a preset requirement, if so, turning to the step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle between the person to be authenticated and the image acquisition device.
Illustratively, the computer program instructions, when executed by the processor, are further for performing the following steps, before step S108 for execution of the computer program instructions when executed by the processor, or simultaneously with step S108 for execution of the computer program instructions when executed by the processor: step S106: and outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device directly and to be close to the image acquisition device.
Illustratively, the step S106 for which the computer program instructions are executed by the processor comprises: the first prompt information is output in one or more of a voice form, an image form and a text form.
Illustratively, the step S108 for execution by the computer program instructions when executed by the processor comprises: acquiring a real-time image acquired by a face of a person to be authenticated; outputting a preset area for calibrating an image acquisition condition and a human face area in a real-time image in real time for displaying; and judging whether the image acquisition condition meets a preset requirement or not according to the face area detected in the real-time image, if the face area is located in the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the real-time image is not greater than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: and if the proportion of the face area in the real-time image is not more than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, the step S108 for execution by the computer program instructions when executed by the processor comprises: acquiring a real-time image acquired by a face of a person to be authenticated; outputting a preset area for calibrating an image acquisition condition and a human face area in a real-time image in real time for displaying; and judging whether the image acquisition condition meets a preset requirement or not according to the face area, if the face area is located in the preset area and the proportion of the face area in the preset area is greater than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the preset area is not greater than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: and if the proportion of the face area in the preset area is not more than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: judging the relative position relation between the face area and a preset area in real time; and outputting third acquisition prompt information in real time based on the relative position relationship between the face region and the preset region to prompt that the relative position relationship between the person to be authenticated and the image acquisition device is changed so that the face region is close to the preset region.
Illustratively, the step S108 for execution by the computer program instructions when executed by the processor comprises: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state or not according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, and otherwise, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: counting once in the process of executing the steps S110 to S120 each time to obtain the illumination verification times; after step S120 for execution by the processor when the computer program instructions are executed by the processor, the computer program instructions are further for performing the steps of: and if the illumination living body verification result shows that the face of the person to be authenticated does not belong to the living body, outputting second error information, judging whether the illumination verification times reach a second time threshold value, if so, turning to the step S130, and if not, returning to the step S108 or returning to the step S110, wherein the second error information is used for prompting that the living body verification aiming at the person to be authenticated fails.
Illustratively, before step S110 for execution when the computer program instructions are executed by the processor or during steps S110 and S120 for execution when the computer program instructions are executed by the processor, the computer program instructions are further operable by the processor to perform the steps of: and outputting second prompt information, wherein the second prompt information is used for prompting that the person to be authenticated keeps still within a second preset time.
Illustratively, the second prompt information is countdown information corresponding to the second preset time.
The detection light is obtained, for example, by dynamically changing the color and/or position of the light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of light emitted by the display screen is dynamically changed by changing the content displayed on the display screen to emit detection light to the face of the person to be authenticated.
According to another aspect of the present invention there is provided a storage medium having stored thereon program instructions operable when executed to perform the steps of: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the personal identification information is one or more of a certificate number, a name, a certificate face, a live captured face, a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live captured face.
Illustratively, before obtaining the personal identification information of the person to be authenticated, the program instructions are further operable to perform the following steps: outputting indication information for indicating a person to be authenticated to provide predetermined types of person information; the personal identification information is personnel information provided by a person to be authenticated or obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live captured face, and the step, performed when the program instructions are run, of obtaining the personal identification information of the person to be authenticated comprises: acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a live captured face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
Illustratively, the step of obtaining personal identification information of a person to be authenticated, which the program instructions are operable to perform when executed, comprises: acquiring a certificate image of a person to be authenticated, wherein the information authentication result is a certificate authentication result; the step of performing the live body detection on the person to be authenticated to obtain the live body detection result, which is executed when the program instructions are executed, comprises the following steps: acquiring a face image of a person to be authenticated; and performing living body detection by using the face image to obtain a living body detection result.
Illustratively, before the step of determining whether the identity of the person to be authenticated is legitimate at least according to the information authentication result and the living body detection result, the program instructions are further operable to perform the following steps: performing additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; the step of determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, which is executed when the program instruction is operated, comprises the following steps: and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judgment operation includes a certificate authenticity judgment operation and/or a face consistency judgment operation, and the additional judgment result includes a certificate authenticity judgment result and/or a face consistency judgment result, and the certificate authenticity judgment operation includes: judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result; the human face consistency judging operation comprises the following steps: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judgment result.
Illustratively, the step, performed when the program instructions are run, of obtaining the certificate face of the person to be authenticated from the certificate image comprises: detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, the step, performed when the program instructions are run, of obtaining the certificate face of the person to be authenticated from the certificate image comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from the authenticated certificate information database based on the character information in the certificate image; and determining the certificate face in the searched matched certificate information as the certificate face of the person to be authenticated.
Illustratively, the step of determining whether the certificate in the certificate image is a real certificate to obtain the certificate authenticity determination result includes: extracting image characteristics of the certificate image; inputting the image characteristics into the trained certificate classifier to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Illustratively, the step of determining whether the certificate in the certificate image is a real certificate to obtain the certificate authenticity determination result includes: identifying an image block containing certificate identification information from a certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Illustratively, during execution of the step of determining whether the identity of the person to be authenticated is legitimate according to the certificate authentication result, the liveness detection result and the additional determination result, which is performed by the program instructions at runtime, each of the certificate authentication result, the liveness detection result and the additional determination result has a respective weight coefficient.
Illustratively, the step of capturing an image of a document of a person to be authenticated performed by the program instructions when executed comprises: acquiring a pre-shot image acquired in real time aiming at a certificate of a person to be authenticated under a current shooting condition; evaluating the image attribute of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold value, generating pre-shot prompt information according to the image attribute of the pre-shot image, wherein the pre-shot prompt information is used for prompting a person to be authenticated to adjust the shooting condition of the certificate of the person to be authenticated; and saving the pre-shot image as the certificate image when the evaluation value of the image attribute of the pre-shot image is equal to or greater than a preset evaluation value threshold value.
Illustratively, the step of determining whether the personal identification information is authenticated information to obtain the information authentication result, which is executed by the program instructions when running, comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching in the authenticated certificate information database based on the character information in the certificate image to obtain a certificate authentication result; and the certificate authentication result is the confidence level that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the step of text recognition of the certificate image to obtain text information in the certificate image, which the program instructions are operable to perform at runtime, comprises: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
Illustratively, prior to the step of identifying a word in an image block containing the word, which the program instructions are operable to perform at runtime, the program instructions are further operable to perform the steps of: the image block containing the text is corrected to the horizontal state.
Illustratively, after the step of recognizing the characters in the image block containing the characters to obtain the character information in the certificate image, the step of recognizing the characters in the certificate image to obtain the character information in the certificate image further comprises: outputting the text information in the certificate image for a user to check; receiving character correction information input by a user; comparing the characters to be corrected indicated by the character correction information with the corresponding characters in the character information in the certificate image; and if the difference between the characters to be corrected and the corresponding characters in the character information in the certificate image, which is indicated by the character correction information, is smaller than a preset difference threshold value, updating the character information in the certificate image by using the character correction information.
Illustratively, the step of performing a live body test on the person to be authenticated to obtain a live body test result, which is executed by the program instructions when running, comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction or not so as to obtain a living body detection result.
Illustratively, the step of performing a live body test on the person to be authenticated to obtain a live body test result, which is executed by the program instructions when running, comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; capturing a skin area image before a person to be authenticated executes a living body action and a skin area image after the person to be authenticated executes the living body action from the acquired face image; and inputting the skin area image before the living body action is executed by the person to be authenticated and the skin area image after the living body action is executed into the skin elasticity classifier to obtain a living body detection result.
Illustratively, the program instructions are further operable when executed to perform the steps of: acquiring a face image before a real person performs a living body action and a face image after the real person performs the living body action, and acquiring a face image before a false person performs the living body action and a face image after the false person performs the living body action; extracting, from the face image before the real person performs the living body action and the face image after the real person performs the living body action, a skin area image before the living body action and a skin area image after the living body action as positive sample images; extracting, from the face image before the false person performs the living body action and the face image after the false person performs the living body action, a skin area image before the living body action and a skin area image after the living body action as negative sample images; and training a classifier model by using the positive sample images and the negative sample images to obtain a skin elasticity classifier.
Illustratively, the step, performed when the program instructions are run, of capturing, from the acquired face images, a skin area image of the person to be authenticated before the living body action is performed and a skin area image after the living body action is performed comprises: selecting a face image before the person to be authenticated performs the living body action and a face image after the person to be authenticated performs the living body action from the collected face images; positioning the human face in the face image before the living body action is performed and the human face in the face image after the living body action is performed by using the face detection model; positioning key points of the human face in the face image before the living body action is performed and of the human face in the face image after the living body action is performed by using the face key point positioning model; and performing region division on the human face in the face image before the living body action is performed and the human face in the face image after the living body action is performed according to the face positions and the key point positions obtained through positioning, so as to obtain a skin region image before the person to be authenticated performs the living body action and a skin region image after the living body action is performed.
Illustratively, the program instructions are further operable when executed to perform the steps of: acquiring a sample face image, wherein the position of a face and the position of a key point of the face in the sample face image are marked; and carrying out neural network training by using the sample face image to obtain a face detection model and a face key point positioning model.
Illustratively, the step of performing a live body test on the person to be authenticated to obtain a live body test result, which is executed by the program instructions when running, comprises: acquiring a face image acquired by a binocular camera aiming at the face of a person to be authenticated under structured light irradiation; and determining whether the face of the person to be authenticated belongs to the living body according to the face image so as to obtain a living body detection result.
Illustratively, the step of performing a live body test on the person to be authenticated to obtain a live body test result, which is executed by the program instructions when running, comprises: step S110: acquiring one or more illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images so as to obtain an illumination living body verification result; and step S130: determining whether the person to be authenticated passes the living body verification based on at least the illumination living body verification result to obtain a living body detection result.
Illustratively, the pattern of the detection light is changed at least once during the illumination of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments in time during the illumination of the face of the person to be authenticated.
For example, in the process of irradiating the face of the person to be authenticated, the pattern of the detection light is changed randomly or in a preset manner.
Illustratively, the step of performing a live body test on the person to be authenticated to obtain a live body test result, which is executed by the program instructions when running, comprises: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to execute a corresponding action; step S150: acquiring a plurality of action images acquired aiming at the face of the person to be authenticated; step S160: detecting an action performed by the person to be authenticated based on the plurality of action images; and step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction so as to obtain an action living body verification result; step S130 for execution of the program instructions when executed includes: and determining whether the person to be authenticated passes the living body verification or not based on the illumination living body verification result and the action living body verification result so as to obtain a living body detection result.
Illustratively, the step S170 for execution of the program instructions when running includes: if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is detected based on a plurality of action images acquired within a period of time not longer than a first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is not detected based on a plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
Illustratively, the program instructions are further operable when executed to perform the steps of: counting once in the process of executing step S140 to step S170 each time to obtain the action verification times; after step S170, in which the program instructions are used to execute when running, the program instructions are further used to execute the following steps when running: if the action living body verification result indicates that the face of the person to be authenticated does not belong to the living body, outputting first error information, and judging whether the action verification frequency reaches a first time threshold value, if the action verification frequency reaches the first time threshold value, turning to step S130, if the action verification frequency does not reach the first time threshold value, returning to step S140 or returning to step S110 in the case that step S110 is executed before step S140, wherein the first error information is used for prompting that the living body verification aiming at the person to be authenticated fails.
Illustratively, prior to step S110, in which the program instructions are for execution at runtime, the program instructions are further for performing the following steps at runtime: step S108: judging whether the image acquisition condition of the person to be authenticated meets a preset requirement, if so, turning to the step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle between the person to be authenticated and the image acquisition device.
Illustratively, prior to or simultaneously with step S108 for execution of the program instructions when executed, the program instructions are further operable to perform the steps of: step S106: and outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device directly and to be close to the image acquisition device.
Illustratively, step S106 for execution of the program instructions when executed includes: the first prompt information is output in one or more of a voice form, an image form and a text form.
Illustratively, the step S108 for execution of the program instructions when running comprises: acquiring a real-time image acquired by a face of a person to be authenticated; outputting a preset area for calibrating an image acquisition condition and a human face area in a real-time image in real time for displaying; and judging whether the image acquisition condition meets a preset requirement or not according to the face area detected in the real-time image, if the face area is located in the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the real-time image is not greater than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the program instructions are further operable when executed to perform the steps of: and if the proportion of the face area in the real-time image is not more than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, the step S108 for execution of the program instructions when running comprises: acquiring a real-time image acquired by a face of a person to be authenticated; outputting a preset area for calibrating an image acquisition condition and a human face area in a real-time image in real time for displaying; and judging whether the image acquisition condition meets a preset requirement or not according to the face area, if the face area is located in the preset area and the proportion of the face area in the preset area is greater than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the preset area is not greater than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the program instructions are further operable when executed to perform the steps of: and if the proportion of the face area in the preset area is not more than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, the program instructions are further operable when executed to perform the steps of: judging the relative position relation between the face area and a preset area in real time; and outputting third acquisition prompt information in real time based on the relative position relationship between the face region and the preset region to prompt that the relative position relationship between the person to be authenticated and the image acquisition device is changed so that the face region is close to the preset region.
Illustratively, the step S108 for execution of the program instructions when running comprises: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state or not according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, and otherwise, determining that the image acquisition condition does not meet the preset requirement.
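A minimal sketch of this posture-based variant of step S108 is given below; it assumes the posture information is a raw accelerometer reading and that "vertically placed" means the measured gravity vector stays within a small tilt angle of the device's y axis, neither of which is specified in this disclosure.

```python
import math

def is_vertically_placed(accel_xyz, max_tilt_deg: float = 15.0) -> bool:
    """Step S108, posture variant (sketch): decide whether the image acquisition
    device is vertically placed from an accelerometer reading in m/s^2.

    When the device stands upright, gravity acts mostly along its y axis, so the
    angle between the measured gravity vector and the y axis is small."""
    ax, ay, az = accel_xyz
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0:
        return False  # no usable reading: treat as not meeting the requirement
    tilt = math.degrees(math.acos(min(1.0, abs(ay) / g)))
    return tilt <= max_tilt_deg
```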
Illustratively, the program instructions are further operable when executed to perform the following steps: counting once each time steps S110 to S120 are executed, so as to obtain the number of illumination verifications; after step S120, the program instructions are further configured to perform the following steps when executed: if the illumination living body verification result shows that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the number of illumination verifications reaches a second times threshold; if so, turning to step S130, and if not, returning to step S108 or returning to step S110, wherein the second error information is used for prompting that the living body verification for the person to be authenticated has failed.
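The retry logic around steps S110 to S120 can be sketched as follows; the callback name, the printed error text and the example value of the second times threshold are placeholders for illustration only.

```python
def illumination_liveness_with_retries(run_steps_s110_s120, second_times_threshold: int = 3) -> bool:
    """Sketch: each pass through steps S110-S120 increments the illumination
    verification count; on failure the flow retries until the count reaches the
    second times threshold, then falls through to step S130."""
    count = 0
    while True:
        count += 1
        is_live = run_steps_s110_s120()  # hypothetical callback returning the illumination liveness result
        if is_live:
            return True   # proceed to step S130 with a passing illumination result
        print("Liveness verification failed, please try again.")  # placeholder for the second error information
        if count >= second_times_threshold:
            return False  # proceed to step S130 with a failing illumination result
```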
Illustratively, before step S110 for execution of the program instructions when running or during steps S110 and S120 for execution of the program instructions when running, the program instructions are further configured to perform the following steps when running: and outputting second prompt information, wherein the second prompt information is used for prompting that the person to be authenticated keeps still within a second preset time.
Illustratively, the second prompt information is a countdown message corresponding to the second preset time.
The detection light is obtained, for example, by dynamically changing the color and/or position of the light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of light emitted by the display screen is dynamically changed by changing the content displayed on the display screen to emit detection light to the face of the person to be authenticated.
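One way the display screen could produce such detection light is sketched below: a sequence of full-screen colours is generated so that the light cast on the face changes from frame to frame; the colour palette and the use of a random sequence are assumptions for illustration.

```python
import random

def detection_light_schedule(num_frames: int, seed=None):
    """Sketch: generate the sequence of full-screen colours to display, one per
    frame, so that the detection light emitted towards the face keeps changing."""
    rng = random.Random(seed)
    palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
    schedule = []
    previous = None
    for _ in range(num_frames):
        colour = rng.choice([c for c in palette if c != previous])  # force a change every frame
        schedule.append(colour)
        previous = colour
    return schedule  # each entry is the RGB colour shown full-screen for one frame
```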
According to another aspect of the present invention, an identity authentication apparatus is provided, which includes an information acquisition device, a processor and a memory, wherein the information acquisition device is configured to acquire initial information of a person to be authenticated; the memory has stored therein computer program instructions which, when executed by the processor, are operable to perform the steps of: acquiring personal identification information of a person to be authenticated, wherein the personal identification information is initial information or is acquired based on the initial information; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the information acquisition device includes an image acquisition device and/or an input device.
Illustratively, the personal identification information is one or more of a certificate number, a name, a certificate face, a field captured face, a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a field captured face.
Illustratively, before the obtaining of the personal identification information of the person to be authenticated is performed, the computer program instructions are further configured to, when executed by the processor, perform the steps of: outputting indication information for indicating a person to be authenticated to provide predetermined types of person information; the personal identification information is personnel information provided by a person to be authenticated or obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live captured face, the computer program instructions being executable by the processor to perform the steps of obtaining personal identification information of the person to be authenticated including: acquiring initial information of a person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a field collected face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
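As a hedged example of such a transformation, the snippet below hashes the initial information with SHA-256; the disclosure does not name the predetermined algorithm, so the choice of hash and the sample certificate number are purely illustrative.

```python
import hashlib

def transform_identity_info(value: str) -> str:
    """Sketch of obtaining a transformed value of the initial information.
    SHA-256 stands in for the unspecified 'predetermined algorithm'."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# e.g. the transformed value of a certificate number (hypothetical number)
token = transform_identity_info("110101199003071234")
```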
Exemplarily, the information acquisition device comprises an image acquisition device, and the image acquisition device is further used for acquiring a certificate image and a face image of the person to be authenticated; the step of obtaining the personal identification information of the person to be authenticated, performed when the computer program instructions are executed by the processor, comprises: acquiring the certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result; the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: acquiring the face image of the person to be authenticated; and performing living body detection by using the face image to obtain the living body detection result.
The identity authentication device further comprises an image acquisition device, wherein the image acquisition device is used for acquiring a certificate image and a face image of the person to be authenticated; the step of obtaining the personal identification information of the person to be authenticated, performed when the computer program instructions are executed by the processor, comprises: acquiring the certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result; the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: acquiring the face image of the person to be authenticated; and performing living body detection by using the face image to obtain the living body detection result.
Illustratively, before the step of determining whether the identity of the person to be authenticated is legitimate at least according to the information authentication result and the living body detection result, which is performed by the processor, the computer program instructions are further configured to perform the following steps when executed by the processor: performing additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; the step of determining whether the identity of the person to be authenticated is legitimate at least according to the information authentication result and the living body detection result, which is executed by the processor, of the computer program instructions comprises: and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judgment operation includes a certificate authenticity judgment operation and/or a face consistency judgment operation, and the additional judgment result includes a certificate authenticity judgment result and/or a face consistency judgment result, and the certificate authenticity judgment operation includes: judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result; the human face consistency judging operation comprises the following steps: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judgment result.
Illustratively, the computer program instructions, when executed by the processor, are further operable to perform the step of obtaining a face of a document of a person to be authenticated from the document image comprising: and detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, the step of acquiring a face of a document of a person to be authenticated from a document image, the computer program instructions being executable by the processor to perform the steps of: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from the authenticated certificate information database based on the character information in the certificate image; and determining the searched and matched certificate face in the certificate information as the certificate face of the person to be authenticated.
Illustratively, the step of determining whether a document in the document image is a genuine document by the computer program instructions executed by the processor to obtain the document authenticity determination result includes: extracting image characteristics of the certificate image; inputting the image characteristics into the trained certificate classifier to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
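A minimal sketch of such a certificate classifier is shown below, using a logistic regression model whose predicted probability serves as the authenticity confidence; the feature dimensionality and the randomly generated training data are placeholders, not part of this disclosure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a feature vector extracted from a
# certificate image, the label marks whether that certificate is genuine.
features = np.random.rand(200, 64)
labels = np.random.randint(0, 2, size=200)

clf = LogisticRegression(max_iter=1000).fit(features, labels)

def certificate_authenticity_confidence(image_features: np.ndarray) -> float:
    """Return the confidence that the certificate in the image is a real certificate."""
    return float(clf.predict_proba(image_features.reshape(1, -1))[0, 1])
```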
Illustratively, the step of determining whether a document in the document image is a genuine document by the computer program instructions executed by the processor to obtain the document authenticity determination result includes: identifying an image block containing certificate identification information from a certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Illustratively, during execution of the step of determining whether the identity of the person to be authenticated is legitimate from the credential authentication result, the liveness detection result and the additional determination result, which is performed by the computer program instructions when executed by the processor, each of the credential authentication result, the liveness detection result and the additional determination result has a respective weight coefficient.
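The weighted combination of the three results might look like the following sketch; the weight coefficients and the decision threshold are illustrative values, since the disclosure does not fix them.

```python
def identity_is_legal(cert_auth_conf: float,
                      liveness_conf: float,
                      additional_conf: float,
                      weights=(0.4, 0.4, 0.2),
                      decision_threshold: float = 0.7) -> bool:
    """Sketch: each result contributes to an overall score according to its
    weight coefficient; the identity is judged legal if the score is high enough."""
    w_cert, w_live, w_extra = weights
    score = w_cert * cert_auth_conf + w_live * liveness_conf + w_extra * additional_conf
    return score >= decision_threshold
```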
Exemplarily, the image acquisition device is further used for acquiring a pre-shot image in real time aiming at the certificate of the person to be authenticated; the computer program instructions, when executed by the processor, perform the steps of obtaining an image of a document of a person to be authenticated comprising: acquiring a pre-shot image which is acquired in real time aiming at the certificate of the person to be authenticated under the current shooting condition; evaluating the image attribute of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold value, generating pre-shot prompt information according to the image attribute of the pre-shot image, wherein the pre-shot prompt information is used for prompting a person to be authenticated to adjust the shooting condition of the certificate of the person to be authenticated; and saving the pre-shot image as the certificate image when the evaluation value of the image attribute of the pre-shot image is equal to or greater than a preset evaluation value threshold value.
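As an illustrative sketch of the pre-shot evaluation loop, the snippet below uses the variance of the Laplacian as a stand-in for the image attribute evaluation value; the metric, the threshold and the prompt text are assumptions, since the disclosure leaves them open.

```python
import cv2

def evaluate_preshot(image_bgr, sharpness_threshold: float = 100.0):
    """Sketch: evaluate one pre-shot frame; return (ok, prompt). If ok is False,
    the prompt tells the person to be authenticated to adjust the shooting
    condition; if ok is True, the frame can be saved as the certificate image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # blur metric used as the evaluation value
    if sharpness < sharpness_threshold:
        return False, "Please hold the certificate steady and avoid blur."  # pre-shot prompt information
    return True, None
```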
Illustratively, the step of determining whether the personal identification information is authenticated information to obtain the information authentication result, which is executed by the processor, comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching in the authenticated certificate information database based on the character information in the certificate image to obtain a certificate authentication result; and the certificate authentication result is the confidence level that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the step of text recognition of the certificate image to obtain text information in the certificate image performed by the processor when the computer program instructions are executed by the processor comprises: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
Illustratively, the computer program instructions when executed by the processor are further operable to perform the steps, prior to the step of identifying a word in an image block containing the word being performed by the processor: the image block containing the text is corrected to the horizontal state.
Illustratively, after the step of recognizing the characters in the image block containing the characters to obtain the character information in the certificate image, which is performed by the computer program instructions when executed by the processor, the step of recognizing the characters in the certificate image to obtain the character information in the certificate image, which is performed by the computer program instructions when executed by the processor, further comprises: outputting the text information in the certificate image for a user to check; receiving character correction information input by a user; comparing the characters to be corrected indicated by the character correction information with the corresponding characters in the character information in the certificate image; and if the difference between the characters to be corrected and the corresponding characters in the character information in the certificate image, which is indicated by the character correction information, is smaller than a preset difference threshold value, updating the character information in the certificate image by using the character correction information.
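The comparison between the characters to be corrected and the recognized characters can be sketched with an edit-distance check, as below; the use of Levenshtein distance and the example difference threshold are assumptions for illustration.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def apply_user_correction(recognized: str, correction: str, max_diff: int = 2) -> str:
    """Accept the user's correction only if it stays close to the recognized text;
    otherwise keep the originally recognized characters."""
    if edit_distance(recognized, correction) < max_diff:
        return correction
    return recognized
```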
Illustratively, the step of performing a live detection of the person to be authenticated by the computer program instructions for execution when executed by the processor to obtain a live detection result comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction or not so as to obtain a living body detection result.
Illustratively, the step of performing a live body test on the person to be authenticated to obtain a live body test result, when the computer program instructions are executed by the processor, further comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; capturing a skin area image before a person to be authenticated executes a living body action and a skin area image after the person to be authenticated executes the living body action from the acquired face image; and inputting the skin area image before the living body action is executed by the person to be authenticated and the skin area image after the living body action is executed into the skin elasticity classifier to obtain a living body detection result.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: acquiring a face image before a real person performs a living body action and a face image after the real person performs the living body action, and acquiring a face image before a false person performs the living body action and a face image after the false person performs the living body action; extracting a face image of a real person before the real person performs the living body action and a skin area image of the real person after the real person performs the living body action from the face image of the real person before the real person performs the living body action and the face image of the real person after the real person performs the living body action as a positive sample image; extracting a face image before the live action of the false person and a skin area image after the live action of the false person from the face image before the live action of the false person and the face image after the live action of the false person as negative sample images; and training a classifier model by using the positive sample image and the negative sample image to obtain a skin elasticity classifier.
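A hedged sketch of training such a skin elasticity classifier is given below, using a support vector machine over concatenated before/after skin crops; the feature encoding and the classifier type are assumptions, since the disclosure only requires that a classifier model be trained on the positive and negative sample images.

```python
import numpy as np
from sklearn.svm import SVC

def make_feature(skin_before: np.ndarray, skin_after: np.ndarray) -> np.ndarray:
    """Concatenate the two skin crops (assumed already resized to a common shape)
    into one flattened, normalized feature vector."""
    return np.concatenate([skin_before.ravel(), skin_after.ravel()]).astype(np.float32) / 255.0

def train_skin_elasticity_classifier(positive_pairs, negative_pairs) -> SVC:
    """positive_pairs: (before, after) skin crops from real persons;
    negative_pairs: (before, after) skin crops from fake persons (photos, screens, masks)."""
    X = [make_feature(b, a) for b, a in positive_pairs + negative_pairs]
    y = [1] * len(positive_pairs) + [0] * len(negative_pairs)
    return SVC(probability=True).fit(np.stack(X), y)
```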
Illustratively, the step of capturing, from the collected face images, a skin area image of the person to be authenticated before the living body action is performed and a skin area image after the living body action is performed, which is executed by the processor, comprises: selecting, from the collected face images, a face image before the person to be authenticated executes the living body action and a face image after the person to be authenticated executes the living body action; positioning, by using a face detection model, the face in the face image before the living body action is executed and the face in the face image after the living body action is executed; positioning, by using a face key point positioning model, the key points of the face in the face image before the living body action is executed and of the face in the face image after the living body action is executed; and performing region division on the face in the face image before the living body action is executed and the face in the face image after the living body action is executed according to the face positions and the key point positions obtained through positioning, so as to obtain the skin region image before the person to be authenticated performs the living body action and the skin region image after the person to be authenticated performs the living body action.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: acquiring a sample face image, wherein the position of a face and the position of a key point of the face in the sample face image are marked; and carrying out neural network training by using the sample face image to obtain a face detection model and a face key point positioning model.
Exemplarily, the identity authentication device further comprises a structured light source and a binocular camera, wherein the structured light source is used for emitting structured light to the face of the person to be authenticated; the binocular camera is used for collecting a face image of the face of the person to be authenticated under the structured light irradiation; the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: acquiring the face image collected by the binocular camera for the face of the person to be authenticated under the structured light irradiation; and determining whether the face of the person to be authenticated belongs to a living body according to the face image, so as to obtain the living body detection result.
Exemplarily, the information acquisition device comprises an image acquisition device, and the identity authentication device further comprises a light source, wherein the light source is used for emitting detection light to the face of the person to be authenticated; the image acquisition device is also used for acquiring one or more illumination images of the face of the person to be authenticated under the illumination of the detection light; the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: step S110: acquiring one or more illumination images collected for the face of the person to be authenticated under the irradiation of the detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination living body verification result; and step S130: determining whether the person to be authenticated passes the living body verification based on at least the illumination living body verification result, so as to obtain the living body detection result.
The identity authentication device further comprises a light source and an image acquisition device, wherein the light source is used for emitting detection light to the face of the person to be authenticated; the image acquisition device is also used for acquiring one or more illumination images of the face of the person to be authenticated under the illumination of the detection light; the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: step S110: acquiring one or more illumination images collected for the face of the person to be authenticated under the irradiation of the detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination living body verification result; and step S130: determining whether the person to be authenticated passes the living body verification based on at least the illumination living body verification result, so as to obtain the living body detection result.
Illustratively, the pattern of the detection light is changed at least once during the illumination of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments during the illumination of the face of the person to be authenticated.
For example, in the process of irradiating the face of the person to be authenticated, the pattern of the detection light is changed randomly or in a manner set in advance.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to execute a corresponding action; step S150: acquiring a plurality of action images collected for the face of the person to be authenticated; step S160: detecting the action performed by the person to be authenticated based on the plurality of action images; and step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action living body verification result; step S130, performed when the computer program instructions are executed by the processor, comprises: determining whether the person to be authenticated passes the living body verification based on the illumination living body verification result and the action living body verification result, so as to obtain the living body detection result; the image acquisition device is also used for acquiring the plurality of action images.
Illustratively, the step S170 for execution by the processor when the computer program instructions are executed comprises: if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is detected based on a plurality of action images acquired within a period of time not longer than a first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is not detected based on a plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
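Step S170's timing rule can be sketched as follows; the frame-capture and action-detection callbacks and the example value of the first preset time are placeholders for illustration.

```python
import time

def action_liveness_check(capture_frame, detect_action, expected_action: str,
                          first_preset_seconds: float = 10.0) -> bool:
    """Sketch of step S170: keep collecting action images; the face is judged to
    belong to a living body only if the instructed action is detected within the
    first preset time."""
    deadline = time.monotonic() + first_preset_seconds
    while time.monotonic() < deadline:
        frame = capture_frame()                     # one action image from the camera (hypothetical callback)
        if detect_action(frame) == expected_action:  # hypothetical action detector
            return True                             # matching action seen in time: live
    return False                                    # timed out: not judged as live
```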
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: counting once in the process of executing step S140 to step S170 each time to obtain the action verification times; after step S170, the computer program instructions when executed by the processor are further operable to perform the steps of: if the action living body verification result indicates that the face of the person to be authenticated does not belong to the living body, outputting first error information, and judging whether the action verification frequency reaches a first time threshold value, if the action verification frequency reaches the first time threshold value, turning to step S130, if the action verification frequency does not reach the first time threshold value, returning to step S140 or returning to step S110 in the case that step S110 is executed before step S140, wherein the first error information is used for prompting that the living body verification aiming at the person to be authenticated fails.
Illustratively, before the step S110 for execution by the processor, the computer program instructions are further for performing the following steps when executed by the processor: step S108: judging whether the image acquisition condition of the person to be authenticated meets a preset requirement, if so, turning to the step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle between the person to be authenticated and the image acquisition device.
Illustratively, before step S108 or simultaneously with step S108, the computer program instructions, when executed by the processor, are further configured to perform the following step: step S106: outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device directly and to move close to the image acquisition device.
Illustratively, the step S106 for which the computer program instructions are executed by the processor comprises: the first prompt information is output in one or more of a voice form, an image form and a text form.
Illustratively, the identity authentication device further comprises a display device, wherein the step S108 for execution by the processor of the computer program instructions comprises: acquiring a real-time image acquired by a face of a person to be authenticated; outputting a preset area for calibrating an image acquisition condition and a human face area in a real-time image in real time for displaying; judging whether the image acquisition condition meets a preset requirement or not according to the face area detected in the real-time image, if the face area is located in the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the real-time image is not greater than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement; the image acquisition device is also used for acquiring the real-time image; the display device is used for displaying a preset area and a human face area.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: and if the proportion of the face area in the real-time image is not more than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, the identity authentication device further comprises a display device, wherein step S108, performed when the computer program instructions are executed by the processor, comprises: acquiring a real-time image collected for the face of the person to be authenticated; outputting, in real time for display, a preset area for calibrating the image acquisition condition and the face area in the real-time image; judging whether the image acquisition condition meets the preset requirement according to the face area: if the face area is located in the preset area and the proportion of the face area in the preset area is greater than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the preset area is not greater than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement; the image acquisition device is also used for acquiring the real-time image; the display device is used for displaying the preset area and the face area.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: and if the proportion of the face area in the preset area is not more than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: judging the relative position relation between the face area and a preset area in real time; and outputting third acquisition prompt information in real time based on the relative position relationship between the face region and the preset region to prompt that the relative position relationship between the person to be authenticated and the image acquisition device is changed so that the face region is close to the preset region.
Illustratively, the step S108 for execution by the computer program instructions when executed by the processor comprises: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state or not according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, and otherwise, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the computer program instructions, when executed by the processor, are further configured to perform the following steps: counting once each time steps S110 to S120 are executed, so as to obtain the number of illumination verifications; after step S120, the computer program instructions, when executed by the processor, are further configured to perform the following steps: if the illumination living body verification result shows that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the number of illumination verifications reaches a second times threshold; if so, turning to step S130, and if not, returning to step S108 or returning to step S110, wherein the second error information is used for prompting that the living body verification for the person to be authenticated has failed.
Illustratively, before step S110 for execution when the computer program instructions are executed by the processor or during steps S110 and S120 for execution when the computer program instructions are executed by the processor, the computer program instructions are further operable by the processor to perform the steps of: and outputting second prompt information, wherein the second prompt information is used for prompting that the person to be authenticated keeps still within a second preset time.
Illustratively, the second prompt information is a countdown message corresponding to the second preset time.
The detection light is obtained, for example, by dynamically changing the color and/or position of the light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of light emitted by the display screen is dynamically changed by changing the content displayed on the display screen to emit detection light to the face of the person to be authenticated.
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising: the information acquisition device is used for acquiring initial information of a person to be authenticated; and the transmission device is used for sending the initial information to the server and receiving the authentication information which is obtained by the server and is about whether the identity of the person to be authenticated is legal or not from the server in the following way: acquiring personal identification information of a person to be authenticated, wherein the personal identification information is initial information or is acquired based on the initial information; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the information acquisition device includes an image acquisition device and/or an input device.
According to another aspect of the present invention, there is provided an identity authentication apparatus, comprising a transmission device, a processor and a memory, wherein the transmission device is configured to receive initial information of a person to be authenticated from a client, and send authentication information about whether the identity of the person to be authenticated is legitimate to the client; the memory has stored therein computer program instructions which, when executed by the processor, are operable to perform the steps of: acquiring personal identification information of a person to be authenticated, wherein the personal identification information is initial information or is acquired based on the initial information; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the personal identification information is one or more of a certificate number, a name, a certificate face, a field captured face, a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a field captured face.
Illustratively, before the obtaining of the personal identification information of the person to be authenticated is performed, the computer program instructions are further configured to, when executed by the processor, perform the steps of: outputting indication information for indicating a person to be authenticated to provide predetermined types of person information; the transmission device is also used for sending the indication information to the client to be output by the client; the personal identification information is personnel information provided by a person to be authenticated or obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live captured face, the computer program instructions being executable by the processor to perform the steps of obtaining personal identification information of the person to be authenticated including: acquiring initial information of a person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a field collected face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
Illustratively, the step of obtaining the personal identification information of the person to be authenticated, performed when the computer program instructions are executed by the processor, comprises: acquiring a certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result; the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: acquiring a face image of the person to be authenticated; and performing living body detection by using the face image to obtain the living body detection result.
Illustratively, before the step of determining whether the identity of the person to be authenticated is legitimate at least according to the information authentication result and the living body detection result, which is performed by the processor, the computer program instructions are further configured to perform the following steps when executed by the processor: performing additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; the step of determining whether the identity of the person to be authenticated is legitimate at least according to the information authentication result and the living body detection result, which is executed by the processor, of the computer program instructions comprises: and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judgment operation includes a certificate authenticity judgment operation and/or a face consistency judgment operation, and the additional judgment result includes a certificate authenticity judgment result and/or a face consistency judgment result, and the certificate authenticity judgment operation includes: judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result; the human face consistency judging operation comprises the following steps: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judgment result.
Illustratively, the computer program instructions, when executed by the processor, are further operable to perform the step of obtaining a face of a document of a person to be authenticated from the document image comprising: and detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, the step of acquiring a face of a document of a person to be authenticated from a document image, the computer program instructions being executable by the processor to perform the steps of: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from the authenticated certificate information database based on the character information in the certificate image; and determining the searched and matched certificate face in the certificate information as the certificate face of the person to be authenticated.
Illustratively, the step of determining whether a document in the document image is a genuine document by the computer program instructions executed by the processor to obtain the document authenticity determination result includes: extracting image characteristics of the certificate image; inputting the image characteristics into the trained certificate classifier to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Illustratively, the step of determining whether a document in the document image is a genuine document by the computer program instructions executed by the processor to obtain the document authenticity determination result includes: identifying an image block containing certificate identification information from a certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Illustratively, during execution of the step of determining whether the identity of the person to be authenticated is legitimate from the credential authentication result, the liveness detection result and the additional determination result, which is performed by the computer program instructions when executed by the processor, each of the credential authentication result, the liveness detection result and the additional determination result has a respective weight coefficient.
Illustratively, the computer program instructions, when executed by the processor, perform the step of obtaining an image of a document of a person to be authenticated comprising: acquiring a pre-shot image which is acquired in real time aiming at the certificate of the person to be authenticated under the current shooting condition; evaluating the image attribute of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold value, generating pre-shot prompt information according to the image attribute of the pre-shot image, wherein the pre-shot prompt information is used for prompting a person to be authenticated to adjust the shooting condition of the certificate of the person to be authenticated; when the evaluation value of the image attribute of the pre-shot image is equal to or larger than a preset evaluation value threshold value, saving the pre-shot image as a certificate image; the transmission device is also used for receiving the pre-shot image.
Illustratively, the step of determining whether the personal identification information is authenticated information to obtain the information authentication result, which is executed by the processor, comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching in the authenticated certificate information database based on the character information in the certificate image to obtain a certificate authentication result; and the certificate authentication result is the confidence level that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the step of text recognition of the certificate image to obtain text information in the certificate image performed by the processor when the computer program instructions are executed by the processor comprises: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
Illustratively, the computer program instructions when executed by the processor are further operable to perform the steps, prior to the step of identifying a word in an image block containing the word being performed by the processor: the image block containing the text is corrected to the horizontal state.
Illustratively, after the step of recognizing the characters in the image block containing the characters to obtain the character information in the certificate image, which is performed by the computer program instructions when executed by the processor, the step of recognizing the characters in the certificate image to obtain the character information in the certificate image, which is performed by the computer program instructions when executed by the processor, further comprises: outputting the text information in the certificate image for a user to check; receiving character correction information input by a user; comparing the characters to be corrected indicated by the character correction information with the corresponding characters in the character information in the certificate image; and if the difference between the characters to be corrected and the corresponding characters in the character information in the certificate image, which is indicated by the character correction information, is smaller than a preset difference threshold value, updating the character information in the certificate image by using the character correction information.
Illustratively, the step of performing a live detection of the person to be authenticated by the computer program instructions for execution when executed by the processor to obtain a live detection result comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction or not so as to obtain a living body detection result.
Illustratively, the step of performing a live body test on the person to be authenticated to obtain a live body test result, when the computer program instructions are executed by the processor, further comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; capturing a skin area image before a person to be authenticated executes a living body action and a skin area image after the person to be authenticated executes the living body action from the acquired face image; and inputting the skin area image before the living body action is executed by the person to be authenticated and the skin area image after the living body action is executed into the skin elasticity classifier to obtain a living body detection result.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: acquiring a face image before a real person performs a living body action and a face image after the real person performs the living body action, and acquiring a face image before a false person performs the living body action and a face image after the false person performs the living body action; extracting a face image of a real person before the real person performs the living body action and a skin area image of the real person after the real person performs the living body action from the face image of the real person before the real person performs the living body action and the face image of the real person after the real person performs the living body action as a positive sample image; extracting a face image before the live action of the false person and a skin area image after the live action of the false person from the face image before the live action of the false person and the face image after the live action of the false person as negative sample images; and training a classifier model by using the positive sample image and the negative sample image to obtain a skin elasticity classifier.
Illustratively, the step of capturing, from the collected face images, a skin area image of the person to be authenticated before the living body action is performed and a skin area image after the living body action is performed, which is executed by the processor, comprises: selecting, from the collected face images, a face image before the person to be authenticated executes the living body action and a face image after the person to be authenticated executes the living body action; positioning, by using a face detection model, the face in the face image before the living body action is executed and the face in the face image after the living body action is executed; positioning, by using a face key point positioning model, the key points of the face in the face image before the living body action is executed and of the face in the face image after the living body action is executed; and performing region division on the face in the face image before the living body action is executed and the face in the face image after the living body action is executed according to the face positions and the key point positions obtained through positioning, so as to obtain the skin region image before the person to be authenticated performs the living body action and the skin region image after the person to be authenticated performs the living body action.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: acquiring a sample face image, wherein the position of a face and the position of a key point of the face in the sample face image are marked; and carrying out neural network training by using the sample face image to obtain a face detection model and a face key point positioning model.
Illustratively, the step of performing a live detection of the person to be authenticated by the computer program instructions for execution when executed by the processor to obtain a live detection result comprises: acquiring a face image acquired by a binocular camera aiming at the face of a person to be authenticated under structured light irradiation; and determining whether the face of the person to be authenticated belongs to the living body according to the face image so as to obtain a living body detection result.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: step S110: acquiring one or more illumination images collected for the face of the person to be authenticated under the irradiation of the detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination living body verification result; and step S130: determining whether the person to be authenticated passes the living body verification based on at least the illumination living body verification result, so as to obtain the living body detection result.
Illustratively, the pattern of the detection light is changed at least once during the illumination of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments during the illumination of the face of the person to be authenticated.
For example, in the process of irradiating the face of the person to be authenticated, the pattern of the detection light is changed randomly or in a manner set in advance.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to execute a corresponding action; step S150: acquiring a plurality of action images collected for the face of the person to be authenticated; step S160: detecting the action performed by the person to be authenticated based on the plurality of action images; and step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action living body verification result; step S130, performed when the computer program instructions are executed by the processor, comprises: determining whether the person to be authenticated passes the living body verification based on the illumination living body verification result and the action living body verification result, so as to obtain the living body detection result; the transmission device is also used for sending the action instruction to the client and receiving the plurality of action images from the client.
Illustratively, the step S170 for execution by the processor when the computer program instructions are executed comprises: if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is detected based on a plurality of action images acquired within a period of time not longer than a first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is not detected based on a plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: counting once in the process of executing step S140 to step S170 each time to obtain the action verification times; after step S170, the computer program instructions when executed by the processor are further operable to perform the steps of: if the action living body verification result shows that the face of the person to be authenticated does not belong to the living body, outputting first error information, judging whether the action verification frequency reaches a first time threshold value, if the action verification frequency reaches the first time threshold value, turning to the step S130, and if the action verification frequency does not reach the first time threshold value, returning to the step S140 or returning to the step S110 under the condition that the step S110 is executed before the step S140, wherein the first error information is used for prompting that the living body verification aiming at the person to be authenticated fails; the transmission device is further configured to send the first error information to the client for output by the client.
Illustratively, before the step S110 for execution by the processor, the computer program instructions are further for performing the following steps when executed by the processor: step S108: judging whether the image acquisition condition of the person to be authenticated meets a preset requirement, if so, turning to the step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle between the person to be authenticated and the image acquisition device.
Illustratively, before or simultaneously with step S108 for execution when the computer program instructions are executed by the processor, the computer program instructions when executed by the processor are further for performing the following steps: step S106: outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device directly and to move close to the image acquisition device; the transmission device is also used for sending the first prompt information to the client to be output by the client.
Illustratively, the step S106 for which the computer program instructions are executed by the processor comprises: the first prompt information is output in one or more of a voice form, an image form and a text form.
Illustratively, the identity authentication device further comprises a display device, wherein step S108, for execution when the computer program instructions are executed by the processor, comprises: acquiring a real-time image acquired for the face of the person to be authenticated; outputting, in real time for display, a preset area used for calibrating the image acquisition condition and the face area detected in the real-time image; judging whether the image acquisition condition meets the preset requirement according to the face area detected in the real-time image: if the face area is located in the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the real-time image is not greater than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement; the transmission device is also used for receiving the real-time image from the client and sending the preset area and the face area to the client to be output by the client for display.
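A sketch of the image acquisition condition check described for step S108 might look as follows; the bounding-box representation and the 0.2 value for the first preset proportion are assumptions made only for illustration.

```python
def acquisition_condition_met(face_box, preset_box, image_size,
                              first_preset_ratio=0.2):
    """Check whether the face region lies inside the preset region and is
    large enough relative to the whole real-time image.

    Boxes are (x1, y1, x2, y2); first_preset_ratio is an illustrative value.
    """
    fx1, fy1, fx2, fy2 = face_box
    px1, py1, px2, py2 = preset_box
    inside = fx1 >= px1 and fy1 >= py1 and fx2 <= px2 and fy2 <= py2
    face_area = max(0, fx2 - fx1) * max(0, fy2 - fy1)
    image_area = image_size[0] * image_size[1]
    return inside and (face_area / image_area) > first_preset_ratio
```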
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: if the proportion of the face area in the real-time image is not more than a first preset proportion, outputting first acquisition prompt information in real time to prompt a person to be authenticated to approach the image acquisition device; the transmission device is also used for sending the first acquisition prompt message to the client side to be output by the client side.
Illustratively, the living body verification device further comprises a display device, wherein step S108, for execution when the computer program instructions are executed by the processor, comprises: acquiring a real-time image acquired for the face of the person to be authenticated; outputting, in real time for display, a preset area used for calibrating the image acquisition condition and the face area detected in the real-time image; judging whether the image acquisition condition meets the preset requirement according to the face area: if the face area is located in the preset area and the proportion of the face area within the preset area is greater than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area within the preset area is not greater than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement; the transmission device is also used for receiving the real-time image from the client and sending the preset area and the face area to the client to be output by the client for display.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: if the proportion of the face area in the preset area is not more than a second preset proportion, outputting second acquisition prompt information in real time to prompt a person to be authenticated to approach the image acquisition device; the transmission device is also used for sending the second acquisition prompt message to the client side to be output by the client side.
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: judging the relative position relation between the face area and a preset area in real time; outputting third acquisition prompt information in real time based on the relative position relationship between the face area and the preset area to prompt that the relative position relationship between the person to be authenticated and the image acquisition device is changed so that the face area is close to the preset area; the transmission device is also used for sending the third acquisition prompt message to the client side to be output by the client side.
Illustratively, the step S108 for execution by the computer program instructions when executed by the processor comprises: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state or not according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, and otherwise, determining that the image acquisition condition does not meet the preset requirement.
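The posture-based variant of step S108 could be sketched as below; the use of accelerometer readings, the axis convention (gravity along the y axis when the device is upright), and the tolerance angle are all assumptions, since the embodiment only speaks of posture information and a vertical placement state.

```python
import math


def device_is_vertical(accel_xyz, tolerance_deg=15.0):
    """Decide from accelerometer readings whether the capture device is held
    upright; axis convention and tolerance are illustrative assumptions."""
    x, y, z = accel_xyz
    g = math.sqrt(x * x + y * y + z * z) or 1.0
    tilt = math.degrees(math.acos(min(1.0, abs(y) / g)))
    return tilt <= tolerance_deg
```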
Illustratively, the computer program instructions when executed by the processor are further for performing the steps of: incrementing a count each time steps S110 to S120 are executed so as to obtain an illumination verification count; after step S120 for execution when the computer program instructions are executed by the processor, the computer program instructions are further for performing the steps of: if the illumination living body verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the illumination verification count reaches a second count threshold; if so, turning to step S130, and if not, returning to step S108 or returning to step S110, wherein the second error information is used for prompting that the living body verification for the person to be authenticated has failed; the transmission device is further configured to send the second error information to the client for output by the client.
Illustratively, before step S110 for execution when the computer program instructions are executed by the processor, or during steps S110 and S120 for execution when the computer program instructions are executed by the processor, the computer program instructions are further used, when executed by the processor, to perform the steps of: outputting second prompt information, wherein the second prompt information is used for prompting the person to be authenticated to keep still within a second preset time; the transmission device is also used for sending the second prompt information to the client to be output by the client.
Illustratively, the second prompt information is a countdown message corresponding to the second preset time.
The detection light is obtained, for example, by dynamically changing the color and/or position of the light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of light emitted by the display screen is dynamically changed by changing the content displayed on the display screen to emit detection light to the face of the person to be authenticated.
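One possible way to realize such display-screen detection light is to show a sequence of full-screen colors that changes from moment to moment, as in the sketch below; the palette, the number of steps, and the use of random selection are illustrative assumptions.

```python
import random


def generate_light_pattern(num_steps=6):
    """Produce a random sequence of screen colors used as detection light.

    Each entry is an RGB triple to be shown full-screen by the client for a
    short interval; the pattern differs between every two consecutive moments.
    """
    palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
    pattern = []
    previous = None
    for _ in range(num_steps):
        color = random.choice([c for c in palette if c != previous])
        pattern.append(color)
        previous = color
    return pattern
```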
According to the identity authentication method and device and the storage medium of the embodiments of the present invention, whether the identity of the person to be authenticated is legal is determined by combining authenticated-information judgment with living body detection. Compared with a conventional approach of performing identity authentication based only on a password or a certificate, the identity authentication method provided by the embodiments of the invention therefore yields a more accurate authentication result, can improve the security of user authentication, and can effectively safeguard the rights and interests of the user.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 illustrates a schematic block diagram of an example electronic device for implementing the identity authentication method and apparatus in accordance with embodiments of the present invention;
FIG. 2 shows a schematic flow diagram of a method of identity authentication according to one embodiment of the present invention;
FIG. 3 shows a schematic flow diagram of a method of identity authentication according to another embodiment of the present invention;
FIG. 4 shows a schematic flow diagram of a method of identity authentication according to another embodiment of the present invention;
FIG. 5 shows a schematic flow diagram of the training steps of a skin elasticity classifier according to one embodiment of the present invention;
FIG. 6 illustrates a schematic block diagram of an example electronic device for implementing an identity authentication method and apparatus in accordance with another embodiment of the present invention;
FIG. 7 shows a schematic flow diagram of a liveness detection step according to one embodiment of the invention;
FIG. 8 shows a schematic flow chart of a live body detection step according to another embodiment of the present invention;
FIG. 9 shows a flow of implementing the living body detecting step according to one embodiment of the present invention;
FIG. 10 shows a schematic block diagram of an identity authentication device in accordance with one embodiment of the present invention; and
FIG. 11 shows a schematic block diagram of an identity authentication system in accordance with one embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
In order to solve the above-mentioned problems, embodiments of the present invention provide an identity authentication method and apparatus, and a storage medium. The identity authentication method and apparatus and the storage medium combine personal identification information verification with face recognition to perform identity authentication, so as to determine whether the identity of the person to be authenticated is legal, that is, whether the person to be authenticated has the authority to carry out subsequent operations such as payment for consumption. The identity authentication method and apparatus can identify the identity of the person to be authenticated conveniently and safely, constitute a secure interactive authentication mode, and can be well applied to technical fields such as electronic commerce, mobile payment and bank account opening.
First, an example electronic device 100 for implementing an identity authentication method and apparatus and a storage medium according to an embodiment of the present invention is described with reference to fig. 1.
As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and may be executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by those applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images and/or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, etc.
The image capture device 110 can capture credential images (including video frames) and/or facial images (including video frames) and store the captured images in the storage device 104 for use by other components. The image capture device 110 may be a camera. It should be understood that the image capture device 110 is merely an example, and the electronic device 100 may not include the image capture device 110. In this case, other image capturing devices may be used to capture the document image and/or the face image and transmit the captured image to the electronic apparatus 100.
Exemplary electronic devices for implementing the identity authentication method and apparatus according to embodiments of the present invention may be implemented on devices such as personal computers or remote servers, for example.
The embodiment of the invention provides an identity authentication method, which comprises the following steps: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on the person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Hereinafter, an identity authentication method according to an embodiment of the present invention will be described with reference to fig. 2. Fig. 2 shows a schematic flow diagram of an identity authentication method 200 according to one embodiment of the present invention. As shown in fig. 2, the identity authentication method 200 includes the following steps.
In step S210, personal identification information of a person to be authenticated is acquired.
The identity authentication method described herein is mainly described below by taking the personal identification information of the person to be authenticated as the certificate in the certificate image as an example, and in a practical use scenario, the personal identification information is not limited thereto, and may be, for example, one or more of a certificate number, a name, a certificate face (i.e., a face image detected from the certificate image) and a field captured face (i.e., a face image captured by an image capture device for the face of the person to be authenticated), and/or one or more of a transformed value of the certificate number (e.g., an output value of a certain hash algorithm), a transformed value of the name (e.g., an output value of a certain hash algorithm), a transformed value of the certificate face (e.g., an output value of a certain hash algorithm), and a transformed value of the field captured face (e.g., an output value of a certain hash algorithm). The type of the personal identification information can be selected according to a specific use scene, and the type of the personal identification information is not particularly limited by the invention.
Step S210 may include: and acquiring a certificate image of the person to be authenticated. The credential information in the credential image can be considered personally identifying information.
Documents described herein may include, but are not limited to, identification cards, driver licenses, passports, social security cards, and the like.
The document image may be an image captured of a document for a person to be authenticated. The certificate image may be an original image captured by an image capturing device such as a camera, or may be an image obtained by preprocessing the original image.
The credential image may be sent by a client device (e.g., a mobile terminal including a camera, a remote Video Teller Machine (VTM), etc.) to electronic device 100 for processing by processor 102 of electronic device 100, or may be captured by an image capture device 110 (e.g., a camera) included in electronic device 100 and transmitted to processor 102 for processing.
According to the embodiment of the invention, acquiring the certificate image of the person to be authenticated can comprise: acquiring a pre-shot image acquired in real time aiming at a certificate of a person to be authenticated under a current shooting condition; evaluating the image attribute of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold value, generating pre-shot prompt information according to the image attribute of the pre-shot image, wherein the pre-shot prompt information is used for prompting a person to be authenticated to adjust the shooting condition of the certificate of the person to be authenticated; and saving the pre-shot image as the certificate image when the evaluation value of the image attribute of the pre-shot image is equal to or greater than a preset evaluation value threshold value.
The pre-shooting refers to a process of starting a shooting mode of an image acquisition device (for example, a camera of a mobile terminal such as a mobile phone and a tablet computer) and placing a certificate to be shot in a shooting range of the image acquisition device for shooting and framing (photo shooting is not actually completed yet).
Alternatively, the quality evaluation may be performed on a pre-shot image obtained by shooting. For example, in the pre-photographing process, the image attributes of the pre-photographed image obtained by photographing under the current photographing condition may be calculated in real time. Illustratively, the photographing condition may include, but is not limited to, one or more of the following: the placement position of the certificate, the placement angle of the certificate, the shooting position of the image acquisition device, the shooting angle of the image acquisition device and the like. Illustratively, the image attributes may include, but are not limited to, one or more of the following: the method comprises the following steps of document fuzzy degree, document outline, document key parts, document shielding condition, document size, document character definition and the like. And when the evaluation value of the image attribute is smaller than a preset evaluation value threshold value, the shot pre-shot image is considered to be unqualified. At this time, corresponding pre-shot prompt information can be generated according to the image attribute and output to prompt the user to adjust the angle, position and the like of the certificate or the image acquisition device until a qualified pre-shot image is shot. The qualified pre-shot image is the pre-shot image of which the evaluation value of the image attribute is equal to or greater than the preset evaluation value threshold. When a qualified pre-captured image is captured in the pre-capture mode, the pre-captured image may be saved as the certificate image acquired in step S210 for subsequent steps such as authenticated certificate determination. Optionally, when some image attributes of the pre-shot image are not qualified, the pre-shot image may be adjusted to be qualified, so as to obtain the required certificate image. For example, when the certificate size in the pre-shot image is not qualified, operations such as cropping and scaling can be performed on the pre-shot image to make the certificate size in the pre-shot image qualified.
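The pre-shooting loop described above can be summarized in a short sketch; the quality-evaluation and prompting functions are placeholders for client-side functionality, and the 0.8 evaluation threshold is an assumed value.

```python
def acquire_certificate_image(capture_preview, evaluate_quality, prompt_user,
                              threshold=0.8):
    """Pre-shooting loop: keep evaluating preview frames until one passes.

    capture_preview(), evaluate_quality(image) -> (score, issues) and
    prompt_user(text) are placeholders for client-side functionality; the
    threshold stands for the preset evaluation value threshold.
    """
    while True:
        frame = capture_preview()
        score, issues = evaluate_quality(frame)   # blur, occlusion, size, ...
        if score >= threshold:
            return frame                           # saved as the certificate image
        # otherwise generate pre-shot prompt information from the weak attributes
        prompt_user("Please adjust: " + ", ".join(issues))
```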
In step S220, it is determined whether the individual identification information is authenticated information to obtain an information authentication result.
In the case where the personal identification information is certificate information in a certificate image, the authenticated information may be authenticated certificate information, and the information authentication result may be a certificate authentication result. That is, in the case where the personally identifying information is certificate information in the certificate image, step S220 may include: and judging whether the certificate information in the certificate image is authenticated certificate information or not to obtain a certificate authentication result.
For example, some certificate information related to the certificate of the person to be authenticated, such as the identification number, name, etc., on the identification card, may be recognized from the certificate image, and then, it may be determined whether the certificate information is authenticated certificate information, that is, whether the certificate of the person to be authenticated is authenticated.
Authenticated certificate information regarding authenticated certificates may be stored in a database, referred to herein as an authenticated certificate information database. A search may be performed in the authenticated certificate information database based on the certificate information identified from the certificate image, i.e., the certificate information identified from the certificate image is compared to the authenticated certificate information in the authenticated certificate information database to determine whether the certificate in the certificate image is an authenticated certificate.
In one example, the authenticated certificate information database may be stored locally, such as in a storage device (e.g., storage device 104 shown in FIG. 1) of a device, such as a server or client, used to implement the identity authentication method and apparatus.
In another example, the authenticated certificate information database may be stored in a server of some public service system (e.g., a public security system). A device such as a server or a client used for implementing the identity authentication method and apparatus can communicate with the server of the public service system over a network connection and search for the certificate information on the server of the public service system. For example, the public security network usually stores the certificate information (record information) of authenticated, legitimate persons, and certificate information matching the certificate information identified from the certificate image can be found by searching the public security network based on the certificate image acquired in step S210.
In another example, the personal identification information is an identity card number. The person to be authenticated can input the identity card number into the identity authentication device. The identity authentication device may then retrieve, based on the received number, from an authenticated personnel information database, which may store the identity card numbers of a large number of authenticated persons. If a matching identity card number exists, the person to be authenticated is an authenticated person and the personal identification information of the person to be authenticated is authenticated information. Similarly, the above-described authenticated personnel information database may optionally be stored locally or in a server of a public service system.
Those skilled in the art can understand that, in the embodiment where the personal identification information is a name, a certificate face, a field-collected face, a transformation value of a certificate number, a transformation value of a name, a transformation value of a certificate face, or a transformation value of a field-collected face, the implementation manner of the identity authentication method is similar to that of the embodiment where the personal identification information is a personal identification number, and is not described again.
In step S230, the person to be authenticated is subjected to the living body detection to obtain a living body detection result.
Step S230 may include: acquiring a face image of a person to be authenticated; and performing living body detection by using the face image to obtain a living body detection result.
The face image may be an image acquired for the face of the person to be authenticated. The face image may be an original image acquired by an image acquisition device such as a camera, or may be an image obtained by preprocessing the original image.
The face image may be sent to the electronic device 100 by a client device (e.g., a mobile terminal including a camera) to be processed by the processor 102 of the electronic device 100, or may be collected by an image collecting device 110 (e.g., a camera) included in the electronic device 100 and transmitted to the processor 102 for processing. Step S230 may be implemented by any existing or future implemented in vivo detection method, which is not limited by the present invention. Illustratively, when the face in the face image is a real face, the person to be authenticated is considered to be a living body, and the living body detection result may be 1, and when the face in the face image is a false face, the person to be authenticated is not considered to be a living body, and the living body detection result may be 0.
In step S240, it is determined whether the identity of the person to be authenticated is legitimate at least according to the information authentication result and the living body detection result.
In one example, the information authentication result may be one of 1 and 0, where 1 indicates that the personal identification information of the person to be authenticated is authenticated information, and 0 indicates that the personal identification information of the person to be authenticated is not authenticated information. Similarly, the living body detection result may be one of 1 and 0, where 1 indicates that the person to be authenticated is a living body (i.e., passes living body verification), and 0 indicates that the person to be authenticated is not a living body (i.e., fails living body verification). Illustratively, if any one of the information authentication result and the living body detection result is 0, the identity of the person to be authenticated may be considered to be illegal, i.e. the authentication of the person to be authenticated fails, in which case, it may be prohibited to perform subsequent business operations, e.g. to perform online transactions or bank account opening.
In another example, the information authentication result may be any numerical value in the range of [0,1], indicating the confidence that the personal identification information of the person to be authenticated is the authenticated information. In this case, an operation such as a weighted average may be performed on the information authentication result and the living body detection result, and whether or not the person to be authenticated is a legitimate person may be measured according to the operation result. Similar embodiments will be described in detail below, which are not repeated here.
It should be understood that the execution order of the steps of the identity authentication method 200 shown in fig. 2 is merely an example and not a limitation, for example, step S230 may be executed before step S210, between step S210 and step S220, or simultaneously with step S210 or S220.
According to the identity authentication method provided by the embodiment of the invention, the identity of the person to be authenticated is determined to be legal or not by combining authenticated information judgment and living body detection, so that compared with the conventional identity authentication method based on a password or a certificate, the identity authentication method provided by the embodiment of the invention has the advantages that the authentication result is more accurate, the security of user authentication can be improved, and the rights and interests of the user can be effectively guaranteed. The method can be well applied to various fields related to identity authentication, such as fields of electronic commerce, mobile payment or banking business and the like.
Illustratively, the identity authentication method according to embodiments of the present invention may be implemented in a device, apparatus or system having a memory and a processor.
The identity authentication method according to the embodiment of the invention may be deployed at an image acquisition end, for example, at an image acquisition end of a financial system such as a bank management system or at a mobile terminal such as a smart phone or a tablet computer. Alternatively, the identity authentication method according to the embodiment of the present invention may also be distributively deployed at the server side (or cloud side) and the client side. For example, the personal identification information (for example, a certificate image is collected or a certificate number, a name, and the like input by a person to be authenticated are received) and/or a face image is collected at the client, and the client transmits the collected personal identification information and/or the collected face image to the server (or the cloud) for identity authentication at the server (or the cloud).
According to the embodiment of the present invention, before step S210, the identity authentication method 200 may further include: outputting indication information for indicating a person to be authenticated to provide predetermined types of person information; the personal identification information is personnel information provided by a person to be authenticated or obtained based on the personnel information.
Since the type of authenticated information stored in a database for storing authenticated information (such as the authenticated certificate information database or the authenticated person information database described above) is determined, in order to smoothly perform identity authentication, it is necessary for a user (i.e., a person to be authenticated) to provide personal identification information of a type that matches the type of authenticated information stored in the database. For this, indication information may be output to indicate that the person to be authenticated inputs a predetermined type of person information. Illustratively, the indication information may be output in the form of text, image, or voice.
For example, "name: and identification number: "such indication information, instruct the person to be authenticated to input the corresponding person information at the blank behind the information type (i.e. name and identification number). The personal information input by the person to be authenticated can be directly used as the personal identification information, and the personal identification information can also be obtained by converting the personal information.
For another example, a voice prompt of "please show the identity card" may be sent out through a speaker, and after the person to be authenticated provides the identity card, image acquisition may be performed on the identity card to obtain an identity card image. In this example, the personally identifying information is an identification card image.
The output of the indication information for indicating the person to be authenticated to provide the person information of the predetermined type is beneficial to obtaining valuable personal identification information which can be compared with the authenticated information, thereby being beneficial to smoothly implementing identity authentication. In addition, the output of the indication information can improve the interaction experience between the user and the identity authentication device.
As described above, the individual identification information may be original information such as a certificate number, a name, a certificate face, a field-collected face, and the like, or may be a transformed value obtained by transforming the original information. According to an embodiment of the present invention, in the case that the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a field-collected face, step S210 may include: acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a field-collected face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
When storing the person information of the authenticated person in the database, the person information of the authenticated person may be transformed using a predetermined algorithm (e.g., some kind of hash algorithm) to obtain the authenticated information of the authenticated person. The conversion method is arbitrary, and it can be set as needed. The transformation process may be understood as an encoding process.
The initial information of the person to be authenticated, which is acquired by the identity authentication device, is usually information without transformation, such as a certificate number, a name, a certificate face, a field-collected face, and the like. To facilitate subsequent comparison with the authenticated information, the initial information may be transformed using an algorithm consistent with the algorithm used to generate the authenticated information to obtain the personal identification information.
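A minimal sketch of such a transformation is given below; SHA-256 is only one example of "a predetermined algorithm" (the embodiment mentions hash algorithms generally), and the same algorithm would have to be used when building the authenticated-information database.

```python
import hashlib


def to_personal_identification_info(initial_info: str) -> str:
    """Transform raw person information (e.g. a certificate number or name)
    into its stored form by hashing.

    SHA-256 is an illustrative choice for the predetermined algorithm.
    """
    return hashlib.sha256(initial_info.encode("utf-8")).hexdigest()
```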
According to the embodiment of the present invention, in the case that the personal identification information is the document information in the document image, before step S240, the identity authentication method 200 may further include: performing additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; step S240 may include: and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
The additional determination operations may include one or more operations for judging the authenticity of the document in the document image and/or the consistency of the face in the face image. For example, the additional determination operation may include a certificate authenticity determination operation and/or a face consistency determination operation. Adding, beyond the authenticated certificate judgment operation and the living body detection operation, further operations for judging the authenticity of the certificate in the certificate image and/or the consistency of the human face in the human face image helps to further improve the reliability of the identity authentication result, thereby improving the safety of applications that rely on identity authentication.
Embodiments of the additional determination operation are described below by way of example.
According to one embodiment, the additional determination operation may include a certificate authenticity determination operation. The certificate authenticity judging operation may include: and judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result. According to another embodiment, the additional determination operation may include a face consistency determination operation. The face consistency judgment operation may include: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judgment result. The two embodiments described above are described below with reference to fig. 3 and 4, respectively.
Fig. 3 shows a schematic flow diagram of an identity authentication method 300 according to another embodiment of the present invention. FIG. 3 illustrates an embodiment in which the personally identifying information of the person to be authenticated is credential information in a credential image. Steps S310, S330 to S350 of the identity authentication method 300 shown in fig. 3 have been introduced in the above description about steps S210 to S230 of the identity authentication method 200 shown in fig. 2, and are not described again here. According to this embodiment, before step S360, the identity authentication method 300 may further include step S320. In step S320, the certificate authenticity determination operation is performed. In step S360, it is determined whether the identity of the person to be authenticated is legal according to the certificate authentication result (i.e., the information authentication result), the biometric detection result, and the certificate authenticity determination result.
After the certificate authenticity judging operation is performed, a certificate authenticity judgment result can be obtained. Illustratively, the certificate authenticity judgment result may be one of 1 and 0, where 1 denotes that the certificate in the certificate image is a real certificate and 0 denotes that the certificate in the certificate image is a false certificate. Of course, the certificate authenticity judgment result can also be any value in the range of [0,1], representing the confidence that the certificate in the certificate image is a real certificate. The false certificate may be, for example, a certificate recaptured from the screen of a device such as a mobile phone or a computer, or a certificate forged using computer graphics techniques.
Illustratively, if any one of the certificate authentication result, the living body detection result, and the certificate authenticity judgment result is 0, it may be determined that the identity of the person to be authenticated is not legitimate, otherwise it may be determined that the identity of the person to be authenticated is legitimate.
Two exemplary embodiments of step S320 are described below.
In one example, step S320 may include: extracting image characteristics of the certificate image; inputting the image characteristics into the trained certificate classifier to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
Due to interference between the photosensitive element of the image capture device and the display, distinct periodic color fringes, called moiré patterns, appear in an image obtained by photographing a document picture shown on a computer or cell phone screen. Moiré patterns are therefore an important cue for distinguishing authentic documents from recaptured ones. Since a moiré pattern is periodic, its characteristics are particularly apparent in the frequency domain. In addition, the color of the moiré pattern also differs from the color of the authentic document. Thus, whether a document in the document image is a recaptured document can be identified based on moiré.
Illustratively, the image features may include, but are not limited to, at least one of spectral features, texture features, and color features.
The certificate classifier involved in the certificate authenticity judging operation can be trained in advance using a large number of sample certificate images. Illustratively, a "classifier" as described herein may be any machine-learning-based classifier, existing or to be developed, such as a support vector machine (SVM).
Taking an identity card as an example, the training process of the document classifier may include: acquiring and labeling an identity card image containing a real identity card and an identity card image containing a copied identity card; respectively calculating frequency spectrum information of an identity card image containing a real identity card and an identity card image containing a copied identity card to be used as respective image characteristics; and taking the image characteristics of the identity card image containing the real identity card as a positive sample, and taking the image characteristics of the identity card image containing the copied identity card as a negative sample to train a classifier model so as to obtain the identity card classifier. Subsequently, in the actual identity authentication process, for the acquired identity card image, the frequency spectrum information of the acquired identity card image can be calculated to serve as the image characteristic of the identity card image, and then the extracted image characteristic is input into a trained identity card classifier to judge whether the identity card in the acquired identity card image is a copied identity card.
The certificate authenticity judgment result output by the certificate classifier can be the confidence level that the certificate in the certificate image is a real certificate. The confidence level may be any value in the range of [0, 1].
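A sketch of a frequency-domain feature plus SVM pipeline of the kind described above is given below; the radially binned spectrum feature, the scikit-learn SVC classifier, and the bin count are assumptions chosen for illustration, not the specific feature or classifier of the embodiment.

```python
import numpy as np
from sklearn.svm import SVC


def spectrum_feature(gray_image: np.ndarray, bins: int = 64) -> np.ndarray:
    """Radially binned magnitude spectrum; moiré from screen recapture tends
    to add periodic peaks that show up in these bins (an assumption of this
    sketch)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2).astype(int)
    radius = np.minimum(radius, bins - 1)
    feat = np.bincount(radius.ravel(), weights=spectrum.ravel(), minlength=bins)
    return np.log1p(feat / feat.sum())


# Training on labelled samples (1 = real certificate, 0 = recaptured):
# clf = SVC(probability=True).fit([spectrum_feature(img) for img in images], labels)
# Authenticity confidence for a new certificate image:
# confidence = clf.predict_proba([spectrum_feature(new_image)])[0][1]
```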
In another example, step S320 may include: identifying an image block containing certificate identification information from a certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
The credential identification information can be any information that can identify an authentic credential. For example, the document identification information may include a national emblem pattern on an identification card or social security card, some special anti-counterfeiting mark, and the like. For example, authentic documents often have a relatively covert security marking, and the authenticity of the document can be determined by identifying the security marking.
The above two examples of performing certificate authenticity judgment through the image features and performing certificate authenticity judgment through the certificate identification information can be simultaneously realized, that is, authenticity of a certificate can be judged based on the image features of a certificate image and the certificate identification information thereof, and a person skilled in the art can understand an implementation manner of the judgment manner by reading the above description, and details are not repeated here.
It should be understood that the execution sequence of the steps of the identity authentication method 300 shown in fig. 3 is merely an example and not a limitation, for example, step S320 may be executed at any time between step S310 and step S360.
Fig. 4 shows a schematic flow diagram of an identity authentication method 400 according to another embodiment of the present invention. FIG. 4 illustrates an embodiment in which the personally identifying information of the person to be authenticated is credential information in a credential image. Steps S410 to S440 of the identity authentication method 400 shown in fig. 4 have been introduced in the above description of steps S210 to S240 of the identity authentication method 200 shown in fig. 2, and are not described again here. According to this embodiment, before step S470, the identity authentication method 400 may further include steps S450 and S460. In steps S450 and S460, the above-described face consistency determination operation is performed. In step S470, it is determined whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result, and the face consistency determination result.
In step S450, the face of the certificate of the person to be authenticated is acquired according to the certificate image.
In one example, step S450 may include: and detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
A face (typically a photograph of the person's face) is usually included on the certificate; it is referred to herein as the "certificate face" to distinguish it from the face in the face image. In the case where the certificate includes a face, the face can be detected from the certificate image. Therefore, the face detected from the certificate image can be used directly as the certificate face of the person to be authenticated and compared with the face in the face image. As described above, the face detected from the certificate image can also serve as the personal identification information.
In another example, step S450 may include: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from the authenticated certificate information database based on the character information in the certificate image; and determining the searched and matched certificate face in the certificate information as the certificate face of the person to be authenticated.
In some certificates, the face may not be included, in which case the face of the certificate of the person to be authenticated can be found by using the certificate information in the authenticated certificate information database. Of course, in the case of a certificate comprising a face, the face of the certificate of the person to be authenticated can also be found in this way.
For example, the document image is an identification card image of the person X, and text recognition may be performed from the identification card image to recognize text information such as an identification card number, and then matching identification card information may be searched from an identification card database of the public security system based on the text information such as the identification card number. If the identity card information of the person X is already recorded in the identity card database, the matched identity card information can be searched. The identity card information may typically include basic information such as the identity card number, name, sex, face photograph of the person X. The face photo is the required certificate face. The document face can then be compared with the face in the previously acquired face image.
In step S460, the certificate face of the person to be authenticated is compared with the face in the face image to obtain a face consistency determination result.
The face (namely the face in the face image) collected by the image collecting device is compared with the certificate face, if the similarity between the face and the certificate face is greater than a preset similarity threshold value, the collected face and the certificate face can be considered to belong to the same person, otherwise, the collected face and the certificate face can be considered to not belong to the same person. Therefore, a face consistency judgment result can be obtained. Illustratively, the consistency judgment result may be the similarity between the certificate face and the face in the face image, that is, the consistency result may be any value in the range of [0,1 ]. Illustratively, the consistency judgment result may also be one of 1 and 0, where 1 indicates that the certificate face and the face in the face image belong to the same person, and 0 indicates that the certificate face and the face in the face image do not belong to the same person.
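A sketch of the comparison in step S460 is shown below, assuming that both faces have already been encoded as face-recognition embeddings; the cosine-similarity measure and the 0.7 threshold are illustrative assumptions.

```python
import numpy as np


def face_consistency(certificate_embedding: np.ndarray,
                     captured_embedding: np.ndarray,
                     similarity_threshold: float = 0.7):
    """Compare the certificate face with the captured face via cosine
    similarity of face-recognition embeddings.

    How the embeddings are produced (and the threshold value) is not
    specified by the embodiment; both are assumptions of this sketch.
    """
    a = certificate_embedding / np.linalg.norm(certificate_embedding)
    b = captured_embedding / np.linalg.norm(captured_embedding)
    similarity = float(np.dot(a, b))
    return similarity, similarity > similarity_threshold
```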
Illustratively, if any one of the certificate authentication result, the living body detection result, and the face consistency determination result is 0, it may be determined that the identity of the person to be authenticated is not legitimate, otherwise it may be determined that the identity of the person to be authenticated is legitimate.
It should be understood that, similar to fig. 3, the execution sequence of the steps of the identity authentication method 400 shown in fig. 4 is only an example and not a limitation, for example, step S450 may be executed before step S420, or after step S420 and before step S430, or after step S430 and before step S440, or simultaneously with step S420 or S430, while step S460 may be executed before, after, or simultaneously with step S440. Of course, step S450 may be performed simultaneously with step S440.
While embodiments of the certificate authenticity judging operation and the face consistency judging operation have been described above in connection with fig. 3 and 4, respectively, it will be appreciated that the additional judging operation may include both the certificate authenticity judging operation and the face consistency judging operation. That is to say, in the identity authentication process, the authenticity of the certificate and the consistency of the face can be judged at the same time, and whether the identity of the person to be authenticated is legal or not is determined according to the certificate authentication result, the living body detection result, the certificate authenticity judgment result and the face consistency judgment result.
According to the embodiment of the invention, in the process of determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result, each result of the certificate authentication result, the living body detection result and the additional judgment result has a respective weight coefficient.
A weight coefficient may be assigned in advance to each of the certificate authentication result, the living body detection result, and the additional determination result (including, for example, the above-described certificate authenticity determination result and/or face consistency determination result). The weight coefficient of each result can be determined according to the requirement, and the invention is not limited to this.
For example, the certificate authentication result, the biometric detection result, and the additional determination result may be weighted-averaged based on the weight coefficient to obtain an averaged result. Then, the averaged result may be compared with a preset threshold, and if the averaged result is greater than the threshold, the identity of the person to be authenticated may be considered to be legitimate, otherwise, the identity of the person to be authenticated may be considered to be illegitimate.
Under the condition that the personal identification information of the person to be authenticated is the certificate information in the certificate image, the certificate authentication result, the certificate authenticity judgment result and the face consistency judgment result can be any value within the range of [0,1 ]. Of course, the values of the three may be only one of 0 and 1. The value of the in vivo detection result is one of 0 and 1. Therefore, these results may be weighted or arithmetically averaged to obtain an averaged result. The averaged results are then compared to a threshold.
Of course, various detection or judgment results participating in identity authentication can also be directly and simply summed to obtain a total result, and the total result is compared with a threshold value to judge whether the identity of the person to be authenticated is legal or not.
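A minimal sketch of the weighted combination described above follows; the result names, the weight values, and the 0.75 decision threshold are assumptions for illustration only.

```python
def authenticate(results, weights, threshold=0.75):
    """Weighted combination of the individual results.

    results and weights are dicts keyed by check name, e.g.
    {"certificate_auth": 1.0, "liveness": 1.0, "authenticity": 0.9,
     "face_consistency": 0.85}; keys, weights and threshold are illustrative.
    """
    total_weight = sum(weights.values())
    score = sum(results[k] * weights[k] for k in results) / total_weight
    return score > threshold   # True: identity judged legitimate
```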
Each result has a respective weight coefficient, so that the identity authentication system can conveniently adjust the participation degree of the identity authentication system in the identity authentication process according to the importance of each judgment or detection operation involved in the identity authentication, and further the accuracy of the identity authentication can be improved.
According to an embodiment of the present invention, the step S220(S330, S420) may include: performing character recognition on the certificate image to obtain character information in the certificate image; searching in the authenticated certificate information database based on the character information in the certificate image to obtain a certificate authentication result; and the certificate authentication result is the confidence level that the certificate information in the certificate image is authenticated certificate information.
As described above, the text information in the certificate image may be recognized from the certificate image, and then the matched certificate information may be searched from the authenticated certificate information database based on the recognized text information. The search result is typically the probability (which may also be referred to as confidence) that there is matching credential information in the database of authenticated credential information, in which case the credential authentication result may be the search result.
According to the embodiment of the invention, the character recognition of the certificate image to obtain the character information in the certificate image may include: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
The step of text recognition of the document image may be implemented in any suitable text recognition manner. One embodiment of the text recognition step is described below.
Firstly, the characters in the certificate image can be positioned, and the positions of the characters are determined. An image block containing text can then be extracted from the document image. Illustratively, the credential image can be input into a trained neural network to locate text in the credential image.
For example, a large number of sample document images can be collected, manually or by machine labeling, to indicate where text is located on the sample document images. And training a neural network for positioning the position of the characters through a machine learning algorithm based on the marked large amount of sample certificate images. The certificate image acquired in the actual identity authentication process is input into the trained neural network, and the neural network can output the position of the characters in the certificate image, for example, the vertex coordinates of the area where the characters are located.
Then, the characters in the image block (character area) containing the characters can be identified, and a character identification result is obtained. Recognizing the characters in an image block containing characters refers to the process of converting the image content of that image block into a character string. Illustratively, the recognition may be performed by a conventional optical character recognition (OCR) method: first, each character is segmented by a binarization operation, and then all characters are identified by template matching or pattern classification.
Optionally, a sliding window (sliding window) recognition method can be adopted to locate and recognize characters from the certificate image without depending on the result of the binary segmentation.
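As a rough sketch of the conventional OCR route described above (the patent gives no implementation, so the details below are assumptions), the image block can be binarized, split into per-character regions via connected components, and each region passed to a character classifier; `classify_char` is a hypothetical stand-in for the template-matching or pattern-classification step.

```python
# Sketch of binarization-based character segmentation with OpenCV.
# classify_char is a hypothetical per-character classifier, not a real API.
import cv2

def segment_characters(text_block_gray):
    _, binary = cv2.threshold(text_block_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    num, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = [tuple(stats[i, :4]) for i in range(1, num)]   # skip the background label
    boxes.sort(key=lambda b: b[0])                          # order characters left to right
    return [binary[y:y + h, x:x + w] for x, y, w, h in boxes]

def recognize_text_block(text_block_gray, classify_char):
    return "".join(classify_char(patch) for patch in segment_characters(text_block_gray))
```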
According to an embodiment of the present invention, before recognizing the characters in the image block containing the characters, the identity authentication method 200(300, 400) may further include: the image block containing the text is corrected to the horizontal state.
A step of adjusting (correcting) the position of the characters may also be included between the step of locating the characters in the certificate image and the step of identifying the characters in the image block containing the characters. In practical applications, the certificate in a certificate image may have a certain inclination angle, that is, the image block containing characters may be tilted. Therefore, before the characters in the certificate image are recognized, the image block containing the characters (namely the area where the characters are located) can be corrected into a horizontal state. For example, in the step of locating the characters in the certificate image, the coordinates of the four vertices of the area where the characters are located have already been obtained, and the image area enclosed by these four vertices is exactly the image block containing the characters, so the image block only needs to be rotated to a horizontal state according to these coordinates.
Correcting the image block containing the characters to be in a horizontal state can facilitate the subsequent identification of the characters in the image block containing the characters.
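A sketch of this correction step, assuming the four vertex coordinates from the localization step are available, might warp the text region to an axis-aligned horizontal block with OpenCV; the vertex ordering used below is an assumption.

```python
# Sketch: warp a tilted text region to a horizontal image block.
# Assumes vertices are ordered top-left, top-right, bottom-right, bottom-left.
import cv2
import numpy as np

def rectify_text_region(image, vertices):
    src = np.asarray(vertices, dtype=np.float32)
    width = int(max(np.linalg.norm(src[0] - src[1]), np.linalg.norm(src[3] - src[2])))
    height = int(max(np.linalg.norm(src[0] - src[3]), np.linalg.norm(src[1] - src[2])))
    dst = np.float32([[0, 0], [width - 1, 0], [width - 1, height - 1], [0, height - 1]])
    transform = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, transform, (width, height))
```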
According to the embodiment of the present invention, after recognizing the characters in the image block including the characters to obtain the character information in the certificate image, performing character recognition on the certificate image to obtain the character information in the certificate image further includes: outputting the text information in the certificate image for a user to check; receiving character correction information input by a user; comparing the characters to be corrected indicated by the character correction information with the corresponding characters in the character information in the certificate image; and if the difference between the characters to be corrected and the corresponding characters in the character information in the certificate image, which is indicated by the character correction information, is smaller than a preset difference threshold value, updating the character information in the certificate image by using the character correction information.
In the process of character recognition, a character correction step can be added. When an OCR method is used to recognize characters, recognition errors or recognition failures may occur for rarely used characters, visually similar characters, and characters with many strokes. Allowing the user to modify the characters therefore solves such recognition problems flexibly and conveniently, and improves the precision of character recognition. The user may be the person to be authenticated, or may be a person other than the person to be authenticated, such as an administrator of the identity authentication system.
For example, the text information recognized from the certificate image can be output in the form of a text display or voice playback for the user to view. When the user finds an error in the character recognition result, the user can input character correction information. After the character correction information input by the user is received, the characters to be corrected indicated by the character correction information may be compared with the corresponding recognized characters. If the difference between the two is smaller than a preset difference threshold, the character information in the certificate image can be updated with the character correction information; otherwise, the character information is not updated. For example, if a character on the certificate is recognized as one character and the character correction information input by the user indicates that it should be replaced with a completely different character, the identity authentication system can reject the user's correction request because the difference between the two is large.
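The text does not specify how the difference between the recognized characters and the proposed correction is measured; as a purely illustrative stand-in, the sketch below gates the correction on the edit distance between the two strings.

```python
# Illustrative correction gate; edit distance is an assumed difference measure.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def apply_correction(recognized, corrected, difference_threshold=2):
    if levenshtein(recognized, corrected) < difference_threshold:
        return corrected   # small difference: accept the user's correction
    return recognized      # large difference: reject the correction request
```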
According to an embodiment of the present invention, the step S230(S340 and S350, or S430 and S440) may include: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction or not so as to obtain a living body detection result.
In one example, the live action instructions may be generated and output by a client device (e.g., a mobile terminal including a camera, a VTM, etc.), face images (including still images or video) may be captured, and live detection may be performed using the face images. In another example, the live body action instruction may be generated by the cloud server, the live body action instruction is output by a client device (e.g., a mobile terminal including a camera, a VTM, etc.) and a face image is collected, then the collected face image is uploaded to the cloud server by the client device, and the cloud server detects a face in the face image and determines authenticity of the face. If the human face is judged to be the real human face, the person to be authenticated can be considered to be the living body, otherwise, the person to be authenticated can be considered not to be the living body. Illustratively, a trained face authenticity classifier and a fake face type classifier can be included in the cloud server. The face truth classifier can be used for judging the truth of the face, and the false face type classifier can be used for judging the type of the false face under the condition that the face is the false face.
An embodiment of performing living body detection based on the motion of a face in a face image is described below by way of example to facilitate understanding of the present embodiment.
The living body action instruction can instruct the person to be authenticated to make a corresponding living body action according to the instruction. The living body motion indicated by the living body motion command may be a single static motion (corresponding to a single gesture) or may be a variable motion. For example, the living body motion instruction may be generated and output before the face image is acquired, and is not changed during the acquisition of the face image. The living body action instruction can also be a continuous instruction sequence, namely, different instructions are continuously generated and output in the process of acquiring the face image, and the person to be authenticated is instructed to change the living body action made by the person to be authenticated along with the instruction.
The living body action may be, for example, pressing the skin of the cheek with a finger, puffing the mouth so that the cheeks bulge, or reading a passage of text aloud. While the person to be authenticated performs one or more living body actions, his or her face image can be collected, and it can be judged whether the performed living body action meets the requirement; if so, the living body detection succeeds, otherwise the living body detection fails. For example, if the living body action indicated by the living body action instruction is reading a passage of text aloud, the face images during reading can be collected, and it can be judged whether the lip motion of the face in the face images matches the lip motion corresponding to the text; if so, the living body detection succeeds.
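A hedged sketch of this instruction-and-check flow is given below; `capture_frames` and `detect_action` are assumed helpers (the patent does not define such interfaces), and the action list is illustrative.

```python
# Sketch of the action-instruction liveness flow with hypothetical helpers.
import random

ACTIONS = ["puff_cheeks", "press_cheek_with_finger", "read_text_aloud"]

def action_liveness_check(capture_frames, detect_action, num_frames=30):
    instruction = random.choice(ACTIONS)
    print(f"Please perform: {instruction}")          # output the living body action instruction
    frames = capture_frames(num_frames)              # face images collected in real time
    return detect_action(frames) == instruction      # True -> living body detection succeeds
```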
According to an embodiment of the present invention, the step S230(S340 and S350, or S430 and S440) may include: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring a face image of a person to be authenticated, which is acquired in real time; capturing a skin area image before a person to be authenticated executes a living body action and a skin area image after the person to be authenticated executes the living body action from the acquired face image; and inputting the skin area image before the living body action is executed by the person to be authenticated and the skin area image after the living body action is executed into the skin elasticity classifier to obtain a living body detection result.
When a person to be authenticated performs one or more living body actions, skin area images before and after the person to be authenticated performs the living body action may be captured from the acquired face image, respectively. Whether the person to be authenticated starts to execute the living body action can be judged according to the collected face image. For example, the face in the face image may be monitored, and the time when the person to be authenticated starts to perform the living body action may be determined according to the state change of the face. The face image collected before the starting time is used as the face image before the person to be authenticated executes the living body action. Subsequently, the face in the face image may be continuously monitored for a period of time, which is an estimated duration of the motion of the living body. And taking the face image acquired after the duration is over as the face image after the person to be authenticated executes the living body action. Skin area images before and after the living body action of the person to be authenticated is performed can be extracted from the face images before and after the living body action of the person to be authenticated is performed, respectively.
The obtained skin region image may then be input to a skin elasticity classifier, which is a pre-trained classification model. For example, if the acquired human face skin is a living body skin, the skin elasticity classifier may output 1, otherwise output 0.
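Assuming the two skin region images have already been extracted and resized to a common shape, and that the skin elasticity classifier follows a scikit-learn-style `predict()` interface (an assumption, not something stated in the text), the inference step might look like the following.

```python
# Sketch of skin-elasticity inference; the classifier interface is an assumption.
import numpy as np

def skin_elasticity_liveness(skin_before, skin_after, classifier):
    # Concatenate the "before" and "after" skin patches into one feature vector.
    features = np.concatenate([skin_before.reshape(-1), skin_after.reshape(-1)])
    return int(classifier.predict(features[None, :])[0])   # 1 = living skin, 0 = not
```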
According to the embodiment of the invention, the step of capturing, from the collected face images, the skin area image before the person to be authenticated executes the living body action and the skin area image after the living body action is executed comprises the following steps: selecting, from the collected face images, a face image before the person to be authenticated executes the living body action and a face image after the person to be authenticated executes the living body action; positioning, by using a face detection model, the face in the face image before the living body action is executed and the face in the face image after the living body action is executed; positioning, by using a face key point positioning model, the key points of the face in the face image before the living body action is executed and of the face in the face image after the living body action is executed; and performing region division on the face in the face image before the living body action is executed and on the face in the face image after the living body action is executed according to the face positions and key point positions obtained through positioning, so as to obtain the skin area image before the person to be authenticated performs the living body action and the skin area image after the person to be authenticated performs the living body action.
The extraction of the skin region image can be realized based on the existing face detection and face key point positioning algorithm. For example, a face image before a person to be authenticated performs a living body action may be respectively input into the trained face detection model and the face key point location model, and a face position (for example, coordinates of a face contour point) and a key point position (coordinates of each key point) may be respectively obtained. The key points may be any points on the face, such as the left corner of the eye, the tip of the nose, the left lip angle, etc. Of course, the key points may also be face contour points. Illustratively, the face region may be divided into a series of triangular patches according to the face position and the key point position, and the triangular patch image block located in the chin, the cheekbone, the two cheeks, etc. may be used as the face skin region to obtain the skin region image. The extraction method of the skin area image after the person to be authenticated executes the living body action is similar to the above method, and is not described again. For each face image, the region division manner, the number of the selected regions as the face skin regions, and the positions included in each region may be set as needed, which is not limited by the present invention.
In one example, the face detection model and the face keypoint localization model may be implemented using a deep neural network. The deep neural network is a network capable of learning autonomously, and the deep neural network can be used for accurately and efficiently detecting and positioning the face and key points in the face image.
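One possible way to realize the triangular-patch division described above, under the assumption that face landmarks are already available, is to triangulate the landmark points (for example with Delaunay triangulation) and keep only the triangles whose vertices all belong to skin areas such as the cheeks and chin; the index set `skin_point_indices` is hypothetical.

```python
# Sketch: divide the face into triangular patches and keep skin-area patches.
# skin_point_indices is a hypothetical set of landmark indices on cheeks/chin.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def skin_patches(image, landmarks, skin_point_indices):
    pts = np.asarray(landmarks, dtype=np.float32)
    patches = []
    for simplex in Delaunay(pts).simplices:
        if all(int(i) in skin_point_indices for i in simplex):
            x, y, w, h = cv2.boundingRect(pts[simplex].astype(np.int32))
            patches.append(image[y:y + h, x:x + w])
    return patches
```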
According to the embodiment of the present invention, the identity authentication method 200(300, 400) may further include: acquiring a sample face image, wherein the position of a face and the position of a key point of the face in the sample face image are marked; and carrying out neural network training by using the sample face image to obtain a face detection model and a face key point positioning model.
A large number (e.g., about 10000) of sample face images can be collected in advance, and the positions of a series of key points such as the canthus, the corner of the mouth, the alar part of the nose, the peak of the cheekbone and the like of the face and the positions of the face contour points can be marked in each sample face image in a manual mode. Subsequently, a machine learning algorithm (such as deep learning, or a local feature-based regression algorithm, etc.) may be used to perform neural network training using the labeled sample face image as an input, so as to obtain a desired face detection model and a face key point location model.
According to an embodiment of the invention, the identity authentication method 200(300, 400) may further comprise a training step of a skin elasticity classifier. A schematic flow chart of the training step S500 of the skin elasticity classifier according to one embodiment of the present invention is described below with reference to fig. 5.
As shown in fig. 5, the training step S500 of the skin elasticity classifier includes the following steps.
In step S510, a face image before a live action is performed by a real person and a face image after the live action is performed by the real person, and a face image before a live action is performed by a dummy person and a face image after the live action is performed by the dummy person are acquired.
In step S520, the face image before the live action is performed and the skin area image after the live action is performed by the real person are extracted as positive sample images from the face image before the live action is performed by the real person and the face image after the live action is performed.
In step S530, the face image before the live action is performed by the dummy person and the skin area image after the live action is performed by the dummy person are extracted as negative sample images from the face image before the live action is performed by the dummy person and the face image after the live action is performed.
In step S540, a classifier model is trained using the positive sample images and the negative sample images to obtain a skin elasticity classifier.
Illustratively, the training of the skin elasticity classifier may be performed off-line. The face images before and after the real person performs the specified living body action can be collected in advance, and the face images before and after the false person performs the specified living body action can be collected. The dummy person may be, for example, a photograph containing a human face, a video containing a human face, a paper mask, or a three-dimensional (3D) model of a human face, etc.
The manner of extracting the skin region image from the face image of the real person or the false person may refer to the above description of the extraction manner of the skin region image in the actual identity authentication process, and is not described herein again. Illustratively, after obtaining the positive sample image and the negative sample image, a classifier model may be trained using a statistical learning method such as a deep learning or SVM, thereby obtaining a skin elasticity classifier.
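As one of the statistical learning methods mentioned above, an SVM could be trained offline on the positive and negative sample images; the sketch below assumes each sample has already been flattened into a (before, after) feature vector, which is an assumption about preprocessing rather than something stated in the text.

```python
# Offline training sketch for the skin elasticity classifier using an SVM.
# Assumes each sample is a flattened (before, after) skin-patch feature vector.
import numpy as np
from sklearn.svm import SVC

def train_skin_elasticity_classifier(positive_samples, negative_samples):
    X = np.vstack([positive_samples, negative_samples])
    y = np.concatenate([np.ones(len(positive_samples)),
                        np.zeros(len(negative_samples))])
    classifier = SVC(kernel="rbf", probability=True)
    classifier.fit(X, y)
    return classifier
```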
According to an embodiment of the present invention, the step S230(S340 and S350, or S430 and S440) may include: acquiring a face image acquired by a binocular camera aiming at the face of a person to be authenticated under structured light irradiation; and determining whether the face of the person to be authenticated belongs to the living body according to the face image so as to obtain a living body detection result.
The above-described methods of performing living body detection according to the actions of the face or by using a skin elasticity classifier are mainly directed to application scenarios with relatively low security requirements. For application scenarios with higher security requirements, a mode of performing living body detection based on special hardware can be selected. For example, a binocular camera may be used to collect face images under structured light illumination, so that living body detection is performed using the collected face information and the structured light illumination information.
In one example, a detection parameter indicating the degree of sub-surface scattering of the structured light on the face of the person to be authenticated may be determined based on the face image acquired under structured light illumination, and then whether the face of the person to be authenticated is a living body may be determined based on the detection parameter and a predetermined parameter threshold. The sub-surface scattering degree of a false face such as a 3D mask differs from that of a real face (the stronger the sub-surface scattering, the smaller the image gradient, and thus the smaller the detection parameter). For example, the sub-surface scattering of a mask made of materials such as ordinary paper or plastic is far weaker than that of a real face, while the sub-surface scattering of a mask made of materials such as silica gel is far stronger than that of a real face. A false face and a real face can therefore be distinguished by judging the scattering degree in the image, so that mask attackers can be effectively defended against.
In another example, depth information of a face of a person to be authenticated may be obtained from a face image. In addition, a light spot pattern formed by the face of the person to be authenticated under the structured light irradiation can be obtained. Texture information of the face of the person to be authenticated can be obtained according to the light spot pattern. Subsequently, it may be determined whether the face of the person to be authenticated belongs to a living body in combination with the depth information and the texture information.
Different material structures can form different light spot patterns under the structured light. And obtaining texture information of the human face, namely the material property of the surface of the human face according to the light spot pattern. And if the texture information of the face of the person to be authenticated is found not to conform to the human skin texture distribution rule, determining that the face of the person to be authenticated does not belong to a living body, and judging the face to be authenticated as a mask attack and the like. Since the attacker can use the mask made of the artificial leather material to carry out the attack, even if the texture information of the face of the person to be authenticated conforms to the human leather texture distribution rule, the face of the person to be authenticated cannot be determined to belong to the living body, and therefore whether the face of the person to be authenticated belongs to the living body can be judged in combination with the depth information. The depth information of the face of the person to be authenticated can be obtained from the face images acquired at two different viewing angles. It should be understood that a real face is usually fluctuated, for example, the coordinate depths of the eyes and the nose are different and have a large difference, while a mask made of a material imitating the human skin is fluctuated a little and has a small difference between the coordinate depths of the eyes and the nose. Therefore, whether the face of the person to be authenticated belongs to the living body can be further judged by combining the depth information.
Therefore, in the embodiment of the invention, a binocular camera and structured light can be combined, a 3D face with a structured light pattern is collected through the binocular camera, and then living body detection is performed according to the sub-surface scattering degree of the structured light on the 3D face or the combination of the depth information of the face and a spot pattern formed by the structured light on the face.
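A toy sketch of the scattering-degree idea is given below, under the assumption (consistent with the parenthetical above) that stronger sub-surface scattering smooths fine detail and therefore lowers the mean image gradient; the band thresholds are placeholders, not values from the text.

```python
# Sketch: estimate a scattering-related parameter from the mean image gradient.
# Threshold values are placeholders chosen only for illustration.
import cv2
import numpy as np

def scattering_parameter(face_gray):
    gx = cv2.Sobel(face_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(face_gray, cv2.CV_32F, 0, 1)
    return float(np.mean(np.hypot(gx, gy)))   # smaller value -> stronger scattering

def looks_like_real_skin(face_gray, low=5.0, high=50.0):
    p = scattering_parameter(face_gray)
    return low < p < high   # too low: silicone-like mask; too high: paper/plastic-like mask
```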
According to the embodiment of the invention, the living body detection process in the identity authentication method may also have other implementations. Another living body detection mode is described below. In this living body detection mode, a real face is distinguished from a false face on a picture or a screen based on light reflection characteristics.
The following describes exemplary implementations of the living body detection step in the identity authentication method and of the living body detection module in the identity authentication apparatus. It is noted that "living body verification" and "living body detection" as described herein are synonymous and may be used interchangeably.
Fig. 6 illustrates an example electronic device 600 for implementing the identity authentication method and apparatus in accordance with another embodiment of the present invention. The electronic device 600 includes one or more processors 602, one or more memory devices 604, an input device 606, an output device 608, an image capture device 610, and a light source 612, which are interconnected via a bus system 614 and/or other form of connection mechanism (not shown). The processor 602, the storage device 604, the input device 606, the output device 608, the image capture device 610, and the bus system 614 in the electronic device 600 shown in fig. 6 are similar in structure and operation principle to the processor 102, the storage device 104, the input device 106, the output device 108, the image capture device 110, and the bus system 112 shown in fig. 1, and are not described again.
The light source 612 may be a device capable of emitting light, and may include a dedicated light source such as a light emitting diode, or may include an irregular light source such as a display screen. In case the identity authentication method and apparatus are implemented in a mobile terminal such as a smartphone, the input device 606, the output device 608 and the light source 612 may be the same display screen.
Fig. 7 shows a schematic flowchart of the living body detection step S700 (corresponding to the above-described step S230, or steps S340 and S350, or steps S430 and S440) according to one embodiment of the present invention. As shown in fig. 7, the living body detection step S700 includes the following steps.
In step S710, one or more illumination images acquired for the face of a person to be authenticated under illumination with detection light are acquired.
Illustratively, the detection light may be emitted to the face of the person to be authenticated by a light source. The light source may be controlled by the processor to emit light. For example, the light source may share other light-emitting devices (e.g., at least a partial region of a display screen, a light source in a projector) as the light source. Also for example, the light source may be a dedicated light source (e.g., one or more light emitting diodes or laser diodes arranged in a manner, such as a flash for a camera, etc.), a combination of a display screen and other types of light sources, etc.
The pattern of the detection light may include, but is not limited to, the color of the detection light, the position of the light emitting region, the intensity of the detection light, the irradiation angle of the detection light, the wavelength of the detection light, the frequency of the detection light, and the like.
For example, the pattern of the detection light may remain unchanged while the face of the person to be authenticated is illuminated, that is, the light source may illuminate the face with a single, unchanging light. In a preferred embodiment, the light source employed is the display screen of the mobile terminal. On the display screen, the color, brightness, and the like of each pixel may be controlled so that the display screen emits light exhibiting a specific pattern, such as structured light. In this case, the specific color or brightness displayed by the screen in a specific pixel region may be a specific pattern of detection light selected after optimization based on a large amount of experimental data, and under such a pattern of detection light, living body verification of the object to be authenticated can be performed quickly and accurately by a specific algorithm corresponding to that pattern. One or more illumination images may then be acquired under constant illumination of the detection light, and living body verification may be performed based on the illumination images.
Preferably, the pattern of the detection light is changed at least once during the process of illuminating the face of the person to be authenticated. In this case, the frequency of mode change of the detection light and the frequency of acquisition of the images may be controlled in coordination so that at least one illumination image may be acquired under each mode of detection light.
Preferably, the pattern of detected light is changed between each two consecutive instants. The time may be any particular point in time within a predetermined period of time. For example, the pattern of detection light may change every 1 second. The mode of the detection light is constantly changed, richer light reflection characteristic information can be obtained, and the living body verification based on the light reflection characteristic can be accurately and efficiently implemented.
Alternatively, the pattern of the detection light is randomly changed or preset in the process of irradiating the face of the person to be authenticated. In one example, the pattern of detected light is varied completely randomly. For example, in a preferred embodiment, the light source employed is a display screen of the mobile terminal. On the display screen, the color of each area can be controlled, and for each area, a certain RGB value is randomly selected in a preset RGB value range every time to be used as the color value of the area for display. The division of the regions may be arbitrarily set, for example, each region may include one or more pixels, and the two different regions may be the same or different in size.
In another example, the pattern of detecting light may be set in advance. For example, the detection light may be set to be continuously irradiated for 10 seconds, one pattern may be changed every second, and the color, position, intensity, and the like of the detection light emitted every time are set in advance. During the in-vivo authentication, the light source may sequentially emit 10 patterns of detection light in a preset manner. The preset pattern of detecting light may be a pattern obtained based on prior experience and effective for in vivo authentication, which is advantageous for improving accuracy and efficiency of in vivo authentication.
For example, the pattern of the detection light irradiated to the face of the person to be authenticated can be dynamically changed by dynamically changing the light emission color of the detection light. For another example, the pattern of the detection light irradiated to the face of the person to be authenticated may also be dynamically changed by dynamically changing the position of the light emitting region of the detection light (i.e., changing the position of the detection light). For another example, it is also possible to dynamically change the pattern of the detection light irradiated to the face of the person to be authenticated by dynamically changing the light emission color of the detection light and the position of the light emission region of the detection light at the same time.
For example, the position of the light emitting region of the detection light may be dynamically changed by changing the position of the light source, which may change the position at which the detection light is irradiated to the face of the person to be authenticated. For another example, the position of the face of the person to be authenticated irradiated with the detection light may be dynamically changed by changing the angle of the outgoing light of the light source.
In a preferred embodiment, the light source used is a display screen of the mobile terminal, and the image acquisition device is a camera (e.g., a front camera) of the mobile terminal located on the same side as the display screen. Compared with the scheme of adopting an additional special light source, the scheme can be realized by adopting the existing mobile terminal such as a mobile phone and the like, is not limited by external conditions, and can be better applied to application scenes such as remote account opening and the like through a personal mobile terminal.
Further, in a further preferred version of the above preferred embodiment, the pattern of light employed is a combination of the color of the light and the position of the light-emitting region, for example: light of different colors is emitted at different positions of the display screen at the same time, or light of the same color is emitted at different positions of the display screen at the same time but light of different colors is emitted at different times, and the like. Compared with schemes that vary other attributes of the light such as its intensity, the scheme combining the color of the light with the position of the light-emitting region not only achieves a better living body detection effect, but also reduces the stimulation of the light to the human eye, thereby improving the user experience.
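As a small illustrative sketch of the random-pattern idea (the region count, RGB range and sequence length are assumptions, not values from the text), a sequence of detection-light patterns for a screen split into regions could be generated as follows.

```python
# Sketch: random detection-light patterns for a screen divided into regions.
# The number of regions, RGB range and sequence length are assumptions.
import random

def random_light_pattern(num_regions=4, rgb_min=0, rgb_max=255):
    return [tuple(random.randint(rgb_min, rgb_max) for _ in range(3))
            for _ in range(num_regions)]

# For example, one pattern per second for 10 seconds of illumination.
pattern_sequence = [random_light_pattern() for _ in range(10)]
```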
In the case where the face of the person to be authenticated is irradiated with the detection light, an image of the face of the person to be authenticated under irradiation with the detection light may be acquired by using an image acquisition device (e.g., the image acquisition device 610 of the electronic apparatus 600) to obtain the illumination image. The image capture device may be controlled by the processor to capture an image. The image acquisition device transmits the one or more illumination images to a processor of an identity authentication system for in vivo verification. For example, the number of illumination images collected under the illumination of the same pattern of light may be one or more, and the present invention is not limited thereto. As will be appreciated by those skilled in the art, in-vivo authentication is mainly based on human face authentication, and therefore, according to the embodiments herein, when acquiring an illumination image and then an action image and a real-time image, an image including a human face is acquired for in-vivo authentication.
Illustratively, the illumination image may be transmitted to the electronic device 600 by a client device (e.g., a mobile terminal including a camera, a remote Video Teller Machine (VTM), etc.) for processing by the processor 602 of the electronic device 600, or may be captured by an image capture device 610 (e.g., a camera) included in the electronic device 600 and transmitted to the processor 602 for processing.
In step S720, it is determined whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated represented in the one or more illumination images to obtain an illumination living body verification result.
Human skin, such as the skin of a human face, is a diffusely reflective material, and the human face is three-dimensional. In contrast, a display screen such as a liquid crystal display (LCD) or an organic light emitting diode (OLED) display can be regarded as a self-luminous object that generally also includes a partial specular reflection component, while a photograph or the like is generally planar and likewise generally includes a partial specular reflection component; whether a display screen or a photograph, its reflection characteristic as a whole is uniform and lacks the three-dimensional characteristic of a human face. The light reflection characteristic of a face therefore differs from that of a display screen or a photograph, so it is possible to judge whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of that face.
In step S730, it is determined whether the person to be authenticated passes the living body verification based on at least the illumination living body verification result to obtain a living body detection result.
In one example, the final live body detection result may be determined directly based on the illumination live body verification result, i.e., if the illumination live body verification result indicates that the face of the person to be authenticated belongs to a live body, it is determined that the person to be authenticated passes the live body verification, and the live body detection result may be exemplarily 1; if the illumination living body verification result indicates that the face of the person to be authenticated does not belong to a living body, it is determined that the person to be authenticated does not pass the living body verification, and the living body detection result may be exemplarily 0. The living body detection mode has the advantages of small calculation amount and high efficiency. In another example, the illumination living body verification result may be considered in combination with other living body verification manners, together with other living body verification results obtained based on the other living body verification manners, to finally determine whether the person to be authenticated passes the living body verification. The accuracy of the living body detection mode is high.
As described above, since the light reflection characteristic of a human face is different from the light reflection characteristic of an object such as a display screen or a photograph, a real human face and a human face played back on the screen or a human face on the photograph can be effectively distinguished based on the light reflection characteristic. Therefore, the identity authentication method and the identity authentication device adopting the living body detection method can effectively defend screen or photo attackers, so that the safety and the user experience of the identity authentication system can be improved.
Illustratively, the identity authentication method employing the liveness detection method according to an embodiment of the present invention may be implemented in a device, apparatus, or system having a memory and a processor.
The identity authentication method according to the embodiment of the invention may be deployed at an image acquisition end, for example, at an image acquisition end of a financial system such as a bank management system or at a mobile terminal such as a smart phone or a tablet computer. Alternatively, the identity authentication method according to the embodiment of the present invention may also be distributively deployed at the server side (or cloud side) and the client side. For example, the personal identification information may be acquired at the client, the light is emitted, and an image of the face of the person to be authenticated is acquired, the client transmits the acquired personal identification information and the acquired image to the server (or cloud), the server (or cloud) performs authenticated information judgment and living body detection to obtain an identity authentication result, and the identity authentication result is returned to the client. The server has larger data operation capacity relative to the client, the identity authentication is carried out by the server, the authentication speed can be improved, and the user experience is improved.
Although in-vivo authentication based on light reflection characteristics can defend against screen or photo attacks, there are many attack modes of an attacker, and some other attack modes may break through in-vivo authentication based on light reflection characteristics, such as three-dimensional simulation mask attacks. In the case of a mask attack, the light reflection characteristic-based in-vivo authentication method may not be well protected. Therefore, in order to further improve the biometric authentication method and improve the security of the biometric authentication, the biometric detection may be performed in combination with other biometric authentication methods in addition to the biometric authentication based on the light reflection characteristics. An exemplary implementation is described below.
Fig. 8 shows a schematic flowchart of the living body detection step S800 (corresponding to the above-described step S230, or steps S340 and S350, or steps S430 and S440) according to another embodiment of the present invention. Steps S810 to S830 in the living body detection step S800 shown in fig. 8 correspond to steps S710 to S730 of the living body detection step S700 shown in fig. 7, and those skilled in the art can understand steps S810 to S830 shown in fig. 8 with reference to the description of fig. 7, which is not repeated here. According to the present embodiment, the living body detection step S800 may further include steps S840 to S870, and step S830 may include: determining whether the person to be authenticated passes the living body verification based on the illumination living body verification result and the action living body verification result, so as to obtain a living body detection result.
In step S840, an action instruction is output, where the action instruction is used to instruct the person to be authenticated to perform a corresponding action.
Illustratively, the action instructions may be output randomly or according to a predetermined rule. The action instructions may comprise a single instruction, or a sequence of instructions consisting of a series of instructions. For example, the action instructions may indicate that the person to be authenticated nod his head, shake his head, blink his mouth, open his mouth, and so on.
In step S850, a plurality of motion images acquired for the face of the person to be authenticated are acquired.
The method can acquire images aiming at the face of a person to be authenticated while outputting the action instruction or within a period of time after outputting the action instruction, and obtain a plurality of action images. Illustratively, the plurality of motion images may be consecutive video frames. The motion image may also be captured by the image capture device 110 described above, or by other image capture devices.
In step S860, the action performed by the person to be authenticated is detected based on the plurality of action images.
For example, the face detection and the key point recognition may be performed in each motion image, and the motion performed by the face may be determined based on the face contour and/or the face key point in the plurality of motion images, for example, the motion performed by the face may be determined by recognizing the variation trend of the face contour and/or the face key point in the acquired consecutive plurality of motion images. Subsequently, it can be determined whether the action performed by the human face is consistent with the action indicated by the action instruction.
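Purely as an illustration of judging an action from landmark trends (the patent does not prescribe any particular rule), a nod could be approximated as a sufficiently large downward-then-upward excursion of the nose tip across the action images; `get_nose_tip_y` is an assumed helper.

```python
# Hypothetical nod detector based on the nose-tip trajectory across frames.
# get_nose_tip_y is an assumed landmark helper; the amplitude is illustrative.
def detected_nod(action_images, get_nose_tip_y, min_amplitude=10.0):
    ys = [get_nose_tip_y(img) for img in action_images]
    amplitude = max(ys) - min(ys)
    # In image coordinates the nose tip moves to larger y when the head drops,
    # so a nod shows its maximum somewhere in the middle of the sequence.
    moved_down_then_up = ys.index(max(ys)) not in (0, len(ys) - 1)
    return amplitude > min_amplitude and moved_down_then_up
```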
In step S870, it is determined whether the face of the person to be authenticated belongs to a living body according to the motion detection result and the motion instruction to obtain a motion living body verification result.
Illustratively, if the actions performed by the person to be authenticated in the plurality of action images are consistent with the actions indicated by the action instructions, the face of the person to be authenticated is determined to belong to the living body, and if the actions performed by the person to be authenticated in the plurality of action images are inconsistent with the actions indicated by the action instructions or the person to be authenticated does not perform any action in the plurality of action images (i.e., the action of the person to be authenticated is not detected), the face of the person to be authenticated is determined not to belong to the living body. Of course, the above-described manner is merely an example, and whether the action living body verification passes or not may be determined in other determination manners, for example, if the person to be authenticated performs a plurality of actions in a plurality of action images, and the plurality of actions includes an action that coincides with the action indicated by the action instruction, it is determined that the face of the person to be authenticated belongs to the living body.
Illustratively, in step S830, if both the illumination living body verification result and the action living body verification result indicate that the face of the person to be authenticated belongs to a living body, it is determined that the person to be authenticated passes the living body verification, and if either one of the illumination living body verification result and the action living body verification result indicates that the face of the person to be authenticated does not belong to a living body, it is determined that the person to be authenticated does not pass the living body verification. Of course, the above-described manner is merely an example, and there may be other determination manners as to whether the living body verification is passed.
It should be noted that the order of execution of the above-described action-based living body verification steps (steps S840 to S870) and the light reflection characteristic-based living body verification steps (steps S810 to S820) may be arbitrarily set, and the present invention is not limited thereto.
The living body detection method including the action-based living body verification step may be independently performed by an image capturing side, for example, an image capturing side of a financial system such as a bank management system or a mobile terminal such as a smart phone, a tablet computer, or the like. Alternatively, the living body detection method including the action-based living body verification step may also be cooperatively performed by the server side (or cloud side) and the client side. For example, an action instruction may be generated at a server or a client, an action image of a person to be authenticated is collected by the client, the collected action image is transmitted to the server (or a cloud), a living body verification based on an action is performed by the server (or the cloud), and a verification result is returned to the client.
The action-based in-vivo verification method can defend attack methods such as mask attacks, and can effectively defend various attacks by combining with the in-vivo verification method based on the light reflection characteristic, so that the safety of an identity authentication system or a similar system adopting the in-vivo detection method is further ensured, the information safety and the rights and interests of users are also protected, and the method has extremely wide application value and market prospect.
According to the embodiment of the present invention, step S870 may include: if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is detected based on a plurality of action images acquired within a period of time not longer than a first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is in accordance with the action indicated by the action instruction, is not detected based on a plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
Randomly outputting action instructions (such as characters or voice instructions of 'please nod head' and 'please open mouth') to indicate the person to be authenticated to execute corresponding actions (such as nod head and open mouth), and detecting key points of the face area to judge whether the actions executed by the person to be authenticated are consistent with the output action instructions. If the fact that the action executed by the person to be authenticated is consistent with the output action instruction is detected within the first preset time, determining that the face of the person to be authenticated belongs to a living body; if the action executed by the person to be authenticated is detected to be inconsistent with the output action instruction within the first preset time, or the action of the person to be authenticated is not detected within the first preset time, it can be determined that the face of the person to be authenticated does not belong to the living body.
According to the embodiment of the present invention, the identity authentication method (200, 300 or 400) may further include: counting once for each execution of steps S840 to S870 to obtain the number of action verifications; after step S870, the identity authentication method (200, 300 or 400) may further include: if the action living body verification result shows that the face of the person to be authenticated does not belong to a living body, outputting first error information and judging whether the number of action verifications reaches a first time threshold; if the number of action verifications reaches the first time threshold, turning to step S830; and if the number of action verifications does not reach the first time threshold, returning to step S840, or returning to step S810 in the case that step S810 is executed before step S840, wherein the first error information is used for prompting that the living body verification for the person to be authenticated has failed.
Illustratively, a counter may be provided for counting the number of executions of the action-based living body verification step (steps S840 to S870), the counter being incremented by one each time the action-based living body verification step is executed. The output of the counter is the number of action verifications. After the entire living body detection step (living body detection step S800) is terminated, the counter may be cleared.
If the current action living body verification result shows that the face of the person to be authenticated does not belong to the living body, first error information can be output. The first error information may prompt that the living body verification for the face of the person to be authenticated fails and prompt that the living body verification is performed again. If the number of executions of the motion-based living body verification step (the number of motion verifications) at this time has not reached the preset first-time threshold value, it may be attempted to re-execute the motion-based living body verification step. Alternatively, if the light reflection characteristic-based living body verification step (steps S810-S820) is performed before the motion-based living body verification step, it is possible to directly return to step S810, i.e., re-perform the light reflection characteristic-based living body verification step and the motion-based living body verification step again to improve the accuracy of the living body verification.
The first time threshold may be any suitable value, which may be set according to the requirement, and the present invention does not limit this.
In an actual living body verification process, various unexpected situations may occur: for example, the user does not make the specified action in time, the acquired image is not clear enough, or the face detection result is not accurate enough, and such situations may cause the user to be mistakenly identified as a non-living body. Therefore, to balance user experience and system security, a threshold number of times may be set, allowing the user to attempt living body verification multiple times within a reasonable range. If the user has still not been correctly identified as a living body when the threshold number of times is reached, it can be determined that the user does not belong to a living body.
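A minimal sketch of this retry policy is shown below; `run_action_verification` stands for one execution of the action-based verification steps, and the default threshold value is an assumption.

```python
# Sketch of the retry policy around the action-based living body verification.
# run_action_verification is an assumed callable for one verification attempt.
def action_liveness_with_retries(run_action_verification, first_time_threshold=3):
    for _ in range(first_time_threshold):
        if run_action_verification():      # action matched the instruction
            return True
        print("Living body verification failed, please try again.")   # first error information
    return False                           # threshold reached without success
```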
According to the embodiment of the present invention, before the step S710 (or S810), the identity authentication method (200, 300, or 400) may further include: step S708: and judging whether the image acquisition condition of the person to be authenticated meets a preset requirement, and if so, turning to the step S710 (or S810), wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle between the person to be authenticated and the image acquisition device.
Before executing the living body verification step based on the light reflection characteristic or other living body verification steps, the image acquisition condition of the person to be authenticated can be firstly detected to judge whether the image acquisition condition meets the preset requirement. The subsequent living body verification step based on the light reflection characteristic or other living body verification steps can be executed only under the condition that the image acquisition condition of the person to be authenticated meets the preset requirement, so that the quality of images (including illumination images, action images and the like) for living body verification can be ensured, the face in the images can be detected correctly, and the accuracy of the living body verification can be improved.
According to an embodiment of the present invention, before step S708 or simultaneously with step S708, the identity authentication method (200, 300, or 400) may further include: step S706: outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device directly and to move close to the image acquisition device.
The first prompt may be output in any suitable manner. Exemplarily, step S706 may include: the first prompt information is output in one or more of a voice form, an image form and a text form. For example, a text "please face the screen" (facing the screen is equivalent to facing the image capturing device) may be output on the display screen of the mobile terminal, or a prompt "please face the screen" may be issued through a speaker of the mobile terminal.
Illustratively, the identity authentication method may be implemented by an Application (APP) installed on an electronic device such as a mobile terminal. When the user opens the application and then enters a living body detection stage, the first prompt information can be output to prompt the user to keep a proper relative position relation with the mobile terminal, so that a camera of the mobile terminal can conveniently acquire an ideal face image for living body verification. In one example, the first prompt message may be continuously or intermittently output until the image capturing condition of the person to be authenticated satisfies a preset requirement.
The first prompt information is output, so that the user can be guided to adjust the relative position relationship between the identity authentication device (mainly an image acquisition device) and the user in time, and meanwhile, the interaction between the user and the system can also improve the user experience.
According to an embodiment of the present invention, step S708 may include: acquiring a real-time image acquired by a face of a person to be authenticated; outputting a preset area for calibrating an image acquisition condition and a human face area in a real-time image in real time for displaying; and judging whether the image acquisition condition meets a preset requirement or not according to the face area detected in the real-time image, if the face area is located in the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the real-time image is not greater than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
In one example, whether the image acquisition condition of the person to be authenticated meets the preset requirement may be determined according to an image acquired for the face of the person to be authenticated. For example, the mobile terminal may acquire a real-time image through a camera and perform face detection. Face detection may obtain a face region, which is an image block containing a face. Whether the image acquisition condition of the person to be authenticated meets the preset requirement can be judged according to the position of the face area in the real-time image and the proportion of the face area in the real-time image. For example, a preset area may be defined in the real-time image. The position of the face of the person to be authenticated in the image acquisition area of the image acquisition device can be defined through the preset area. The size of the face area can reflect the distance and the relative angle between the person to be authenticated and the image acquisition device. The preset area and the first preset proportion can be set according to needs, and the invention does not limit the preset area and the first preset proportion.
For example, if the face area of the person to be authenticated is located in the preset area but the proportion of the face area in the real-time image is smaller than the first preset proportion (for example, two thirds), the person to be authenticated may be tilted too much relative to the image capturing device and/or positioned too far away from it; in this case, the image capturing condition may be considered as not meeting the preset requirement.
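Purely for illustration, a minimal sketch of this check is given below. It assumes that face detection returns the face area as an axis-aligned rectangle (x, y, width, height) in image coordinates; the helper names and the two-thirds default are assumptions made for this example and are not part of the disclosure.

    # Illustrative sketch only: decide whether the image acquisition condition meets
    # the preset requirement from the detected face area (rectangles are (x, y, w, h)).
    def box_area(box):
        _, _, w, h = box
        return w * h

    def is_inside(inner, outer):
        ix, iy, iw, ih = inner
        ox, oy, ow, oh = outer
        return ix >= ox and iy >= oy and ix + iw <= ox + ow and iy + ih <= oy + oh

    def capture_condition_ok(face_box, preset_box, image_size, first_preset_proportion=2 / 3):
        """True if the face area lies in the preset area and occupies a large enough
        proportion of the real-time image."""
        image_w, image_h = image_size
        face_ratio = box_area(face_box) / float(image_w * image_h)
        return is_inside(face_box, preset_box) and face_ratio > first_preset_proportion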
Illustratively, the identity authentication method (200, 300 or 400) may further include: and if the proportion of the face area in the real-time image is not more than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Alternatively, the first acquisition prompt information may be output in one or more of a voice form, an image form, and a text form. For example, if the proportion of the face area in the real-time image is found to be not greater than the first preset proportion, a prompt message such as "please get close to the camera" (or "please get close to the mobile phone") may be displayed on the display screen.
According to an embodiment of the present invention, step S708 may include: acquiring a real-time image acquired for the face of the person to be authenticated; outputting, in real time, a preset area for calibrating the image acquisition condition and the face area in the real-time image for display; and judging whether the image acquisition condition meets a preset requirement according to the face area detected in the real-time image, wherein if the face area is located in the preset area and the proportion of the face area in the preset area is greater than a second preset proportion, it is determined that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the preset area is not greater than the second preset proportion, it is determined that the image acquisition condition does not meet the preset requirement.
In one example, whether the image acquisition condition of the person to be authenticated meets the preset requirement may be determined according to an image acquired for the face of the person to be authenticated. For example, the mobile terminal may acquire a real-time image through a camera and perform face detection. Face detection may obtain a face region, which is an image block containing a face. Whether the image acquisition condition of the person to be authenticated meets the preset requirement can be judged according to the position of the face area in the real-time image and the proportion of the face area in the preset area. For example, a preset area may be displayed on the display screen, and the relative position between the person to be authenticated and the screen can be defined by the preset area. The size of the face region can reflect the distance and the relative angle between the person to be authenticated and the image acquisition device; for example, as the face moves from far away toward the screen, the face region displayed on the screen in real time grows from small to large, and when the face is close enough to the screen, the size of the displayed face region meets the preset condition. Of course, the size of the face region displayed in real time may alternatively be updated only when the face is close enough to the screen to meet the preset condition, which is not limited herein. The preset area and the second preset proportion can be set according to needs, and the invention does not limit the preset area and the second preset proportion.
For example, if the face area of the person to be authenticated is located in the preset area but the proportion of the face area in the preset area is smaller than the second preset proportion (for example, two thirds), the person to be authenticated may be tilted too much relative to the image capturing device and/or positioned too far away from it; in this case, the image capturing condition may be considered as not meeting the preset requirement.
Illustratively, the identity authentication method (200, 300 or 400) may further include: and if the proportion of the face area in the preset area is not more than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Alternatively, the second acquisition prompt information may be output in one or more of a voice form, an image form, and a text form. For example, if the proportion of the face area in the preset area is found to be not greater than the second preset proportion, a prompt message such as "please get close to the camera" (or "please get close to the mobile phone") may be displayed on the display screen.
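For illustration of this second variant, the sketch below takes the proportion with respect to the preset area instead of the whole real-time image. It reuses the box_area and is_inside helpers from the earlier sketch; the function name and the two-thirds default are again assumptions.

    # Illustrative sketch only: here the proportion is measured against the preset
    # area rather than the whole real-time image (second preset proportion).
    def capture_condition_ok_v2(face_box, preset_box, second_preset_proportion=2 / 3):
        face_ratio = box_area(face_box) / float(box_area(preset_box))
        return is_inside(face_box, preset_box) and face_ratio > second_preset_proportion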
According to the embodiment of the present invention, the identity authentication method (200, 300 or 400) may further include: judging the relative position relationship between the face region and the preset region in real time; and outputting third acquisition prompt information in real time based on the relative position relationship between the face region and the preset region, so as to prompt the person to be authenticated to change the relative position relationship between himself or herself and the image acquisition device so that the face region approaches the preset region.
For example, when the identity authentication method and apparatus according to the embodiments of the present invention are implemented on a mobile terminal, the face region (i.e., the image block containing a face extracted from the real-time image) and an icon representing the preset region (i.e., the preset region displayed on the screen in real time) may be displayed on the display screen of the mobile terminal in real time. Displaying the face region and the icon representing the preset region in real time helps the user to know the current image acquisition condition and how far it deviates from the preset requirement, so that the user can conveniently adjust his or her own posture or that of the image acquisition device (or of the identity authentication device including the image acquisition device) and enter the subsequent living body verification stage as soon as possible. Accordingly, displaying the face region and the icon representing the preset region in real time may improve the user experience and the efficiency of the living body detection.
In addition, the third acquisition prompt information can be output to prompt the person to be authenticated to change the relative position relationship between himself or herself and the image acquisition device so that the face region approaches the preset region. Alternatively, the third acquisition prompt information may be output in one or more of a voice form, an image form, and a text form. For example, if the face region is found not to be located within the preset region, a prompt message such as "please get close to the center of the circle" (the preset region being displayed as a circle icon on the display screen) may be displayed on the display screen. In addition, an arrow pointing from the face region to the preset region can be displayed on the display screen, so that the user knows how to move himself or herself or the image acquisition device to bring the face region into the preset region as soon as possible. The text message such as "please get close to the center of the circle" and the image message such as the arrow may be displayed simultaneously or alternatively.
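As a purely illustrative sketch of how the third acquisition prompt information might be derived, the following compares the centers of the face region and the preset region and turns the offset into an arrow and a text hint; the prompt wording, the tolerance value and the function names are assumptions and not part of the disclosure.

    # Illustrative sketch only: derive a hint (arrow plus text) telling the user how
    # to move so that the face region approaches the preset region.
    def region_center(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    def third_acquisition_prompt(face_box, preset_box, tolerance=10):
        fx, fy = region_center(face_box)
        px, py = region_center(preset_box)
        dx, dy = px - fx, py - fy
        if abs(dx) <= tolerance and abs(dy) <= tolerance:
            return None  # the face region is already close to the preset region
        # The arrow (dx, dy) points from the face region toward the preset region.
        return {"text": "please get close to the center of the circle", "arrow": (dx, dy)}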
According to an embodiment of the present invention, step S708 may include: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state or not according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, and otherwise, determining that the image acquisition condition does not meet the preset requirement.
For example, when the identity authentication method according to the embodiment of the present invention is applied in a mobile terminal scenario, the attitude information of the image capture device (i.e., a camera of the mobile terminal) may be measured by devices such as a gyroscope sensor and/or an acceleration sensor built into the mobile terminal. When the mobile terminal is held vertically, the image acquisition device is also in a vertical state, and in this case an ideal face image can be acquired. Therefore, whether the image capturing condition of the person to be authenticated satisfies the preset requirement can be determined based on the posture of the image capturing device.
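A minimal sketch of such a posture check is given below, assuming a three-axis acceleration reading in which gravity acts mainly along the screen's vertical axis when the terminal is held upright; the axis convention, the function name and the tolerance angle are assumptions made for illustration.

    import math

    # Illustrative sketch only: judge whether the device (and thus its camera) is held
    # roughly vertically from an accelerometer reading (ax, ay, az).
    def is_vertically_placed(ax, ay, az, max_tilt_deg=20.0):
        gravity = math.sqrt(ax * ax + ay * ay + az * az)
        if gravity == 0.0:
            return False
        tilt_deg = math.degrees(math.acos(min(1.0, abs(ay) / gravity)))
        return tilt_deg <= max_tilt_deg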
According to the embodiment of the present invention, the identity authentication method (200, 300 or 400) may further include: counting once each time steps S710 to S720 (or steps S810 to S820) are performed, so as to obtain the number of illumination verification times; after step S720 (or S820), the identity authentication method (200, 300 or 400) may further include: if the illumination living body verification result shows that the face of the person to be authenticated does not belong to a living body, outputting second error information, and judging whether the number of illumination verification times reaches a second time threshold value; if so, turning to step S230 (or S330), and if not, returning to step S208 or to step S210 (or S310), wherein the second error information is used for prompting that the living body verification for the person to be authenticated has failed.
Similarly to the action-based living body verification step, for the living body verification step based on the light reflection characteristic (steps S710 to S720 shown in fig. 7 or steps S810 to S820 shown in fig. 8), if it is determined that the face of the person to be authenticated does not belong to a living body, the living body verification step based on the light reflection characteristic may be re-executed; the principle and advantages are similar to those of the action-based living body verification step and will not be described again here.
Exemplarily, in the case that the identity authentication method includes the above step S708, the living body verification step based on the light reflection characteristic may be re-executed starting from step S708.
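The retry logic described above can be sketched as follows; the callables stand in for the corresponding steps of the method, and the default threshold of three attempts is an arbitrary example, not a value given in the disclosure.

    # Illustrative sketch only: repeat the reflection-based liveness verification until
    # it passes or the number of illumination verification times reaches the threshold.
    def reflection_liveness_with_retries(capture_images, verify_liveness, report_error,
                                         second_time_threshold=3):
        """capture_images, verify_liveness and report_error are caller-supplied callables
        standing in for steps S710, S720 and the output of the second error information."""
        for _ in range(second_time_threshold):
            images = capture_images()        # step S710: images under the detection light
            if verify_liveness(images):      # step S720: reflection-based verification
                return True
            report_error("living body verification failed")  # second error information
        return False  # threshold reached; proceed to the final liveness decision step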
According to the embodiment of the present invention, before step S710 (or S810) or during the process of performing step S710 (or S810) and step S720 (or S820), the identity authentication method (200, 300, or 400) may further include: and outputting second prompt information, wherein the second prompt information is used for prompting that the person to be authenticated keeps still within a second preset time.
Illustratively, the second preset time may be the execution time of the living body verification steps based on the light reflection characteristic (steps S710-S720 shown in fig. 7 or steps S810-S820 shown in fig. 8). When the living body verification step based on the light reflection characteristic is executed, that is, when the detection light is used to irradiate the face of the person to be authenticated, the person to be authenticated can be prompted to keep still during this period, so that the image acquisition effect and the living body verification result are not adversely affected. For example, if the person to be authenticated moves within the second preset time such that the image capturing condition of the person to be authenticated no longer meets the preset requirement, the method may return to step S706 or step S708, and one or more of the following steps may be executed again: judging the image acquisition condition, outputting the first prompt information, outputting the various acquisition prompt information, and the like.
Illustratively, the second prompt information may be countdown information corresponding to the second preset time. Alternatively, the countdown information may be implemented in one or more of text, moving images, and voice. The countdown information helps the user to know the living body detection progress and may improve the user's interactive experience.
An implementation flow of the living body detecting step according to the embodiment of the present invention is described below with reference to fig. 9. The application scenario shown in fig. 9 is a mobile terminal.
As shown in fig. 9, first, the text "please face the screen" is displayed on the display screen of the mobile terminal to prompt the user to face the screen, and at the same time, an icon representing the preset area (indicated by a circle) and the face area detected from the real-time image are displayed on the display screen. When the user changes the position and/or posture of his or her face and/or of the mobile terminal, the text "please face the screen" and the icon representing the preset area can continue to be displayed and remain unchanged, whereas the size and the position of the face area may change, so the constantly changing face area is displayed in real time for the user to check. Subsequently, when the image capturing condition of the user satisfies the preset requirement, the next stage, i.e., the living body verification step based on the light reflection characteristic, may be entered.
During the execution of the living body verification step based on the light reflection characteristics, the text "please keep still" may be displayed on the display screen (as shown in the 2nd and 3rd images of fig. 9) to prompt the user to keep still, while countdown information is also displayed on the display screen. In the 3rd image shown in fig. 9, the countdown information is represented by a colored progress bar marked on the icon (i.e., the circle) representing the preset area.
When the living body verification step based on the light reflection characteristic is completed, the action-based living body verification step may be started. As shown in the 4th image of fig. 9, a text such as "please nod" is displayed on the display screen to instruct the user to perform the corresponding action.
Finally, the final result of the living body detection, such as the text "living body verification passed", is output on the display screen.
According to another aspect of the present invention, an identity authentication apparatus is provided. Fig. 10 shows a schematic block diagram of an identity authentication device 1000 according to one embodiment of the present invention.
As shown in fig. 10, the identity authentication apparatus 1000 according to the embodiment of the present invention includes an information acquisition module 1010, an authenticated information judgment module 1020, a living body detection module 1030, and an identity determination module 1040. The various modules may perform the various steps/functions of the identity authentication method described above in connection with fig. 2-9, respectively. Only the main functions of the components of the identity authentication device 1000 will be described below, and details that have been described above will be omitted.
The information obtaining module 1010 is configured to obtain personal identification information of a person to be authenticated. The information acquisition module 1010 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The authenticated information determining module 1020 is configured to determine whether the personal identification information is authenticated information to obtain an information authentication result. Authenticated information determination module 1020 may be implemented by processor 102 in the electronic device shown in fig. 1 executing program instructions stored in storage 104.
The living body detection module 1030 is used for performing living body detection on a person to be authenticated to obtain a living body detection result. The liveness detection module 1030 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The identity determination module 1040 is configured to determine whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result. The identity determination module 1040 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
FIG. 11 shows a schematic block diagram of an identity authentication system 1100 in accordance with one embodiment of the present invention. The identity authentication system 1100 includes an image capture device 1110, a storage device 1120, and a processor 1130.
The image capturing device 1110 is used to capture an image of a person to be authenticated, such as a face image, a certificate image, and the like. The image capture device 1110 is optional, and the identity authentication system 1100 may omit it; in that case, an image for identity authentication may be captured by another image capturing apparatus and transmitted to the identity authentication system 1100.
The storage 1120 stores computer program instructions for implementing the corresponding steps in the identity authentication method according to an embodiment of the present invention.
The processor 1130 is configured to run the computer program instructions stored in the storage device 1120 to perform the corresponding steps of the identity authentication method according to the embodiment of the present invention, and is configured to implement the information acquisition module 1010, the authenticated information judgment module 1020, the living body detection module 1030, and the identity determination module 1040 in the identity authentication device according to the embodiment of the present invention.
In one embodiment, the computer program instructions, when executed by the processor 1130, are for performing the steps of: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
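As a purely illustrative outline of these four steps (not the actual program instructions), the sketch below combines the two results by simple conjunction; the patent leaves the exact combination rule open ("at least according to"), and every function name here is a caller-supplied placeholder rather than part of the disclosure.

    # Illustrative sketch only: the four steps performed by the stored program
    # instructions, with caller-supplied callables standing in for each step.
    def authenticate_identity(acquire_info, is_authenticated_info, detect_liveness):
        personal_id_info = acquire_info()                      # acquire personal identification information
        info_result = is_authenticated_info(personal_id_info)  # information authentication result
        liveness_result = detect_liveness()                    # living body detection result
        # One possible decision rule: the identity is legal only if both results are positive.
        return bool(info_result) and bool(liveness_result)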
Furthermore, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor are used for executing the corresponding steps of the identity authentication method according to an embodiment of the present invention, and for implementing the corresponding modules in the identity authentication apparatus according to an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the computer program instructions, when executed by a computer or a processor, may cause the computer or the processor to implement the respective functional modules of the identity authentication apparatus according to the embodiment of the present invention, and/or may perform the identity authentication method according to the embodiment of the present invention.
In one embodiment, the computer program instructions are operable to perform the steps of: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
The modules in the identity authentication system according to the embodiment of the present invention may be implemented by a processor of an electronic device implementing identity authentication according to the embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer readable storage medium of a computer program product according to the embodiment of the present invention are run by a computer.
According to another aspect of the present invention, an identity authentication apparatus is provided, which includes an information acquisition device, a processor and a memory, wherein the information acquisition device is configured to receive initial information of a person to be authenticated; the memory stores computer program instructions which, when executed by the processor, perform the following steps: acquiring personal identification information of the person to be authenticated, wherein the personal identification information is the initial information or is obtained based on the initial information; judging whether the personal identification information is authenticated information to obtain an information authentication result; performing living body detection on the person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result.
Illustratively, the information acquisition device includes an image acquisition device and/or an input device.
Illustratively, the identity authentication apparatus may further include: the image acquisition device is used for acquiring a face image of a person to be authenticated; the processing device is further used for carrying out living body detection by utilizing the face image so as to obtain a living body detection result.
Illustratively, the identity authentication apparatus may further include: the light source is used for emitting detection light to the face of a person to be authenticated; the image acquisition device is further used for acquiring one or more illumination images of the face of the person to be authenticated under the irradiation of the detection light as the face image; the processing device is further used for determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in one or more illumination images so as to obtain an illumination living body verification result, and determining whether the person to be authenticated passes through living body verification or not based on at least the illumination living body verification result so as to obtain a living body detection result.
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising: the image acquisition device is used for acquiring personal identification information of a person to be authenticated; and the processing device is used for judging whether the personal identification information is authenticated information so as to obtain an information authentication result, carrying out living body detection on the person to be authenticated so as to obtain a living body detection result, and determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result.
Exemplarily, the image acquisition device is further used for acquiring a face image of the person to be authenticated; the processing device is further used for carrying out living body detection by utilizing the face image so as to obtain a living body detection result.
Illustratively, the identity authentication apparatus may further include: the light source is used for emitting detection light to the face of a person to be authenticated; the image acquisition device is further used for acquiring one or more illumination images of the face of the person to be authenticated under the irradiation of the detection light as the face image; the processing device is further used for determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in one or more illumination images so as to obtain an illumination living body verification result, and determining whether the person to be authenticated passes through living body verification or not based on at least the illumination living body verification result so as to obtain a living body detection result.
Illustratively, the identity authentication method (200, 300 or 400) may be implemented on a single, standalone identity authentication device (e.g., a mobile terminal). The input device may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. In the case that the personal identification information is one or more of a certificate number, a name, a certificate face, a field-collected face, a conversion value of the certificate number, a conversion value of the name, a conversion value of the certificate face, and a conversion value of the field-collected face, the information input by the user may be received by the input device to obtain the desired personal identification information. In the case where the personal identification information is certificate information in a certificate image, the image acquisition device may be used to capture the certificate image of the person to be authenticated to obtain the desired personal identification information. Of course, personal information such as the certificate number, the name, and the certificate face may also be extracted from the certificate image, and the conversion value of such personal information may be obtained by a predetermined algorithm; that is, the image acquisition device may also be used to obtain personal identification information such as the certificate number, the name, the certificate face, the conversion value of the certificate number, the conversion value of the name, and the conversion value of the certificate face. Of course, the identity authentication device may include both an input device and an image acquisition device, with the two information acquisition manners combined to acquire the personal identification information.
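Purely as an example of what a "predetermined algorithm" for computing a conversion value might look like, the sketch below applies a salted SHA-256 hash to a certificate number or name; the patent does not specify the algorithm, so the choice of hashing, the function name and the salt are assumptions.

    import hashlib

    # Illustrative sketch only: one possible conversion value of a certificate number,
    # name or other field, allowing comparison against stored authenticated values
    # without exposing the original information.
    def conversion_value(field: str, salt: str = "example-salt") -> str:
        return hashlib.sha256((salt + field).encode("utf-8")).hexdigest()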
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising: an input device for receiving personal identification information of a person to be authenticated; the transmission device is used for transmitting the personal identification information to the server and receiving the authentication information which is obtained by the server and is about whether the identity of the person to be authenticated is legal or not from the server in the following way: and judging whether the personal identification information is authenticated information to obtain an information authentication result, carrying out living body detection on the person to be authenticated to obtain a living body detection result, and determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result.
Exemplarily, the identity authentication apparatus further comprises: the image acquisition device is used for acquiring a face image of a person to be authenticated; the transmission device is further used for sending the face image to a server, wherein the server performs living body detection by using the face image to obtain a living body detection result.
Illustratively, the identity authentication apparatus may further include: the light source is used for emitting detection light to the face of a person to be authenticated; the image acquisition device is further used for acquiring a plurality of illumination images of the face of the person to be authenticated under the irradiation of the detection light as the face images; the transmission device is further used for sending one or more illumination images to a server; wherein, the server performs the living body detection by the following modes: and determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in one or more illumination images so as to obtain an illumination living body verification result, and determining whether the person to be authenticated passes through living body verification or not based on at least the illumination living body verification result so as to obtain a living body detection result.
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising: the image acquisition device is used for acquiring personal identification information of a person to be authenticated; the transmission device is used for transmitting the personal identification information to the server and receiving the authentication information which is obtained by the server and is about whether the identity of the person to be authenticated is legal or not from the server in the following way: and judging whether the personal identification information is authenticated information to obtain an information authentication result, carrying out living body detection on the person to be authenticated to obtain a living body detection result, and determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result.
Exemplarily, the image acquisition device is further used for acquiring a face image of the person to be authenticated; the processing device is further used for carrying out living body detection by utilizing the face image so as to obtain a living body detection result.
Illustratively, the identity authentication apparatus may further include: the light source is used for emitting detection light to the face of a person to be authenticated; the image acquisition device is further used for acquiring one or more illumination images of the face of the person to be authenticated under the irradiation of the detection light as the face image; the transmission device is further used for sending one or more illumination images to a server; wherein, the server performs the living body detection by the following modes: and determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in one or more illumination images so as to obtain an illumination living body verification result, and determining whether the person to be authenticated passes through living body verification or not based on at least the illumination living body verification result so as to obtain a living body detection result.
Illustratively, the identity authentication method (200, 300 or 400) may be implemented on separate devices, e.g., on a client and a server. In this case, the client may comprise an input device and/or an image acquisition device, together with a transmission device. Optionally, the client may upload the acquired personal identification information and the acquired face images (including the illumination images, the action images, the real-time images, and the like) to the server, with the server performing the identity authentication. After receiving the authentication information sent by the server, the client performs the corresponding operation for authentication success or failure, for example, outputting information indicating that the identity authentication has passed or failed, allowing or denying the user to perform a subsequent business operation, and the like.
For example, the transmission device of the client may transmit the personal identification information and the face image to the server via a network or other technology, and receive the authentication information from the server via the network or other technology. For example, the network may be the internet, a wireless local area network, a mobile communication network, etc., and the other technologies may include, for example, bluetooth communication, infrared communication, etc. For example, the server may be a general-purpose server or a dedicated server, may be a virtual server or a cloud server, and the like. The transmission device of the client (or the server) may include a modem, a network adapter, a bluetooth transceiver unit, an infrared transceiver unit, etc., and may also encode and decode transmitted or received information, etc.
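For illustration only, a client-side sketch of this transmission over HTTP is given below; the server address, field names and response format are assumptions, and any of the transports mentioned above could be substituted.

    import requests

    # Illustrative sketch only: upload the personal identification information and the
    # illumination images to the server and receive the authentication information.
    SERVER_URL = "https://auth.example.com/identity-check"  # hypothetical endpoint

    def request_authentication(id_number, name, illumination_image_paths):
        data = {"id_number": id_number, "name": name}
        files = [("illumination_images", open(path, "rb")) for path in illumination_image_paths]
        try:
            response = requests.post(SERVER_URL, data=data, files=files, timeout=30)
            response.raise_for_status()
            # Assumed response format, e.g. {"legal": true}
            return bool(response.json().get("legal", False))
        finally:
            for _, handle in files:
                handle.close()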
Because most of the identity authentication is completed on the server, the computing resources of the client's processing device are saved, so that the performance requirements on the client and the manufacturing cost of the identity authentication device can be reduced, and the user experience can be improved.
According to another aspect of the present invention, an identity authentication apparatus is provided, which includes a transmission device and a processing device, wherein the transmission device is configured to receive personal identification information of a person to be authenticated from a client, and send authentication information about whether the identity of the person to be authenticated is legal or not to the client; the processing device is used for judging whether the personal identification information is authenticated information or not to obtain an information authentication result, carrying out living body detection on the person to be authenticated to obtain a living body detection result, and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result to obtain the authentication information.
Illustratively, the transmission device is further used for receiving a face image of a person to be authenticated; the processing device is further used for carrying out living body detection by utilizing the face image so as to obtain a living body detection result.
The transmission device is further used for receiving one or more illumination images of the person to be authenticated from the client, wherein the one or more illumination images are acquired aiming at the face of the person to be authenticated under the irradiation of the detection light; the processing device is further used for determining whether the face of the person to be authenticated belongs to a living body or not based on the light reflection characteristics of the face of the person to be authenticated in one or more illumination images so as to obtain an illumination living body verification result, and determining whether the person to be authenticated passes through living body verification or not based on at least the illumination living body verification result so as to obtain a living body detection result.
As described above, the identity authentication method (200, 300 or 400) may be implemented on separate devices, for example on a client and a server. The present embodiment describes an implementation scheme in which at least a part of the identity authentication method is implemented on a server.
According to the identity authentication method and device provided by the embodiment of the invention, the identity of the person to be authenticated is determined to be legal or not by combining authenticated information judgment and living body detection, so that compared with a conventional mode of performing identity authentication simply based on a password or a certificate, the identity authentication method and device provided by the embodiment of the invention have the advantages that the authentication result is more accurate, the security of user authentication can be improved, and the rights and interests of users can be effectively guaranteed. The method and the device can be well applied to various fields related to identity authentication, such as fields of electronic commerce, mobile payment or banking business and the like.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in an identity authentication device according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (41)

1. An identity authentication method comprising:
acquiring personal identification information of a person to be authenticated;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
wherein the performing the living body detection on the person to be authenticated to obtain a living body detection result comprises:
step S110: acquiring a plurality of illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light, wherein the detection light is emitted by a display screen, the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly or preset; the detection light is obtained by dynamically changing the color and/or position of light emitted to the face of the person to be authenticated; the detection light is obtained by: dynamically changing a mode of light emitted by the display screen by changing content displayed on the display screen to emit the detection light to a face of the person to be authenticated;
step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on light reflection characteristics of the face of the person to be authenticated in the plurality of illumination images so as to obtain an illumination living body verification result, wherein the light reflection characteristics comprise: uniformity and three-dimensional characteristics of the face; and
step S130: determining whether the person to be authenticated passes the living body verification based at least on the illumination living body verification result to obtain the living body detection result.
2. The identity authentication method of claim 1, wherein the personal identification information is one or more of a certificate number, a name, a certificate face, a live capture face, a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live capture face.
3. The identity authentication method of claim 1, wherein before the obtaining of the personal identification information of the person to be authenticated, the identity authentication method further comprises:
outputting indication information for indicating the person to be authenticated to provide the person information of the preset type;
wherein the personal identification information is personnel information provided by the personnel to be authenticated or obtained based on the personnel information.
4. The identity authentication method of claim 1, wherein the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live capture face,
the acquiring of the personal identification information of the person to be authenticated comprises:
acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a live capture face; and
transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
5. The identity authentication method of claim 1,
the acquiring of the personal identification information of the person to be authenticated comprises:
acquiring a certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result;
the performing the living body detection on the person to be authenticated to obtain a living body detection result comprises:
acquiring a face image of the person to be authenticated; and
performing living body detection by using the face image to obtain a living body detection result.
6. The identity authentication method of claim 5, wherein before the determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, the identity authentication method further comprises:
performing an additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result;
the determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result comprises:
and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
7. The identity authentication method of claim 6, wherein the additional judgment operation comprises a certificate authenticity judgment operation and/or a face consistency judgment operation, and the additional judgment result comprises a certificate authenticity judgment result and/or a face consistency judgment result,
the certificate authenticity judging operation comprises the following steps: judging whether the certificate in the certificate image is a real certificate or not to obtain a certificate authenticity judgment result;
the human face consistency judging operation comprises the following steps: acquiring the certificate face of the person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judgment result.
8. The identity authentication method of claim 7, wherein the acquiring of the certificate face of the person to be authenticated according to the certificate image comprises:
detecting a face from the certificate image to obtain the certificate face of the person to be authenticated.
9. The identity authentication method of claim 7, wherein the acquiring of the certificate face of the person to be authenticated according to the certificate image comprises:
performing character recognition on the certificate image to obtain character information in the certificate image;
searching matched certificate information from an authenticated certificate information database based on the character information in the certificate image; and
determining the certificate face in the matched certificate information as the certificate face of the person to be authenticated.
10. The identity authentication method of claim 7, wherein the judging whether the certificate in the certificate image is a real certificate or not to obtain the certificate authenticity judging result comprises:
extracting image features of the certificate image; and
inputting the image characteristics into a trained certificate classifier to obtain a certificate authenticity judgment result;
and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
11. The identity authentication method of claim 7, wherein the judging whether the certificate in the certificate image is a real certificate or not to obtain the certificate authenticity judging result comprises:
identifying an image block containing certificate identification information from the certificate image; and
identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result;
and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is the real certificate.
12. The identity authentication method of claim 6, wherein in the determining whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result, and the additional judgment result, each of the certificate authentication result, the living body detection result, and the additional judgment result has a respective weight coefficient.
13. The identity authentication method of claim 5, wherein the acquiring the document image of the person to be authenticated comprises:
acquiring a pre-shot image which is acquired in real time aiming at the certificate of the person to be authenticated under the current shooting condition;
evaluating the image attribute of the pre-shot image in real time;
when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold value, generating pre-shot prompt information according to the image attribute of the pre-shot image, wherein the pre-shot prompt information is used for prompting the person to be authenticated to adjust the shooting condition of the certificate of the person to be authenticated; and
when the evaluation value of the image attribute of the pre-shot image is equal to or larger than the preset evaluation value threshold value, saving the pre-shot image as the certificate image.
14. The identity authentication method of claim 5, wherein the determining whether the personal identification information is authenticated information to obtain an information authentication result comprises:
performing character recognition on the certificate image to obtain character information in the certificate image; and
searching in an authenticated certificate information database based on the character information in the certificate image to obtain the certificate authentication result;
and the certificate authentication result is the confidence level that the certificate information in the certificate image is authenticated certificate information.
15. The identity authentication method of claim 9 or 14, wherein the performing character recognition on the certificate image to obtain the character information in the certificate image comprises:
positioning characters in the certificate image to obtain an image block containing the characters; and
identifying the characters in the image block containing the characters to obtain the character information in the certificate image.
16. The identity authentication method of claim 15, wherein before the recognizing the characters in the image block containing characters, the identity authentication method further comprises:
correcting the image block containing the characters into a horizontal state.
17. The identity authentication method of claim 15, wherein after the identifying of the characters in the image block containing the characters to obtain the character information in the certificate image, the performing of character recognition on the certificate image to obtain the character information in the certificate image further comprises:
outputting the text information in the certificate image for a user to check;
receiving character correction information input by the user;
comparing the characters to be corrected indicated by the character correction information with the corresponding characters in the character information in the certificate image; and
if the difference between the characters to be corrected indicated by the character correction information and the corresponding characters in the character information in the certificate image is smaller than a preset difference threshold value, updating the character information in the certificate image by using the character correction information.
18. The identity authentication method of claim 1, wherein the mode of the detection light changes between every two consecutive moments in time during the irradiation of the face of the person to be authenticated.
19. The identity authentication method of claim 1,
the performing the living body detection on the person to be authenticated to obtain a living body detection result comprises:
step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to execute a corresponding action;
step S150: acquiring a plurality of action images acquired aiming at the face of the person to be authenticated;
step S160: detecting an action performed by the person to be authenticated based on the plurality of action images; and
step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction so as to obtain an action living body verification result;
the step S130 includes:
determining whether the person to be authenticated passes the living body verification based on the illumination living body verification result and the action living body verification result, to obtain the living body detection result.
20. The identity authentication method of claim 19, wherein the step S170 comprises:
determining that the face of the person to be authenticated belongs to a living body if an action performed by the person to be authenticated that matches the action indicated by the action instruction is detected based on the plurality of action images acquired within a period not longer than a first preset time; and determining that the face of the person to be authenticated does not belong to a living body if no such action is detected based on the plurality of action images acquired within the first preset time.
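As an illustrative sketch of the time-limited check in claim 20 (not the patent's implementation), the loop below captures action images until the first preset time elapses and succeeds only if the instructed action is detected in time. `capture_frame` and `detect_action` are hypothetical callables standing in for the camera and the action detector.

```python
import time

def action_liveness_check(instructed_action, capture_frame, detect_action, first_preset_time=10.0):
    """Illustrative time-limited action verification.

    `capture_frame` returns one image from the camera; `detect_action` returns the
    action detected so far from the accumulated images (or None). Both are
    hypothetical; the time limit is a placeholder value.
    """
    images = []
    deadline = time.monotonic() + first_preset_time
    while time.monotonic() < deadline:
        images.append(capture_frame())
        if detect_action(images) == instructed_action:
            return True    # face is judged to belong to a living body
    return False           # timeout: face is judged not to belong to a living body
```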
21. The identity authentication method of claim 19,
the identity authentication method further comprises the following steps:
counting once each time the steps S140 to S170 are executed, to obtain a number of action verifications;
after the step S170, the identity authentication method further includes:
if the action living body verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting first error information and determining whether the number of action verifications reaches a first count threshold; if the number of action verifications reaches the first count threshold, turning to the step S130; and if the number of action verifications does not reach the first count threshold, returning to the step S140, or returning to the step S110 in the case that the step S110 is executed before the step S140, wherein the first error information is used for prompting that the living body verification of the person to be authenticated fails.
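The retry control flow of claim 21 (and, analogously, claim 31) can be sketched as a simple bounded loop. This is an illustration only; `verify_once`, the counter name, and the count threshold are assumptions of the sketch.

```python
def run_action_verification_with_retries(verify_once, first_count_threshold=3):
    """Illustrative retry loop around the action liveness verification.

    `verify_once` is a hypothetical callable performing steps S140 to S170 and
    returning True when the face is judged to belong to a living body.
    """
    action_verification_count = 0
    while True:
        action_verification_count += 1              # count once per S140-S170 pass
        if verify_once():
            return True                             # action liveness passed
        print("Living body verification failed")    # first error information
        if action_verification_count >= first_count_threshold:
            return False                            # give up and proceed to step S130
        # otherwise return to step S140 (or S110) and try again
```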
22. The identity authentication method of claim 1, wherein, prior to the step S110, the identity authentication method further comprises:
step S108: judging whether the image acquisition condition of the person to be authenticated meets a preset requirement, and if the image acquisition condition meets the preset requirement, turning to the step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in an image acquisition area of an image acquisition device and/or the relative angle between the person to be authenticated and the image acquisition device.
23. The identity authentication method of claim 22, wherein prior to or simultaneously with the step S108, the identity authentication method further comprises:
step S106: outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device directly and to move close to the image acquisition device.
24. The identity authentication method of claim 23, wherein the step S106 comprises: outputting the first prompt information in one or more of a voice form, an image form and a text form.
25. The identity authentication method of claim 22, wherein the step S108 comprises:
acquiring a real-time image acquired aiming at the face of the person to be authenticated;
outputting a preset area for calibrating the image acquisition condition and a face area in the real-time image in real time for display; and
judging whether the image acquisition condition meets the preset requirement according to the face area: if the face area is located in the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, determining that the image acquisition condition meets the preset requirement; and if the face area is not located in the preset area or the proportion of the face area in the real-time image is not greater than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
26. The identity authentication method of claim 25, wherein the identity authentication method further comprises:
if the proportion of the face area in the real-time image is not greater than the first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to move closer to the image acquisition device.
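By way of illustration (not part of the claims), the containment-and-ratio test of claims 25 and 26 could be sketched as below. Boxes are assumed to be (x, y, width, height) tuples in image coordinates, and the ratio threshold and prompt strings are hypothetical.

```python
def check_acquisition_condition(face_box, preset_box, frame_size, first_preset_ratio=0.15):
    """Illustrative check of the image acquisition condition.

    Returns (condition_met, prompt). `frame_size` is (frame_width, frame_height);
    the ratio threshold is a placeholder, not a value from the patent.
    """
    fx, fy, fw, fh = face_box
    px, py, pw, ph = preset_box
    frame_w, frame_h = frame_size

    # Is the face area fully contained in the preset area?
    inside = px <= fx and py <= fy and fx + fw <= px + pw and fy + fh <= py + ph
    # Proportion of the face area in the real-time image.
    face_ratio = (fw * fh) / float(frame_w * frame_h)

    if inside and face_ratio > first_preset_ratio:
        return True, None
    if face_ratio <= first_preset_ratio:
        return False, "Please move closer to the camera"       # first acquisition prompt
    return False, "Please move your face into the marked area"
```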
27. The identity authentication method of claim 22, wherein the step S108 comprises:
acquiring a real-time image acquired aiming at the face of the person to be authenticated;
outputting a preset area for calibrating the image acquisition condition and a face area in the real-time image in real time for display; and
judging whether the image acquisition condition meets the preset requirement according to the face area: if the face area is located in the preset area and the proportion of the face area in the preset area is greater than a second preset proportion, determining that the image acquisition condition meets the preset requirement; and if the face area is not located in the preset area or the proportion of the face area in the preset area is not greater than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
28. The identity authentication method of claim 27, wherein the identity authentication method further comprises:
if the proportion of the face area in the preset area is not greater than the second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to move closer to the image acquisition device.
29. An identity authentication method as claimed in claim 25 or 27, wherein the identity authentication method further comprises:
judging the relative position relation between the face area and the preset area in real time; and
outputting third acquisition prompt information in real time based on the relative position relationship between the face area and the preset area, to prompt the person to be authenticated to change the relative position relationship between the person to be authenticated and the image acquisition device so that the face area approaches the preset area.
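An illustrative sketch of such a directional prompt (claim 29) compares the centres of the face area and the preset area; whether "left" means the person's left depends on whether the preview is mirrored, so the direction mapping here is an assumption, as are the function name and margins.

```python
def position_prompt(face_box, preset_box):
    """Illustrative directional prompt from the relative position of two boxes.

    Boxes are (x, y, width, height) in image coordinates; the returned hint
    assumes an unmirrored preview.
    """
    fx, fy, fw, fh = face_box
    px, py, pw, ph = preset_box
    dx = (fx + fw / 2) - (px + pw / 2)
    dy = (fy + fh / 2) - (py + ph / 2)
    hints = []
    if abs(dx) > pw * 0.1:
        hints.append("move left" if dx > 0 else "move right")
    if abs(dy) > ph * 0.1:
        hints.append("move up" if dy > 0 else "move down")
    return ", ".join(hints) or "hold still"
```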
30. The identity authentication method of claim 22, wherein the step S108 comprises:
acquiring attitude information of the image acquisition device; and
judging, according to the posture information, whether the image acquisition device is placed vertically; if so, determining that the image acquisition condition meets the preset requirement, and otherwise determining that the image acquisition condition does not meet the preset requirement.
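For illustration only, a vertical-placement check (claim 30) can be derived from accelerometer readings by measuring how far gravity deviates from the device's y axis. The axis convention, tilt tolerance, and function name are assumptions of this sketch, not specified by the patent.

```python
import math

def is_held_vertically(accel_x, accel_y, accel_z, max_tilt_deg=15.0):
    """Illustrative vertical-placement check from raw accelerometer readings.

    Assumes gravity is reported along the device axes and that an upright device
    has gravity mostly along its y axis; the tolerance is a placeholder value.
    """
    g = math.sqrt(accel_x ** 2 + accel_y ** 2 + accel_z ** 2)
    if g == 0:
        return False
    # Angle between the device's y axis and the gravity vector.
    tilt = math.degrees(math.acos(min(1.0, abs(accel_y) / g)))
    return tilt <= max_tilt_deg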
31. The identity authentication method of claim 22,
the identity authentication method further comprises the following steps:
counting once each time the steps S110 to S120 are executed, to obtain a number of illumination verifications;
after the step S120, the identity authentication method further includes:
if the illumination living body verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and determining whether the number of illumination verifications reaches a second count threshold; if so, turning to the step S130, and if not, returning to the step S108 or returning to the step S110, wherein the second error information is used for prompting that the living body verification of the person to be authenticated fails.
32. The identity authentication method of claim 1, wherein before the step S110 or in the process of performing the steps S110 and S120, the identity authentication method further comprises:
outputting second prompt information, wherein the second prompt information is used for prompting the person to be authenticated to keep still for a second preset time.
33. The identity authentication method of claim 32, wherein the second prompt information is countdown information corresponding to the second preset time.
34. An identity authentication device comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are operable to perform the steps of:
acquiring personal identification information of a person to be authenticated;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
wherein the step of performing the living body detection on the person to be authenticated to obtain the living body detection result, the step being performed by the computer program instructions when executed by the processor, comprises:
step S110: acquiring a plurality of illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light, wherein the detection light is emitted by a display screen, the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly or according to a preset pattern; the detection light is obtained by dynamically changing the color and/or position of light emitted to the face of the person to be authenticated; the detection light is obtained by: dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen, so as to emit the detection light to the face of the person to be authenticated;
Step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on light reflection characteristics of the face of the person to be authenticated in the plurality of illumination images so as to obtain an illumination living body verification result, wherein the light reflection characteristics comprise: uniformity and three-dimensional characteristics of the face; and
step S130: determining whether the person to be authenticated passes the living body verification based at least on the illumination living body verification result, to obtain the living body detection result.
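Purely as an illustrative aside (and deliberately simplified: it ignores the uniformity and three-dimensional characteristics that the claims name and only uses the idea that a live face reflects the screen light), steps S110 to S130 could be sketched as checking whether the brightness of the captured face region tracks the colours shown on the display. The capture of face crops, the colour palette, and the correlation threshold are all assumptions of this sketch.

```python
import numpy as np

def illumination_liveness(face_crops, colors, min_correlation=0.5):
    """Very simplified sketch of the screen-light liveness idea.

    `face_crops` is a list of HxWx3 RGB face images captured while the colours in
    `colors` (a list of (r, g, b) tuples, one per frame) were displayed. A flat
    photo or replayed video tends not to show the expected per-frame reflectance
    variation; the threshold is a hypothetical value.
    """
    emitted = np.array(colors, dtype=float)                                    # N x 3
    observed = np.array([c.reshape(-1, 3).mean(axis=0) for c in face_crops])   # N x 3
    correlations = []
    for channel in range(3):
        e, o = emitted[:, channel], observed[:, channel]
        if e.std() == 0 or o.std() == 0:
            continue    # a constant channel carries no usable signal
        correlations.append(np.corrcoef(e, o)[0, 1])
    return bool(correlations) and float(np.mean(correlations)) >= min_correlation
```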
35. A storage medium having stored thereon program instructions which when executed are for performing the steps of:
acquiring personal identification information of a person to be authenticated;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
wherein the step of performing the living body detection on the person to be authenticated to obtain the living body detection result, which is executed when the program instruction is executed, includes:
step S110: acquiring a plurality of illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light, wherein the detection light is emitted by a display screen, the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly or according to a preset pattern; the detection light is obtained by dynamically changing the color and/or position of light emitted to the face of the person to be authenticated; the detection light is obtained by: dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen, so as to emit the detection light to the face of the person to be authenticated;
step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on light reflection characteristics of the face of the person to be authenticated in the plurality of illumination images so as to obtain an illumination living body verification result, wherein the light reflection characteristics comprise: uniformity and three-dimensional characteristics of the face; and
step S130: determining whether the person to be authenticated passes the living body verification based at least on the illumination living body verification result, to obtain the living body detection result.
36. An identity authentication device comprises an information acquisition device, a processor and a memory, wherein,
the information acquisition device is used for acquiring initial information of a person to be authenticated;
the memory having stored therein computer program instructions for execution by the processor for performing the steps of:
acquiring personal identification information of the person to be authenticated, wherein the personal identification information is the initial information or is acquired based on the initial information;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
wherein the information acquisition device comprises an image acquisition device, the identity authentication device further comprises a light source, wherein,
the light source is used for emitting detection light to the face of the person to be authenticated;
the image acquisition device is also used for acquiring one or more illumination images of the face of the person to be authenticated under the illumination of the detection light;
wherein the step of performing the living body detection on the person to be authenticated to obtain the living body detection result, the step being performed by the computer program instructions when executed by the processor, comprises:
step S110: acquiring a plurality of illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light, wherein the detection light is emitted by a display screen, the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly or according to a preset pattern; the detection light is obtained by dynamically changing the color and/or position of light emitted to the face of the person to be authenticated; the detection light is obtained by: dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen, so as to emit the detection light to the face of the person to be authenticated;
step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on light reflection characteristics of the face of the person to be authenticated in the plurality of illumination images so as to obtain an illumination living body verification result, wherein the light reflection characteristics comprise: uniformity and three-dimensional characteristics of the face; and
step S130: determining whether the person to be authenticated passes the living body verification based at least on the illumination living body verification result, to obtain the living body detection result.
37. An identity authentication device as claimed in claim 36, wherein the information capture device comprises an image capture device and/or an input device.
38. An identity authentication apparatus comprising:
the information acquisition device is used for acquiring initial information of a person to be authenticated; and
the transmission device is used for sending the initial information to a server and receiving authentication information which is obtained by the server and is about whether the identity of the person to be authenticated is legal or not from the server in the following way: acquiring personal identification information of the person to be authenticated, wherein the personal identification information is the initial information or is acquired based on the initial information; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on the person to be authenticated to obtain a living body detection result; determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
wherein the server performs the living body detection on the person to be authenticated to obtain a living body detection result by the following method:
step S110: acquiring a plurality of illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light, wherein the detection light is emitted by a display screen, the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly or according to a preset pattern; the detection light is obtained by dynamically changing the color and/or position of light emitted to the face of the person to be authenticated; the detection light is obtained by: dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen, so as to emit the detection light to the face of the person to be authenticated;
step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on light reflection characteristics of the face of the person to be authenticated in the plurality of illumination images so as to obtain an illumination living body verification result, wherein the light reflection characteristics comprise: uniformity and three-dimensional characteristics of the face; and
step S130: determining whether the person to be authenticated passes the living body verification based at least on the illumination living body verification result, to obtain the living body detection result.
39. An identity authentication device as claimed in claim 38, wherein the information capture device comprises an image capture device and/or an input device.
40. An identity authentication device comprising a transmission device, a processor and a memory, wherein,
the transmission device is used for receiving initial information of a person to be authenticated from a client and sending authentication information about whether the identity of the person to be authenticated is legal to the client;
the memory having stored therein computer program instructions for execution by the processor for performing the steps of:
acquiring personal identification information of the person to be authenticated, wherein the personal identification information is the initial information or is acquired based on the initial information;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
wherein the step of performing a live detection of the person to be authenticated to obtain a live detection result, the step being performed by the computer program instructions when executed by the processor, comprises:
step S110: acquiring a plurality of illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light, wherein the detection light is emitted by a display screen, the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly or according to a preset pattern; the detection light is obtained by dynamically changing the color and/or position of light emitted to the face of the person to be authenticated; the detection light is obtained by: dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen, so as to emit the detection light to the face of the person to be authenticated;
step S120: determining whether the face of the person to be authenticated belongs to a living body or not based on light reflection characteristics of the face of the person to be authenticated in the plurality of illumination images so as to obtain an illumination living body verification result, wherein the light reflection characteristics comprise: uniformity and three-dimensional characteristics of the face; and
step S130: determining whether the person to be authenticated passes the living body verification based at least on the illumination living body verification result, to obtain the living body detection result.
41. An identity authentication apparatus comprising:
the information acquisition module is used for acquiring personal identification information of a person to be authenticated;
the authenticated information judging module is used for judging whether the personal identification information is authenticated information or not so as to obtain an information authentication result;
the living body detection module is used for carrying out living body detection on the personnel to be authenticated so as to obtain a living body detection result; and
the identity determining module is used for determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
wherein the in-vivo detection module includes:
the illumination image acquisition sub-module is used for acquiring a plurality of illumination images acquired aiming at the face of the person to be authenticated under the irradiation of detection light, wherein the detection light is emitted by a display screen, the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly or according to a preset pattern; the detection light is obtained by dynamically changing the color and/or position of light emitted to the face of the person to be authenticated; the detection light is obtained by: dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen, so as to emit the detection light to the face of the person to be authenticated;
An illumination living body verification sub-module, configured to determine whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the plurality of illumination images, so as to obtain an illumination living body verification result, where the light reflection characteristics include: uniformity and three-dimensional characteristics of the face; and
and the living body verification passing determination sub-module is used for determining whether the person to be authenticated passes the living body verification at least based on the illumination living body verification result, to obtain the living body detection result.
CN201710218218.8A 2017-03-17 2017-04-05 Identity authentication method and device and storage medium Active CN108573203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011596232.XA CN112651348B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2017101622458 2017-03-17
CN2017101616851 2017-03-17
CN201710162245 2017-03-17
CN201710161685 2017-03-17

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202011596232.XA Division CN112651348B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium

Publications (2)

Publication Number Publication Date
CN108573203A CN108573203A (en) 2018-09-25
CN108573203B true CN108573203B (en) 2021-01-26

Family

ID=63575988

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201710218218.8A Active CN108573203B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium
CN201710218512.9A Active CN108629260B (en) 2017-03-17 2017-04-05 Living body verification method and apparatus, and storage medium
CN202011596232.XA Active CN112651348B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201710218512.9A Active CN108629260B (en) 2017-03-17 2017-04-05 Living body verification method and apparatus, and storage medium
CN202011596232.XA Active CN112651348B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium

Country Status (1)

Country Link
CN (3) CN108573203B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046696A (en) * 2018-10-12 2020-04-21 宏碁股份有限公司 Living body identification method and electronic device
US10885363B2 (en) * 2018-10-25 2021-01-05 Advanced New Technologies Co., Ltd. Spoof detection using structured light illumination
CN109448193A (en) * 2018-11-16 2019-03-08 广东电网有限责任公司 Identity information recognition methods and device
CN109766849B (en) * 2019-01-15 2023-06-20 深圳市凯广荣科技发展有限公司 Living body detection method, detection device and self-service terminal equipment
CN109618100B (en) * 2019-01-15 2020-11-27 北京旷视科技有限公司 Method, device and system for judging field shooting image
CN109993124B (en) * 2019-04-03 2023-07-14 深圳华付技术股份有限公司 Living body detection method and device based on video reflection and computer equipment
CN110135326B (en) * 2019-05-10 2021-10-29 中汇信息技术(上海)有限公司 Identity authentication method, electronic equipment and computer readable storage medium
CN112906741A (en) * 2019-05-21 2021-06-04 北京嘀嘀无限科技发展有限公司 Image processing method, image processing device, electronic equipment and storage medium
SG10201906721SA (en) 2019-07-19 2021-02-25 Nec Corp Method and system for chrominance-based face liveness detection
CN110443237B (en) * 2019-08-06 2023-06-30 北京旷视科技有限公司 Certificate identification method, device, electronic equipment and computer readable storage medium
US10974537B2 (en) 2019-08-27 2021-04-13 Advanced New Technologies Co., Ltd. Method and apparatus for certificate identification
CN111898536A (en) * 2019-08-27 2020-11-06 创新先进技术有限公司 Certificate identification method and device
CN110909264B (en) * 2019-11-29 2023-08-29 北京三快在线科技有限公司 Information processing method, device, equipment and storage medium
CN111523438B (en) * 2020-04-20 2024-02-23 支付宝实验室(新加坡)有限公司 Living body identification method, terminal equipment and electronic equipment
CN111723655B (en) * 2020-05-12 2024-03-08 五八有限公司 Face image processing method, device, server, terminal, equipment and medium
CN111784498A (en) * 2020-06-22 2020-10-16 北京海益同展信息科技有限公司 Identity authentication method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488495A (en) * 2016-01-05 2016-04-13 上海川织金融信息服务有限公司 Identity identification method and system based on combination of face characteristics and device fingerprint
CN105868693A (en) * 2016-03-21 2016-08-17 深圳市商汤科技有限公司 Identity authentication method and system
CN105912986A (en) * 2016-04-01 2016-08-31 北京旷视科技有限公司 In vivo detection method, in vivo detection system and computer program product
CN105989263A (en) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 Method for authenticating identities, method for opening accounts, devices and systems
CN106407914A (en) * 2016-08-31 2017-02-15 北京旷视科技有限公司 Method for detecting human faces, device and remote teller machine system

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001069520A2 (en) * 2000-03-10 2001-09-20 Ethentica, Inc. Biometric sensor
CN1209073C (en) * 2001-12-18 2005-07-06 中国科学院自动化研究所 Identity discriminating method based on living body iris
JP2006043029A (en) * 2004-08-03 2006-02-16 Matsushita Electric Ind Co Ltd Living body distinguishing device, and authenticating device using the same, and living body distinguishing method
JP2006230603A (en) * 2005-02-23 2006-09-07 Canon Inc Imaging apparatus, biometric identification system, and image acquisition method
CN102262727A (en) * 2011-06-24 2011-11-30 常州锐驰电子科技有限公司 Method for monitoring face image quality at client acquisition terminal in real time
US9082235B2 (en) * 2011-07-12 2015-07-14 Microsoft Technology Licensing, Llc Using facial data for device authentication or subject identification
CN103310179A (en) * 2012-03-06 2013-09-18 上海骏聿数码科技有限公司 Method and system for optimal attitude detection based on face recognition technology
CN102622588B (en) * 2012-03-08 2013-10-09 无锡中科奥森科技有限公司 Dual-certification face anti-counterfeit method and device
CN103440479B (en) * 2013-08-29 2016-12-28 湖北微模式科技发展有限公司 A kind of method and system for detecting living body human face
CN104104867B (en) * 2014-04-28 2017-12-29 三星电子(中国)研发中心 The method and apparatus that control camera device is shot
CN104184956A (en) * 2014-08-29 2014-12-03 宇龙计算机通信科技(深圳)有限公司 Mobile communication terminal photographing method and system and mobile communication terminal
CN105468950B (en) * 2014-09-03 2020-06-30 阿里巴巴集团控股有限公司 Identity authentication method and device, terminal and server
WO2016127437A1 (en) * 2015-02-15 2016-08-18 北京旷视科技有限公司 Live body face verification method and system, and computer program product
CN104766063B (en) * 2015-04-08 2018-01-05 宁波大学 A kind of living body faces recognition methods
CN104881632A (en) * 2015-04-28 2015-09-02 南京邮电大学 Hyperspectral face recognition method
US9922238B2 (en) * 2015-06-25 2018-03-20 West Virginia University Apparatuses, systems, and methods for confirming identity
CN105518711B (en) * 2015-06-29 2019-11-29 北京旷视科技有限公司 Biopsy method, In vivo detection system and computer program product
CN105518714A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Vivo detection method and equipment, and computer program product
CN105117695B (en) * 2015-08-18 2017-11-24 北京旷视科技有限公司 In vivo detection equipment and biopsy method
CN105069438B (en) * 2015-08-19 2019-03-12 南昌欧菲生物识别技术有限公司 The manufacturing method of finger print detection device
CN105389553A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Living body detection method and apparatus
CN105512632B (en) * 2015-12-09 2019-04-05 北京旷视科技有限公司 Biopsy method and device
CN106384237A (en) * 2016-08-31 2017-02-08 北京志光伯元科技有限公司 Member authentication-management method, device and system based on face identification

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989263A (en) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 Method for authenticating identities, method for opening accounts, devices and systems
CN105488495A (en) * 2016-01-05 2016-04-13 上海川织金融信息服务有限公司 Identity identification method and system based on combination of face characteristics and device fingerprint
CN105868693A (en) * 2016-03-21 2016-08-17 深圳市商汤科技有限公司 Identity authentication method and system
CN105912986A (en) * 2016-04-01 2016-08-31 北京旷视科技有限公司 In vivo detection method, in vivo detection system and computer program product
CN106407914A (en) * 2016-08-31 2017-02-15 北京旷视科技有限公司 Method for detecting human faces, device and remote teller machine system

Also Published As

Publication number Publication date
CN112651348A (en) 2021-04-13
CN108629260A (en) 2018-10-09
CN108629260B (en) 2022-02-08
CN108573203A (en) 2018-09-25
CN112651348B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN108573203B (en) Identity authentication method and device and storage medium
CN106778525B (en) Identity authentication method and device
US11188734B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US10339402B2 (en) Method and apparatus for liveness detection
CN106407914B (en) Method and device for detecting human face and remote teller machine system
EP2680192B1 (en) Facial recognition
US8675925B2 (en) Spoof detection for biometric authentication
CN108573202A (en) Identity identifying method, device and system and terminal, server and storage medium
EP2680191A2 (en) Facial recognition
KR20190094352A (en) System and method for performing fingerprint based user authentication using a captured image using a mobile device
US11663853B2 (en) Iris authentication device, iris authentication method, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant