CN112651348B - Identity authentication method and device and storage medium - Google Patents

Identity authentication method and device and storage medium

Info

Publication number
CN112651348B
CN112651348B (application number CN202011596232.XA)
Authority
CN
China
Prior art keywords
authenticated
person
face
image
information
Prior art date
Legal status
Active
Application number
CN202011596232.XA
Other languages
Chinese (zh)
Other versions
CN112651348A (en)
Inventor
范浩强
陈可卿
Current Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd and Beijing Megvii Technology Co Ltd
Priority to CN202011596232.XA
Publication of application CN112651348A
Application granted
Publication of granted patent CN112651348B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention provide an identity authentication method and device and a storage medium. The method comprises the following steps: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information to obtain an information authentication result; performing living body detection on the person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result. The method and the device determine whether the identity of the person to be authenticated is legal by combining the authenticated-information judgment with the living body detection. Compared with the conventional approach of performing identity authentication based on a password or a certificate alone, the authentication result of the identity authentication method and device provided by the embodiments of the invention is therefore more accurate, the security of user authentication can be improved, and the rights and interests of users can be effectively protected.

Description

Identity authentication method and device and storage medium
The present application is a divisional application of Chinese invention patent application No. 201710218218.8, filed on April 5, 2017 and entitled "Identity authentication method and apparatus, and storage medium".
Technical Field
The present invention relates to the field of image processing, and more particularly, to an identity authentication method and apparatus, and a storage medium.
Background
Technological products have permeated modern social life: everyday matters such as clothing, food, housing and travel are all closely tied to science and technology, and technological products have gradually been applied to every aspect of social life, becoming an indispensable part of modern daily life. However, while people enjoy the benefits that these products bring, they also face the negative problems that come with them, such as information security.
At present, many fields involve information security problems; the problem is especially prominent in technical fields such as electronic commerce, mobile payment and bank account opening. In these fields, user interactive authentication (also referred to as identity authentication) is typically performed either by entering a password or by swiping a certificate. Both approaches have drawbacks: the former requires the user to remember a password and enter it each time, which is cumbersome, and once the password is stolen by a malicious party, the user may suffer privacy or property loss; the latter is vulnerable to forged or impersonated certificates, so its security is low. It is therefore necessary to provide a convenient and secure identity authentication method or system for application in technical fields such as electronic commerce, mobile payment and bank account opening.
Disclosure of Invention
The present invention has been made in view of the above-described problems. The invention provides an identity authentication method and device and a storage medium.
According to one aspect of the invention, an identity authentication method is provided. The method comprises the following steps: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information to obtain an information authentication result; performing living body detection on the person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result.
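As a rough illustration of how these four steps fit together, the following Python sketch chains them; the three callables are hypothetical stand-ins for the acquisition, authenticated-information lookup and living body detection modules described below, not functions defined in this disclosure.

```python
from typing import Callable


def authenticate(person: dict,
                 acquire_id_info: Callable[[dict], dict],
                 is_authenticated_info: Callable[[dict], bool],
                 run_liveness_detection: Callable[[dict], bool]) -> bool:
    """Toy fusion of the claimed four steps (hooks are hypothetical)."""
    id_info = acquire_id_info(person)                  # step 1: personal identification information
    info_ok = is_authenticated_info(id_info)           # step 2: authenticated-information judgment
    alive = run_liveness_detection(person)             # step 3: living body detection
    return info_ok and alive                           # step 4: legal only if both results are positive
```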
Illustratively, performing the living body detection on the person to be authenticated to obtain the living body detection result includes: step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under irradiation of detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics exhibited by the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; step S130: determining whether the person to be authenticated passes the liveness verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
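The disclosure does not pin down the reflection-feature algorithm used in step S120; the sketch below assumes, purely for illustration, that liveness can be approximated by checking whether the measured brightness of the face tracks the known intensity of the emitted detection light across the captured frames.

```python
import numpy as np


def illumination_liveness_score(face_frames, emitted_intensities, threshold=0.6):
    """Toy reflection check: a real 3D face should brighten and darken in step
    with the emitted detection light, while a replayed screen or printed photo
    tends not to. `face_frames` is a list of grayscale face crops (2-D arrays)
    captured while the detection light changed; `emitted_intensities` holds the
    known intensity of the light pattern at each capture instant. The 0.6
    threshold is an arbitrary illustrative value."""
    measured = np.array([frame.mean() for frame in face_frames], dtype=float)
    emitted = np.asarray(emitted_intensities, dtype=float)
    # Pearson correlation between what was emitted and what the face reflected.
    corr = np.corrcoef(measured, emitted)[0, 1]
    return bool(corr > threshold)
```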
Illustratively, the pattern of the detection light is changed at least once during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments in time in the process of illuminating the face of the person to be authenticated.
Illustratively, the pattern of the detection light is randomly changed or preset during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light includes at least one of a color of the detection light, a position of a light emitting region, an intensity of the detection light, an irradiation angle of the detection light, a wavelength of the detection light, and a frequency of the detection light.
Illustratively, the detection light is obtained by dynamically changing the color and/or position of light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of the light emitted by the display screen is dynamically changed by changing the content displayed on the display screen so as to emit the detection light to the face of the person to be authenticated.
Illustratively, the dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen includes: the pattern of light emitted by the display screen is dynamically changed by changing the color of the pixels and/or the position of the light emitting areas on the display screen.
Illustratively, the display screen is divided into a plurality of light emitting areas, and the mode of light emitted by the display screen is dynamically changed by changing the color of pixels and/or the position of the light emitting areas on the display screen, including: for each light-emitting area on the display screen, randomly selecting one RGB value in a preset RGB value range at a time as the color value of the light-emitting area for display.
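A minimal sketch of this per-region randomisation, assuming the preset RGB value range is simply an interval per channel; the region count and range values below are illustrative, not values given in the disclosure.

```python
import random


def next_light_pattern(num_regions, rgb_range=((0, 255), (0, 255), (0, 255))):
    """For each light-emitting region of the display, pick one RGB value at
    random from the allowed per-channel ranges; returning a fresh pattern on
    every call makes the emitted detection light hard to predict or replay."""
    (r_lo, r_hi), (g_lo, g_hi), (b_lo, b_hi) = rgb_range
    return [(random.randint(r_lo, r_hi),
             random.randint(g_lo, g_hi),
             random.randint(b_lo, b_hi))
            for _ in range(num_regions)]


# Example: a screen split into four light-emitting regions.
pattern = next_light_pattern(4)
```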
Illustratively, performing the living body detection on the person to be authenticated to obtain the living body detection result includes: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to perform a corresponding action; step S150: acquiring a plurality of action images acquired for the face of the person to be authenticated; step S160: detecting the action performed by the person to be authenticated based on the plurality of action images; step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action liveness verification result; and step S130 includes: determining whether the person to be authenticated passes the liveness verification based on the illumination liveness verification result and the action liveness verification result, so as to obtain the living body detection result.
Illustratively, step S170 includes: if an action performed by the person to be authenticated that is consistent with the action indicated by the action instruction is detected based on the plurality of action images acquired within a period not exceeding a first preset time, determining that the face of the person to be authenticated belongs to a living body; and if no action performed by the person to be authenticated that is consistent with the action indicated by the action instruction is detected based on the plurality of action images acquired within the first preset time, determining that the face of the person to be authenticated does not belong to a living body.
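A toy sketch of this timed check, in which `capture_frame` and `detect_action` are hypothetical hooks for the camera and the action detector, and the ten-second window stands in for the first preset time (no concrete value is given in the disclosure).

```python
import time


def action_liveness_check(capture_frame, detect_action, expected_action,
                          first_preset_time=10.0):
    """Poll frames for at most `first_preset_time` seconds; the face is judged
    to belong to a living body only if the instructed action is observed within
    that window."""
    deadline = time.monotonic() + first_preset_time
    frames = []
    while time.monotonic() < deadline:
        frames.append(capture_frame())
        if detect_action(frames) == expected_action:
            return True          # instructed action observed in time
    return False                 # timed out: not judged to be a living body
```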
Illustratively, the identity authentication method further comprises: counting once each time steps S140 to S170 are executed, so as to obtain an action verification count; after step S170, the identity authentication method further includes: if the action liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting first error information and judging whether the action verification count reaches a first count threshold; if the action verification count reaches the first count threshold, proceeding to step S130; and if the action verification count does not reach the first count threshold, returning to step S140, or returning to step S110 in the case that step S110 is executed before step S140; wherein the first error information is used for prompting that the liveness verification of the person to be authenticated has failed.
Illustratively, prior to step S110, the identity authentication method further includes: step S108: judging whether the image acquisition condition of the person to be authenticated meets the preset requirement, and if the image acquisition condition meets the preset requirement, turning to step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle of the person to be authenticated and the image acquisition device.
Illustratively, prior to step S108 or concurrently with step S108, the identity authentication method further comprises: step S106: and outputting first prompt information, wherein the first prompt information is used for prompting a person to be authenticated to face the image acquisition device and approach the image acquisition device.
Illustratively, step S106 includes: and outputting the first prompt information in one or more of a voice form, an image form and a text form.
Illustratively, step S108 includes: acquiring a real-time image acquired by aiming at the face of a person to be authenticated; outputting a preset area for calibrating the image acquisition conditions in real time and displaying a face area in a real-time image; and judging whether the image acquisition condition meets the preset requirement according to the human face region, if the human face region is positioned in the preset region and the proportion of the human face region in the real-time image is larger than the first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the human face region is not positioned in the preset region or the proportion of the human face region in the real-time image is not larger than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
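As an illustration of this judgment, the sketch below tests whether a detected face box lies inside the calibrated preset region and occupies more than the first preset proportion of the real-time image; the box format and the 0.1 default ratio are assumptions made for the example.

```python
def acquisition_condition_met(face_box, preset_box, image_size,
                              first_preset_ratio=0.1):
    """Boxes are (x, y, w, h) tuples; the image acquisition condition is met
    when the face box lies inside the preset box and its area exceeds
    `first_preset_ratio` of the whole real-time image."""
    fx, fy, fw, fh = face_box
    px, py, pw, ph = preset_box
    inside = (fx >= px and fy >= py and
              fx + fw <= px + pw and fy + fh <= py + ph)
    img_w, img_h = image_size
    ratio = (fw * fh) / float(img_w * img_h)
    return inside and ratio > first_preset_ratio
```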
Illustratively, the identity authentication method further comprises: if the proportion of the face area in the real-time image is not greater than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, step S108 includes: acquiring a real-time image acquired by aiming at the face of a person to be authenticated; outputting a preset area for calibrating the image acquisition conditions in real time and displaying a face area in a real-time image; and judging whether the image acquisition condition meets the preset requirement according to the human face region, if the human face region is positioned in the preset region and the proportion of the human face region in the preset region is larger than the second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the human face region is not positioned in the preset region or the proportion of the human face region in the preset region is not larger than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the identity authentication method further comprises: if the proportion of the face area in the preset area is not greater than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to be close to the image acquisition device.
Illustratively, the identity authentication method further comprises: judging the relative position relationship between the face area and the preset area in real time; and outputting third acquisition prompt information in real time based on the relative position relation between the face area and the preset area so as to prompt the change of the relative position relation between the personnel to be authenticated and the image acquisition device so as to enable the face area to be close to the preset area.
Illustratively, step S108 includes: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, otherwise, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the identity authentication method further comprises: counting once each time steps S110 to S120 are executed, so as to obtain an illumination verification count; after step S120, the identity authentication method further includes: if the illumination liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the illumination verification count reaches a second count threshold; if the illumination verification count reaches the second count threshold, proceeding to step S130; and if the illumination verification count does not reach the second count threshold, returning to step S108 or returning to step S110; wherein the second error information is used for prompting that the liveness verification of the person to be authenticated has failed.
Illustratively, before step S110 or during execution of steps S110 and S120, the identity authentication method further includes: outputting second prompt information, wherein the second prompt information is used for prompting the personnel to be authenticated to remain motionless within a second preset time.
The second prompt information is countdown information corresponding to a second preset time.
Illustratively, in the process of illuminating the face of the person to be authenticated, the pattern of the detection light is not changed, and the pattern of the detection light is a specific pattern selected after optimization according to experimental data; in this case step S120 includes: determining, based on the light reflection characteristics exhibited by the face of the person to be authenticated in the one or more illumination images and through a specific algorithm corresponding to the detection light pattern, whether the face of the person to be authenticated belongs to a living body, so as to obtain the illumination liveness verification result.
Illustratively, the personal identification information is one or more of a credential number, a name, a credential face, a live acquisition face, a transformed value of the credential number, a transformed value of the name, a transformed value of the credential face, and a transformed value of the live acquisition face.
Illustratively, before acquiring the personal identification information of the person to be authenticated, the identity authentication method further includes: outputting indication information for indicating personnel to be authenticated to provide personnel information of a predetermined type; wherein the personal identification information is personnel information provided by personnel to be authenticated or is obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformation value of a certificate number, a transformation value of a name, a transformation value of a certificate face and a transformation value of a field acquisition face, and the step of obtaining the personal identification information of the person to be authenticated includes: acquiring initial information of a person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a field acquisition face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as personal identification information.
Illustratively, acquiring personal identification information of the person to be authenticated includes: acquiring a certificate image of a person to be authenticated, wherein the information authentication result is a certificate authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result includes: acquiring a face image of a person to be authenticated; and performing living body detection by using the face image to obtain a living body detection result.
Illustratively, before determining whether the identity of the person to be authenticated is legal based at least on the information authentication result and the living body detection result, the identity authentication method further includes: performing additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result comprises the following steps: and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judging operation includes a document authenticity judging operation and/or a face consistency judging operation, and the additional judging result includes a document authenticity judging result and/or a face consistency judging result, the document authenticity judging operation includes: judging whether the certificate in the certificate image is a true certificate or not so as to obtain a certificate true or false judgment result; the face consistency judging operation comprises the following steps: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judging result.
Illustratively, acquiring the credential face of the person to be authenticated from the credential image includes: and detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, acquiring the credential face of the person to be authenticated from the credential image includes: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from an authenticated certificate information database based on text information in the certificate image; and determining the certificate face in the searched and matched certificate information as the certificate face of the person to be authenticated.
Illustratively, determining whether the document in the document image is a genuine document to obtain a document authenticity determination result includes: extracting image features of the certificate image; inputting the image characteristics into a trained certificate classifier to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
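A schematic of this feature-plus-classifier route; `extract_features` is a hypothetical feature extractor and `classifier` is any trained binary model exposing a scikit-learn style `predict_proba`, neither of which is specified by the disclosure.

```python
import numpy as np


def document_authenticity_confidence(document_image, extract_features, classifier):
    """Return the confidence that the document in the image is genuine.
    Both hooks are stand-ins for the trained document classifier pipeline."""
    features = np.asarray(extract_features(document_image)).reshape(1, -1)
    # Column 1 is taken here to be the probability of the "genuine" class.
    return float(classifier.predict_proba(features)[0, 1])
```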
Illustratively, determining whether the document in the document image is a genuine document to obtain a document authenticity determination result includes: identifying an image block containing identification information of the certificate from the certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
Illustratively, in determining whether the identity of the person to be authenticated is legitimate based on the credential authentication result, the living detection result, and the additional determination result, each of the credential authentication result, the living detection result, and the additional determination result has a respective weight coefficient.
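The disclosure does not give concrete weight values; the sketch below simply shows one way such a weighted fusion could be written, with the weights and the decision threshold chosen arbitrarily for illustration.

```python
def identity_is_legal(credential_conf, liveness_conf, additional_conf,
                      weights=(0.4, 0.4, 0.2), decision_threshold=0.8):
    """Weighted fusion of the credential authentication confidence, the living
    body detection confidence and the additional judgment confidence."""
    score = (weights[0] * credential_conf +
             weights[1] * liveness_conf +
             weights[2] * additional_conf)
    return score >= decision_threshold
```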
Illustratively, acquiring a credential image of a person to be authenticated includes: acquiring a pre-shooting image which is acquired in real time aiming at the certificate of the person to be authenticated under the current shooting condition; evaluating image attributes of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold, generating pre-shot prompt information according to the image attribute of the pre-shot image, and prompting a person to be authenticated to adjust the shooting condition of the certificate; and when the evaluation value of the image attribute of the pre-shot image is equal to or greater than a preset evaluation value threshold, saving the pre-shot image as a document image.
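As an example of evaluating the image attributes of a pre-shot frame in real time, the sketch below scores sharpness and brightness; the particular attributes and thresholds are assumptions, since the disclosure only requires comparing an evaluation value against a preset threshold.

```python
import numpy as np


def evaluate_preshot(gray_image, blur_threshold=100.0, brightness_range=(60, 200)):
    """Crude image-attribute check for a pre-shot credential frame: penalise
    blur (low variance of a 4-neighbour Laplacian response) and poor exposure.
    Returns True when the frame may be saved as the credential image."""
    img = np.asarray(gray_image, dtype=float)
    # 4-neighbour Laplacian over interior pixels as a cheap sharpness proxy.
    lap = (img[1:-1, :-2] + img[1:-1, 2:] + img[:-2, 1:-1] + img[2:, 1:-1]
           - 4.0 * img[1:-1, 1:-1])
    sharp_ok = lap.var() > blur_threshold
    bright_ok = brightness_range[0] < img.mean() < brightness_range[1]
    return sharp_ok and bright_ok
```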
Illustratively, determining whether the personal identification information is authenticated information to obtain an information authentication result includes: performing character recognition on the certificate image to obtain character information in the certificate image; searching in an authenticated certificate information database based on the text information in the certificate image to obtain a certificate authentication result; the certificate authentication result is the confidence that the certificate information in the certificate image is authenticated certificate information.
Illustratively, text recognition of the document image to obtain text information in the document image includes: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
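A sketch of this two-stage text recognition; both hooks are hypothetical, and any text detector plus character recogniser pair could be substituted.

```python
def read_credential_text(document_image, locate_text_blocks, recognize_block):
    """Two-stage OCR as described: first locate the text-bearing image blocks,
    then recognise the characters inside each block. `locate_text_blocks`
    yields (field_name, image_block) pairs; `recognize_block` returns the text
    found in one block."""
    text_info = {}
    for field_name, block in locate_text_blocks(document_image):
        text_info[field_name] = recognize_block(block)
    return text_info   # e.g. {"name": ..., "credential_number": ...}
```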
Illustratively, before recognizing the text in the image block containing the text, the identity authentication method further includes: and correcting the image block containing the characters to be in a horizontal state.
Illustratively, performing text recognition on the certificate image to obtain the text information in the certificate image further includes, after the text in the image block containing the text has been recognized: outputting the text information in the certificate image for viewing by a user; receiving text correction information input by the user; comparing the text to be corrected indicated by the text correction information with the corresponding text in the text information in the certificate image; and if the difference between the text to be corrected indicated by the text correction information and the corresponding text in the text information in the certificate image is smaller than a preset difference threshold, updating the text information in the certificate image by using the text correction information.
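The disclosure leaves the difference measure open; the sketch below uses a plain Levenshtein edit distance and an arbitrary threshold of 2 to decide whether to accept the user's correction.

```python
def accept_user_correction(recognized, corrected, max_distance=2):
    """Accept the user's correction only when it differs from the recognised
    text by a small edit distance (threshold value is illustrative)."""
    # Classic dynamic-programming Levenshtein distance with a rolling row.
    m, n = len(recognized), len(corrected)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if recognized[i - 1] == corrected[j - 1] else 1
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + cost)
            prev = cur
    return dp[n] <= max_distance
```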
Illustratively, performing the living body detection of the person to be authenticated to obtain the living body detection result includes: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring face images of personnel to be authenticated acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction so as to obtain a living body detection result.
Illustratively, performing the living body detection of the person to be authenticated to obtain the living body detection result includes: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring face images of personnel to be authenticated acquired in real time; capturing a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed from the acquired face image; and inputting the skin region image before the living body action is executed by the person to be authenticated and the skin region image after the living body action is executed into a skin elasticity classifier to obtain a living body detection result.
Illustratively, the identity authentication method includes: acquiring a face image before a real person executes a living body action and a face image after the living body action, and acquiring a face image before a false person executes the living body action and a face image after the living body action; extracting a face image before the living body action of the real person and a skin area image after the living body action of the real person from the face image before the living body action of the real person and the face image after the living body action of the real person as positive sample images; extracting a face image before the false person executes the living body action and a skin area image after the false person executes the living body action from the face image before the false person executes the living body action and the face image after the false person executes the living body action as negative sample images; and training a classifier model using the positive sample image and the negative sample image to obtain a skin elasticity classifier.
Illustratively, capturing, from the acquired face image, a skin area image of the person to be authenticated before performing the living body motion and a skin area image of the person to be authenticated after performing the living body motion includes: selecting a face image before the living body action of the person to be authenticated and a face image after the living body action of the person to be authenticated from the collected face images; positioning a face in a face image before the living body action of the person to be authenticated is executed and a face in a face image after the living body action is executed by using a face detection model; positioning key points of a face in a face image before the living body action of a person to be authenticated is executed and a face image after the living body action is executed by using a face key point positioning model; and dividing the areas of the faces in the face image before the living body action of the person to be authenticated is executed and the face in the face image after the living body action is executed according to the face positions and the key point positions obtained by positioning, so as to obtain a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed.
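Putting the skin-elasticity branch together, the sketch below takes one frame before and one after the instructed action, crops a fixed-size skin patch from each using hypothetical face-detection, key-point and cropping hooks, and feeds the pair to a trained binary classifier; none of the hooks or the feature layout is prescribed by the disclosure.

```python
import numpy as np


def skin_elasticity_liveness(frames, action_index, detect_face, locate_keypoints,
                             crop_skin_patch, skin_classifier):
    """`frames[0]` is taken as the frame before the living body action and
    `frames[action_index]` as the frame after it. All five hooks are
    hypothetical stand-ins for the models described in this disclosure;
    `crop_skin_patch` is assumed to return patches of a fixed size."""
    before, after = frames[0], frames[action_index]
    patches = []
    for frame in (before, after):
        box = detect_face(frame)                 # face detection model
        points = locate_keypoints(frame, box)    # face key-point localization model
        patches.append(crop_skin_patch(frame, box, points))
    pair = np.concatenate([p.ravel() for p in patches]).reshape(1, -1)
    return bool(skin_classifier.predict(pair)[0])   # 1: living body
```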
Illustratively, the identity authentication method further comprises: obtaining a sample face image, wherein the positions of the faces and the positions of key points of the faces in the sample face image are marked; and training a neural network by using the sample face image to obtain a face detection model and a face key point positioning model.
Illustratively, performing the living body detection of the person to be authenticated to obtain the living body detection result includes: acquiring a face image acquired by a binocular camera aiming at the face of a person to be authenticated under the irradiation of structured light; and determining whether the face of the person to be authenticated belongs to a living body according to the face image so as to obtain a living body detection result.
According to another aspect of the present invention, there is provided an identity authentication device. The device comprises: the information acquisition module is used for acquiring personal identification information of a person to be authenticated; the authenticated information judging module is used for judging whether the personal identification information is authenticated information or not so as to obtain an information authentication result; the living body detection module is used for carrying out living body detection on personnel to be authenticated so as to obtain a living body detection result; and the identity determining module is used for determining whether the identity of the personnel to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the living body detection module includes: an illumination image acquisition sub-module, configured to acquire one or more illumination images acquired for the face of the person to be authenticated under irradiation of detection light; an illumination liveness verification sub-module, configured to determine whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics exhibited by the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; and a liveness verification pass determination sub-module, configured to determine whether the person to be authenticated passes the liveness verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
Illustratively, the pattern of the detection light is changed at least once during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments in time in the process of illuminating the face of the person to be authenticated.
Illustratively, the pattern of the detection light is randomly changed or preset during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light includes at least one of a color of the detection light, a position of a light emitting region, an intensity of the detection light, an irradiation angle of the detection light, a wavelength of the detection light, and a frequency of the detection light.
Illustratively, the detection light is obtained by dynamically changing the color and/or position of light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of the light emitted by the display screen is dynamically changed by changing the content displayed on the display screen so as to emit the detection light to the face of the person to be authenticated.
Illustratively, the dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen includes: the pattern of light emitted by the display screen is dynamically changed by changing the color of the pixels and/or the position of the light emitting areas on the display screen.
Illustratively, the display screen is divided into a plurality of light emitting areas, and the mode of light emitted by the display screen is dynamically changed by changing the color of pixels and/or the position of the light emitting areas on the display screen, including: for each light-emitting area on the display screen, randomly selecting one RGB value in a preset RGB value range at a time as the color value of the light-emitting area for display.
Illustratively, the living body detection module includes: an instruction output sub-module, configured to output an action instruction, wherein the action instruction is used for instructing the person to be authenticated to perform a corresponding action; an action image acquisition sub-module, configured to acquire a plurality of action images acquired for the face of the person to be authenticated; an action detection sub-module, configured to detect the action performed by the person to be authenticated based on the plurality of action images; and an action liveness verification sub-module, configured to determine whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action liveness verification result; and the liveness verification pass determination sub-module includes: a determination unit, configured to determine whether the person to be authenticated passes the liveness verification based on the illumination liveness verification result and the action liveness verification result, so as to obtain the living body detection result.
Illustratively, the action liveness experience verification sub-module includes: a living body determining unit configured to determine that a face of a person to be authenticated belongs to a living body if an action performed by the person to be authenticated that coincides with an action indicated by the action instruction is detected based on a plurality of action images acquired within a period of not more than a first preset time, and determine that the face of the person to be authenticated does not belong to a living body if an action performed by the person to be authenticated that coincides with an action indicated by the action instruction is not detected based on a plurality of action images acquired within the first preset time.
Illustratively, the identity authentication device further comprises: a first counting module, configured to count once each time the instruction output sub-module through the action liveness verification sub-module are run, so as to obtain an action verification count; and the identity authentication device further includes: a first verification error execution module, configured to, if the action liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, output first error information and judge whether the action verification count reaches a first count threshold; to start the liveness verification pass determination sub-module if the action verification count reaches the first count threshold; and, if the action verification count does not reach the first count threshold, to restart the instruction output sub-module, or to restart the illumination image acquisition sub-module in the case that the illumination image acquisition sub-module is run before the instruction output sub-module; wherein the first error information is used for prompting that the liveness verification of the person to be authenticated has failed.
Illustratively, the identity authentication device further comprises: the condition judging module is used for judging whether the image acquisition condition of the person to be authenticated meets the preset requirement, and if the image acquisition condition meets the preset requirement, the illumination image acquisition sub-module is started, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle of the person to be authenticated and the image acquisition device.
Illustratively, the identity authentication device further comprises: a first prompt information output module, configured to output first prompt information before or while the condition judging module judges whether the image acquisition condition of the person to be authenticated meets the preset requirement, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device and to approach the image acquisition device.
Illustratively, the first hint information output module includes: the first prompt information output sub-module is used for outputting the first prompt information in one or more of a voice form, an image form and a text form.
Illustratively, the condition judgment module includes: the first real-time image acquisition sub-module is used for acquiring a real-time image acquired by a face of a person to be authenticated; the first real-time output sub-module is used for outputting a preset area for calibrating the image acquisition conditions and a face area in a real-time image in real time for display; and the first judging submodule is used for judging whether the image acquisition condition meets the preset requirement according to the face area detected in the real-time image, if the face area is positioned in the preset area and the proportion of the face area in the real-time image is larger than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not positioned in the preset area or the proportion of the face area in the real-time image is not larger than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the identity authentication device further comprises: the first acquisition prompt output module is used for outputting first acquisition prompt information in real time to prompt a person to be authenticated to approach to the image acquisition device if the proportion of the face area in the real-time image is not greater than a first preset proportion.
Illustratively, the condition judgment module includes: the second real-time image acquisition sub-module is used for acquiring real-time images acquired by the face of the person to be authenticated; the second real-time output sub-module is used for outputting a preset area for calibrating the image acquisition conditions and a face area in the real-time image in real time for display; and the second judging submodule is used for judging whether the image acquisition condition meets the preset requirement according to the face region, if the face region is positioned in the preset region and the proportion of the face region in the preset region is larger than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face region is not positioned in the preset region or the proportion of the face region in the preset region is not larger than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the identity authentication device further comprises: a second acquisition prompt output module, configured to output second acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device if the proportion of the face region in the preset region is not greater than the second preset proportion.
Illustratively, the identity authentication device further comprises: a relative position judging module, configured to judge the relative position relationship between the face region and the preset region in real time; and a third acquisition prompt output module, configured to output third acquisition prompt information in real time based on the relative position relationship between the face region and the preset region, so as to prompt the person to be authenticated to change the relative position relationship with the image acquisition device so that the face region approaches the preset region.
Illustratively, the condition judgment module includes: the gesture information acquisition module is used for acquiring gesture information of the image acquisition device; and the third judging sub-module is used for judging whether the image acquisition device is in a vertical placement state according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, and if not, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the identity authentication device further comprises: a second counting module, configured to count once each time the illumination image acquisition sub-module through the illumination liveness verification sub-module are run, so as to obtain an illumination verification count; and the identity authentication device further includes: a second verification error execution module, configured to, if the illumination liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, output second error information and judge whether the illumination verification count reaches a second count threshold; to start the liveness verification pass determination sub-module if the illumination verification count reaches the second count threshold; and to restart the condition judging module or restart the illumination image acquisition sub-module if the illumination verification count does not reach the second count threshold; wherein the second error information is used for prompting that the liveness verification of the person to be authenticated has failed.
Illustratively, the identity authentication device further comprises: a second prompt information output module, configured to output second prompt information before the illumination image acquisition sub-module acquires the one or more illumination images acquired for the face of the person to be authenticated under irradiation of the detection light, or while the illumination image acquisition sub-module acquires the one or more illumination images and the illumination liveness verification sub-module determines, based on the light reflection characteristics exhibited by the face of the person to be authenticated in the one or more illumination images, whether the face of the person to be authenticated belongs to a living body, wherein the second prompt information is used for prompting the person to be authenticated to remain still within a second preset time.
The second prompt information is countdown information corresponding to a second preset time.
Illustratively, the illumination liveness verification sub-module is configured to: determine, based on the light reflection characteristics exhibited by the face of the person to be authenticated in the one or more illumination images and through a specific algorithm corresponding to the detection light pattern, whether the face of the person to be authenticated belongs to a living body, so as to obtain the illumination liveness verification result.
Illustratively, the personal identification information is one or more of a credential number, a name, a credential face, a transformed value of the credential number, a transformed value of the name, a transformed value of the credential face, and a transformed value of the field acquisition face.
Illustratively, the identity authentication device further comprises: an indication information output module, configured to output, before the information acquisition module acquires the personal identification information of the person to be authenticated, indication information for instructing the person to be authenticated to provide personnel information of a predetermined type; wherein the personal identification information is the personnel information provided by the person to be authenticated or is obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a field acquisition face, and the information acquisition module includes: the initial information acquisition sub-module is used for acquiring initial information of a person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a field acquisition face; and a transformation sub-module for transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as personal identification information.
Illustratively, the information acquisition module includes: the certificate image acquisition sub-module is used for acquiring a certificate image of a person to be authenticated, wherein the information authentication result is a certificate authentication result; the living body detection module includes: the face image acquisition sub-module is used for acquiring a face image of a person to be authenticated; and the detection submodule is used for performing living body detection by using the face image so as to obtain a living body detection result.
Illustratively, the identity authentication device further comprises: the additional judging module is used for executing additional judging operation by using the certificate image and/or the face image before the identity determining module determines whether the identity of the personnel to be authenticated is legal or not at least according to the information authentication result and the living body detection result so as to obtain an additional judging result; the identity determination module comprises: and the identity determination submodule is used for determining whether the identity of the personnel to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judging module comprises a certificate authenticity judging sub-module and/or a face consistency judging sub-module, the certificate authenticity judging sub-module is used for executing certificate authenticity judging operation, the face consistency judging sub-module is used for executing face consistency judging operation, and the additional judging result comprises certificate authenticity judging result and/or face consistency judging result,
the certificate authenticity judging submodule comprises: the certificate authenticity judging unit is used for judging whether the certificate in the certificate image is a real certificate or not so as to obtain a certificate authenticity judging result; the face consistency judging submodule comprises: the certificate face acquisition unit is used for acquiring the certificate face of the person to be authenticated according to the certificate image; and the face comparison unit is used for comparing the certificate face of the person to be authenticated with the face in the face image so as to obtain a face consistency judgment result.
Illustratively, the certificate face acquisition unit includes: and the certificate face detection subunit is used for detecting faces from the certificate images so as to obtain the certificate faces of the personnel to be authenticated.
Illustratively, the certificate face acquisition unit includes: the character recognition sub-module is used for carrying out character recognition on the certificate image so as to obtain character information in the certificate image; a searching subunit for searching for matching document information from the authenticated document information database based on the text information in the document image; and the certificate face determining subunit is used for determining that the certificate face in the searched and matched certificate information is the certificate face of the person to be authenticated.
Illustratively, the certificate authenticity judging unit includes: a feature extraction subunit, configured to extract image features of the document image; the input subunit is used for inputting the image characteristics into the trained certificate classifier so as to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
Illustratively, the certificate authenticity judging unit includes: an image block identification subunit for identifying an image block containing identification information of the certificate from the certificate image; the identification information identification subunit is used for identifying the certificate identification information in the image block containing the certificate identification information so as to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
Illustratively, in the process that the identity determination submodule determines whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result and the additional judgment result, each result of the certificate authentication result, the living body detection result and the additional judgment result has a respective weight coefficient.
Illustratively, the credential image acquisition submodule includes: the acquisition unit is used for acquiring a pre-shooting image acquired in real time aiming at the certificate of the person to be authenticated under the current shooting condition; an evaluation unit for evaluating image attributes of the pre-photographed image in real time; the prompting unit is used for generating pre-shooting prompting information according to the image attribute of the pre-shooting image when the evaluation value of the image attribute of the pre-shooting image is smaller than a preset evaluation value threshold value, and prompting a person to be authenticated to adjust the shooting condition of the certificate; and a saving unit configured to save the pre-captured image as a document image when an evaluation value of an image attribute of the pre-captured image is equal to or greater than a preset evaluation value threshold.
Illustratively, the authenticated information judging module includes: the character recognition sub-module is used for carrying out character recognition on the certificate image so as to obtain character information in the certificate image; the searching sub-module is used for searching in the authenticated certificate information database based on the text information in the certificate image so as to obtain a certificate authentication result; the certificate authentication result is the confidence that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the text recognition submodule includes: the character positioning unit is used for positioning characters in the certificate image to obtain an image block containing the characters; and the character recognition unit is used for recognizing characters in the image block containing the characters so as to obtain character information in the certificate image.
Illustratively, the identity authentication device further comprises: and the character correction module is used for correcting the image block containing the characters into a horizontal state before the character recognition unit recognizes the characters in the image block containing the characters.
Illustratively, the text recognition sub-module further comprises: a text output unit for outputting the text information in the certificate image for a user to check; a correction information receiving unit for receiving text correction information input by the user; a text comparison unit for comparing the text to be corrected, as indicated by the text correction information, with the corresponding text in the text information in the certificate image; and a text updating unit for updating the text information in the certificate image with the text correction information if the difference between the text to be corrected and the corresponding text in the text information in the certificate image is smaller than a preset difference threshold.
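A minimal sketch of the comparison performed before accepting a user correction is given below; the use of difflib from the Python standard library and the difference threshold of 0.5 are assumptions made only for illustration:

import difflib

def apply_text_correction(recognized: str, corrected: str, diff_threshold: float = 0.5) -> str:
    # Difference is measured as 1 minus the similarity ratio between the OCR text
    # and the text to be corrected supplied by the user.
    difference = 1.0 - difflib.SequenceMatcher(None, recognized, corrected).ratio()
    # Update the text information only when the difference is below the preset threshold.
    return corrected if difference < diff_threshold else recognized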
Illustratively, the living body detection module includes: an instruction generation sub-module for generating a living body action instruction, wherein the living body action instruction is used for instructing the person to be authenticated to execute a corresponding living body action; a real-time image acquisition sub-module for acquiring face images of the person to be authenticated collected in real time; a face detection sub-module for detecting a face in the face images; and a living body action execution judging sub-module for judging whether the face in the face image executes the living body action indicated by the living body action instruction, so as to obtain a living body detection result.
Illustratively, the living body detection module includes: an instruction generation sub-module for generating a living body action instruction, wherein the living body action instruction is used for instructing the person to be authenticated to execute a corresponding living body action; a real-time image acquisition sub-module for acquiring face images of the person to be authenticated collected in real time; a skin region capturing sub-module for capturing, from the acquired face images, a skin region image before the person to be authenticated executes the living body action and a skin region image after the living body action is executed; and an input sub-module for inputting the skin region image before the living body action is executed and the skin region image after the living body action is executed into the skin elasticity classifier to obtain a living body detection result.
Illustratively, the identity authentication device includes: the true and false image acquisition module is used for acquiring a face image before a real person executes a living body action and a face image after the living body action is executed, and a face image before a false person executes the living body action and a face image after the living body action is executed; the positive sample extraction sub-module is used for extracting a face image before the living body action of the real person and a skin area image after the living body action of the real person from the face image before the living body action of the real person and the face image after the living body action of the real person as positive sample images; the negative sample extraction sub-module is used for extracting a face image before the false person executes the living body action and a skin area image after the false person executes the living body action from the face image before the false person executes the living body action and the face image after the false person executes the living body action as negative sample images; and a training sub-module for training the classifier model with the positive sample image and the negative sample image to obtain the skin elasticity classifier.
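As one possible, non-limiting realization of training such a skin elasticity classifier (the choice of a support vector machine from scikit-learn and the simple pixel-concatenation features are assumptions for illustration only):

import numpy as np
from sklearn.svm import SVC

def train_skin_elasticity_classifier(positive_pairs, negative_pairs):
    # Each pair is (skin_patch_before_action, skin_patch_after_action), two uint8
    # image arrays of identical shape; positives come from real persons, negatives
    # from fake persons (for example photos or masks).
    def featurize(pair):
        before, after = pair
        return np.concatenate([before.ravel(), after.ravel()]).astype(np.float32) / 255.0

    features = np.array([featurize(p) for p in positive_pairs + negative_pairs])
    labels = np.array([1] * len(positive_pairs) + [0] * len(negative_pairs))
    classifier = SVC(probability=True)  # probability output can serve as a liveness confidence
    classifier.fit(features, labels)
    return classifier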
Illustratively, the skin region capture submodule includes: the image selection unit is used for selecting a face image before the living body action of the person to be authenticated is executed and a face image after the living body action is executed from the collected face images; the face positioning unit is used for positioning the face in the face image before the person to be authenticated performs the living body action and the face in the face image after the living body action by using the face detection model; the key point positioning unit is used for positioning key points of the human face in the human face image before the living body action of the person to be authenticated is executed and the human face image after the living body action is executed by using the human face key point positioning model; and a skin region obtaining unit for dividing the region of the face in the face image before the living body action of the person to be authenticated is performed and the face in the face image after the living body action is performed according to the face position and the key point position obtained by the positioning, so as to obtain a skin region image before the living body action of the person to be authenticated is performed and a skin region image after the living body action is performed.
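The sketch below shows one plausible way to crop a skin region from a located face using two landmark points; the landmark indexing convention (index 0 as an eye corner, index 1 as the mouth corner on the same side) and the 64-pixel patch size are assumptions, not the claimed key point positioning model:

import numpy as np

def crop_skin_region(image: np.ndarray, landmarks: np.ndarray, size: int = 64) -> np.ndarray:
    # Take the mid-point between an eye corner (index 0) and the mouth corner on the
    # same side (index 1) as the centre of a cheek patch, then crop a square region.
    center = ((landmarks[0] + landmarks[1]) / 2.0).astype(int)
    half = size // 2
    y0 = max(int(center[1]) - half, 0)
    x0 = max(int(center[0]) - half, 0)
    return image[y0:y0 + size, x0:x0 + size]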
Illustratively, the identity authentication device further comprises: the sample image acquisition module is used for acquiring a sample face image, wherein the positions of the faces and the positions of key points of the faces in the sample face image are marked; and the training module is used for training the neural network by using the sample face image so as to obtain a face detection model and a face key point positioning model.
Illustratively, the living body detection module includes: a structured light image acquisition sub-module for acquiring a face image collected by a binocular camera for the face of the person to be authenticated under the irradiation of structured light; and a living body determining sub-module for determining, according to the face image, whether the face of the person to be authenticated belongs to a living body, so as to obtain a living body detection result.
According to another aspect of the present invention there is provided an identity authentication device comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the steps of: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on personnel to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
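For readability, the four steps performed by the processor can be summarized by the following Python sketch; the callable parameters, the confidence scale in [0, 1] and the 0.5 threshold are assumptions introduced only for illustration:

from typing import Callable

def authenticate(acquire_id_info: Callable[[], dict],
                 check_authenticated: Callable[[dict], float],
                 detect_liveness: Callable[[], float],
                 threshold: float = 0.5) -> bool:
    id_info = acquire_id_info()                 # personal identification information
    info_result = check_authenticated(id_info)  # information authentication result (confidence)
    liveness_result = detect_liveness()         # living body detection result (confidence)
    # The identity is judged legal at least according to the two results above.
    return info_result >= threshold and liveness_result >= threshold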
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, when the computer program instructions are executed by the processor, comprises: step S110: acquiring one or more illumination images collected for the face of the person to be authenticated under irradiation of detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; step S130: determining whether the person to be authenticated passes living body verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
Illustratively, the pattern of the detection light is changed at least once during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments in time in the process of illuminating the face of the person to be authenticated.
Illustratively, the pattern of the detection light is randomly changed or preset during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light includes at least one of a color of the detection light, a position of a light emitting region, an intensity of the detection light, an irradiation angle of the detection light, a wavelength of the detection light, and a frequency of the detection light.
Illustratively, the detection light is obtained by dynamically changing the color and/or position of light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of the light emitted by the display screen is dynamically changed by changing the content displayed on the display screen so as to emit the detection light to the face of the person to be authenticated.
Illustratively, the dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen includes: the pattern of light emitted by the display screen is dynamically changed by changing the color of the pixels and/or the position of the light emitting areas on the display screen.
Illustratively, the display screen is divided into a plurality of light emitting areas, and the mode of light emitted by the display screen is dynamically changed by changing the color of pixels and/or the position of the light emitting areas on the display screen, including: for each light-emitting area on the display screen, randomly selecting one RGB value in a preset RGB value range at a time as the color value of the light-emitting area for display.
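A minimal sketch of picking a random colour per light-emitting area is shown below; dividing the screen into four areas and using the full 0 to 255 range as the preset RGB value range are illustrative assumptions:

import random

def next_light_pattern(num_regions: int = 4,
                       rgb_range=((0, 255), (0, 255), (0, 255))):
    # For each light-emitting area of the display screen, randomly select one RGB
    # value within the preset RGB value range as the colour value of that area.
    return [tuple(random.randint(low, high) for low, high in rgb_range)
            for _ in range(num_regions)]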
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, when the computer program instructions are executed by the processor, comprises: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to execute a corresponding action; step S150: acquiring a plurality of action images collected for the face of the person to be authenticated; step S160: detecting the action performed by the person to be authenticated based on the plurality of action images; step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action liveness verification result; and step S130, which is performed by the processor when the computer program instructions are executed, comprises: determining whether the person to be authenticated passes living body verification based on the illumination liveness verification result and the action liveness verification result, so as to obtain the living body detection result.
Illustratively, step S170, which is performed by the processor when the computer program instructions are executed, comprises: if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is detected based on the plurality of action images acquired within a period of not more than the first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is not detected based on the plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
The computer program instructions, when executed by the processor, are also used for performing the following steps: counting once each time steps S140 to S170 are executed, so as to obtain the number of action verification times; after step S170, the computer program instructions, when executed by the processor, further perform the following steps: if the action liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting first error information and judging whether the number of action verification times reaches a first count threshold; if the number of action verification times reaches the first count threshold, turning to step S130; and if the number of action verification times does not reach the first count threshold, returning to step S140, or returning to step S110 in the case that step S110 is executed before step S140, wherein the first error information is used for prompting that the liveness verification of the person to be authenticated has failed.
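The retry logic around steps S140 to S170 might be organized as in the sketch below; run_action_check stands for one execution of steps S140 to S170, and the first count threshold of 3 is an assumed value:

def action_liveness_with_retries(run_action_check, max_attempts: int = 3) -> bool:
    # run_action_check() performs steps S140 to S170 once and returns True if the
    # indicated action was detected within the first preset time.
    for _ in range(max_attempts):
        if run_action_check():
            return True
        # First error information: prompt that the liveness verification has failed.
        print("Liveness verification failed, please follow the action instruction and try again.")
    # Count threshold reached: hand over to the final decision of step S130.
    return False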
Illustratively, prior to step S110, which is performed by the computer program instructions when executed by the processor, the computer program instructions when executed by the processor are further configured to perform the steps of: step S108: judging whether the image acquisition condition of the person to be authenticated meets the preset requirement, and if the image acquisition condition meets the preset requirement, turning to step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle of the person to be authenticated and the image acquisition device.
Illustratively, prior to step S108 or concurrently with step S108, the computer program instructions, when executed by the processor, are further used for performing the following step: step S106: outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device and approach the image acquisition device.
Illustratively, step S106, which is performed by the processor when the computer program instructions are executed, comprises: outputting the first prompt information in one or more of a voice form, an image form and a text form.
Illustratively, step S108, which is performed by the processor when the computer program instructions are executed, comprises: acquiring a real-time image acquired by aiming at the face of a person to be authenticated; outputting a preset area for calibrating the image acquisition conditions in real time and displaying a face area in a real-time image; and judging whether the image acquisition condition meets the preset requirement according to the face area detected in the real-time image, if the face area is positioned in the preset area and the proportion of the face area in the real-time image is larger than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not positioned in the preset area or the proportion of the face area in the real-time image is not larger than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
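One way to express the check of step S108 against the preset area is sketched below; the rectangle representation (x0, y0, x1, y1) and the first preset proportion of 0.1 are assumptions:

def acquisition_condition_met(face_box, preset_box, image_width: int, image_height: int,
                              min_face_ratio: float = 0.1) -> bool:
    fx0, fy0, fx1, fy1 = face_box
    px0, py0, px1, py1 = preset_box
    # The face area must lie inside the preset area.
    inside = fx0 >= px0 and fy0 >= py0 and fx1 <= px1 and fy1 <= py1
    # The face area must also occupy more than the first preset proportion of the real-time image.
    face_ratio = (fx1 - fx0) * (fy1 - fy0) / float(image_width * image_height)
    return inside and face_ratio > min_face_ratio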
The computer program instructions, when executed by the processor, are also for performing the steps of: if the proportion of the face area in the real-time image is not greater than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, step S108, which is performed by the processor when the computer program instructions are executed, comprises: acquiring a real-time image acquired by aiming at the face of a person to be authenticated; outputting a preset area for calibrating the image acquisition conditions in real time and displaying a face area in a real-time image; and judging whether the image acquisition condition meets the preset requirement according to the human face region, if the human face region is positioned in the preset region and the proportion of the human face region in the preset region is larger than the second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the human face region is not positioned in the preset region or the proportion of the human face region in the preset region is not larger than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
The computer program instructions, when executed by the processor, are also for performing the steps of: if the proportion of the face area in the preset area is not greater than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to be close to the image acquisition device.
The computer program instructions, when executed by the processor, are also used for performing the following steps: judging the relative position relationship between the face area and the preset area in real time; and outputting third acquisition prompt information in real time based on the relative position relationship between the face area and the preset area, so as to prompt the person to be authenticated to change his or her position relative to the image acquisition device so that the face area approaches the preset area.
Illustratively, step S108, which is performed by the processor when the computer program instructions are executed, comprises: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, otherwise, determining that the image acquisition condition does not meet the preset requirement.
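A simple sketch of the vertical-placement check is given below; reading pitch and roll angles from the device's attitude sensor and the 10-degree tolerance are assumptions:

def is_vertically_placed(pitch_deg: float, roll_deg: float, tolerance_deg: float = 10.0) -> bool:
    # pitch_deg and roll_deg are the deviations of the image acquisition device from
    # the upright (vertically placed) orientation, for example derived from an accelerometer.
    return abs(pitch_deg) <= tolerance_deg and abs(roll_deg) <= tolerance_deg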
The computer program instructions, when executed by the processor, are also used for performing the following steps: counting once each time steps S110 to S120 are executed, so as to obtain the number of illumination verification times; after step S120, the computer program instructions, when executed by the processor, are further configured to perform the following steps: if the illumination liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the number of illumination verification times reaches a second count threshold; if the number of illumination verification times reaches the second count threshold, turning to step S130; and if the number of illumination verification times does not reach the second count threshold, returning to step S108 or returning to step S110, wherein the second error information is used for prompting that the liveness verification of the person to be authenticated has failed.
Illustratively, prior to step S110 or during steps S110 and S120 when the computer program instructions are executed by the processor, the computer program instructions are further configured to perform the steps of: outputting second prompt information, wherein the second prompt information is used for prompting the personnel to be authenticated to remain motionless within a second preset time.
The second prompt information is countdown information corresponding to a second preset time.
Illustratively, in the process of illuminating the face of the person to be authenticated, the pattern of the detection light is not changed, and the pattern of the detection light is a specific pattern selected after optimization according to experimental data; step S120 then includes: determining, based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, whether the face of the person to be authenticated belongs to a living body through a specific algorithm corresponding to the pattern of the detection light, so as to obtain the illumination liveness verification result.
Illustratively, the personal identification information is one or more of a credential number, a name, a credential face, a live acquisition face, a transformed value of the credential number, a transformed value of the name, a transformed value of the credential face, and a transformed value of the live acquisition face.
Illustratively, before the step of acquiring the personal identification information of the person to be authenticated, the computer program instructions, when executed by the processor, are further used for performing the following step: outputting indication information for instructing the person to be authenticated to provide personnel information of a predetermined type; wherein the personal identification information is the personnel information provided by the person to be authenticated or is obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a field acquisition face, and the step of obtaining personal identification information of a person to be authenticated, which is performed by the processor when the computer program instructions are executed, comprises: acquiring initial information of a person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a field acquisition face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as personal identification information.
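As one example of such a predetermined algorithm for producing a transformed value (a salted SHA-256 hash is only an assumption; any one-way transformation could play this role):

import hashlib

def transform_value(initial_info: str, salt: str = "") -> str:
    # Transform the certificate number, name or face feature string into a value
    # that can be matched against stored transformed values without exposing the original.
    return hashlib.sha256((salt + initial_info).encode("utf-8")).hexdigest()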
Illustratively, the step of acquiring the personal identification information of the person to be authenticated, which is performed when the computer program instructions are executed by the processor, comprises: acquiring a certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result; and the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which is performed when the computer program instructions are executed by the processor, comprises: acquiring a face image of the person to be authenticated; and performing living body detection by using the face image to obtain the living body detection result.
Illustratively, before the step of determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, which is performed when the computer program instructions are executed by the processor, the computer program instructions are further configured to perform the following step: performing an additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; and the step of determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, which is performed when the computer program instructions are executed by the processor, comprises: determining whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judging operation includes a document authenticity judging operation and/or a face consistency judging operation, and the additional judging result includes a document authenticity judging result and/or a face consistency judging result, the document authenticity judging operation includes: judging whether the certificate in the certificate image is a true certificate or not so as to obtain a certificate true or false judgment result; the face consistency judging operation comprises the following steps: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judging result.
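The face consistency judgment could, for example, compare feature vectors of the certificate face and the live face by cosine similarity, as in the sketch below; the source of the embeddings and the 0.6 threshold are assumptions:

import numpy as np

def face_consistency(certificate_embedding: np.ndarray,
                     live_embedding: np.ndarray,
                     threshold: float = 0.6):
    # Normalize both feature vectors and compute their cosine similarity.
    a = certificate_embedding / np.linalg.norm(certificate_embedding)
    b = live_embedding / np.linalg.norm(live_embedding)
    similarity = float(np.dot(a, b))
    # Return the similarity as the face consistency judgment result and a pass flag.
    return similarity, similarity >= threshold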
Illustratively, the step of acquiring the certificate face of the person to be authenticated from the certificate image, which is performed when the computer program instructions are executed by the processor, comprises: detecting a face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, the steps for acquiring a credential face of a person to be authenticated from a credential image, when the computer program instructions are executed by the processor, comprise: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from an authenticated certificate information database based on text information in the certificate image; and determining the certificate face in the searched and matched certificate information as the certificate face of the person to be authenticated.
Illustratively, the step of determining whether the document in the document image is a genuine document, when the computer program instructions are executed by the processor, to obtain a document authenticity determination result, comprises: extracting image features of the certificate image; inputting the image characteristics into a trained certificate classifier to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
Illustratively, the step of determining whether the document in the document image is a genuine document, when the computer program instructions are executed by the processor, to obtain a document authenticity determination result, comprises: identifying an image block containing identification information of the certificate from the certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
Illustratively, in the step, performed when the computer program instructions are executed by the processor, of determining whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result and the additional judgment result, each of the certificate authentication result, the living body detection result and the additional judgment result has a respective weight coefficient.
Illustratively, the steps of acquiring a credential image of a person to be authenticated, which are performed when the computer program instructions are executed by the processor, comprise: acquiring a pre-shooting image which is acquired in real time aiming at credentials of a person to be authenticated under the current shooting condition; evaluating image attributes of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold, generating pre-shot prompt information according to the image attribute of the pre-shot image, and prompting a person to be authenticated to adjust the shooting condition of the certificate; and when the evaluation value of the image attribute of the pre-shot image is equal to or greater than a preset evaluation value threshold, saving the pre-shot image as a document image.
Illustratively, the step of determining whether the personal identification information is authenticated information for obtaining an information authentication result, as performed by the computer program instructions when executed by the processor, comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching in an authenticated certificate information database based on the text information in the certificate image to obtain a certificate authentication result; the certificate authentication result is the confidence that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the step of performing text recognition on the document image for obtaining text information in the document image when the computer program instructions are executed by the processor comprises: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
The computer program instructions, when executed by the processor, are further used, before the step of recognizing the characters in the image block containing the characters, for performing the following step: correcting the image block containing the characters to a horizontal state.
Illustratively, after the step of recognizing the text in the image block containing the text to obtain the text information in the certificate image, which is performed when the computer program instructions are executed by the processor, the step of performing text recognition on the certificate image to obtain the text information in the certificate image, which is performed when the computer program instructions are executed by the processor, further comprises: outputting the text information in the certificate image for viewing by a user; receiving text correction information input by the user; comparing the text to be corrected, as indicated by the text correction information, with the corresponding text in the text information in the certificate image; and if the difference between the text to be corrected and the corresponding text in the text information in the certificate image is smaller than a preset difference threshold, updating the text information in the certificate image with the text correction information.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, when the computer program instructions are executed by the processor, comprises: generating a living body action instruction, wherein the living body action instruction is used for instructing the person to be authenticated to execute a corresponding living body action; acquiring face images of the person to be authenticated collected in real time; detecting a face in the face images; and judging whether the face in the face image executes the living body action indicated by the living body action instruction, so as to obtain a living body detection result.
Illustratively, the step of performing the living body detection of the person to be authenticated to obtain the living body detection result when the computer program instructions are executed by the processor further comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring face images of personnel to be authenticated acquired in real time; capturing a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed from the acquired face image; and inputting the skin region image before the living body action is executed by the person to be authenticated and the skin region image after the living body action is executed into a skin elasticity classifier to obtain a living body detection result.
The computer program instructions, when executed by the processor, are also for performing the steps of: acquiring a face image before a real person executes a living body action and a face image after the living body action, and acquiring a face image before a false person executes the living body action and a face image after the living body action; extracting a face image before the living body action of the real person and a skin area image after the living body action of the real person from the face image before the living body action of the real person and the face image after the living body action of the real person as positive sample images; extracting a face image before the false person executes the living body action and a skin area image after the false person executes the living body action from the face image before the false person executes the living body action and the face image after the false person executes the living body action as negative sample images; and training a classifier model using the positive sample image and the negative sample image to obtain a skin elasticity classifier.
Illustratively, the steps for capturing an image of a skin area of a person to be authenticated before and after performing a living body action from the acquired face images, when the computer program instructions are executed by the processor, comprise: selecting a face image before the living body action of the person to be authenticated and a face image after the living body action of the person to be authenticated from the collected face images; positioning a face in a face image before the living body action of the person to be authenticated is executed and a face in a face image after the living body action is executed by using a face detection model; positioning key points of a face in a face image before the living body action of a person to be authenticated is executed and a face image after the living body action is executed by using a face key point positioning model; and dividing the areas of the faces in the face image before the living body action of the person to be authenticated is executed and the face in the face image after the living body action is executed according to the face positions and the key point positions obtained by positioning, so as to obtain a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed.
The computer program instructions, when executed by the processor, are also for performing the steps of: obtaining a sample face image, wherein the positions of the faces and the positions of key points of the faces in the sample face image are marked; and training a neural network by using the sample face image to obtain a face detection model and a face key point positioning model.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, when the computer program instructions are executed by the processor, comprises: acquiring a face image collected by a binocular camera for the face of the person to be authenticated under the irradiation of structured light; and determining, according to the face image, whether the face of the person to be authenticated belongs to a living body, so as to obtain a living body detection result.
According to another aspect of the present invention, there is provided a storage medium having stored thereon program instructions which, when executed, are adapted to carry out the steps of: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on personnel to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the program instructions are used to perform at runtime, includes: step S110: acquiring one or more illumination images collected for the face of the person to be authenticated under irradiation of detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; step S130: determining whether the person to be authenticated passes living body verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
Illustratively, the pattern of the detection light is changed at least once during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments in time in the process of illuminating the face of the person to be authenticated.
Illustratively, the pattern of the detection light is randomly changed or preset during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light includes at least one of a color of the detection light, a position of a light emitting region, an intensity of the detection light, an irradiation angle of the detection light, a wavelength of the detection light, and a frequency of the detection light.
Illustratively, the detection light is obtained by dynamically changing the color and/or position of light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of the light emitted by the display screen is dynamically changed by changing the content displayed on the display screen so as to emit the detection light to the face of the person to be authenticated.
Illustratively, the dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen includes: the pattern of light emitted by the display screen is dynamically changed by changing the color of the pixels and/or the position of the light emitting areas on the display screen.
Illustratively, the display screen is divided into a plurality of light emitting areas, and the mode of light emitted by the display screen is dynamically changed by changing the color of pixels and/or the position of the light emitting areas on the display screen, including: for each light-emitting area on the display screen, randomly selecting one RGB value in a preset RGB value range at a time as the color value of the light-emitting area for display.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the program instructions are used to perform at runtime, includes: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to execute a corresponding action; step S150: acquiring a plurality of action images collected for the face of the person to be authenticated; step S160: detecting the action performed by the person to be authenticated based on the plurality of action images; step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action liveness verification result; and step S130, which the program instructions are used to perform at runtime, includes: determining whether the person to be authenticated passes living body verification based on the illumination liveness verification result and the action liveness verification result, so as to obtain the living body detection result.
Illustratively, step S170, which is used by the program instructions at runtime, includes: if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is detected based on the plurality of action images acquired within a period of not more than the first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is not detected based on the plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
Illustratively, the program instructions, when executed, are further used for performing the following steps: counting once each time steps S140 to S170 are executed, so as to obtain the number of action verification times; after step S170, which the program instructions are used to perform at runtime, the program instructions are further used at runtime to perform the following steps: if the action liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting first error information and judging whether the number of action verification times reaches a first count threshold; if the number of action verification times reaches the first count threshold, turning to step S130; and if the number of action verification times does not reach the first count threshold, returning to step S140, or returning to step S110 in the case that step S110 is executed before step S140, wherein the first error information is used for prompting that the liveness verification of the person to be authenticated has failed.
Illustratively, prior to step S110, which the program instructions are used to execute at runtime, the program instructions are further used at runtime to perform the steps of: step S108: judging whether the image acquisition condition of the person to be authenticated meets the preset requirement, and if the image acquisition condition meets the preset requirement, turning to step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle of the person to be authenticated and the image acquisition device.
Illustratively, prior to step S108 or concurrently with step S108, which the program instructions are used to perform at runtime, the program instructions are further used at runtime to perform the following step: step S106: outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device and approach the image acquisition device.
Illustratively, step S106, which is used by the program instructions at runtime, includes: and outputting the first prompt information in one or more of a voice form, an image form and a text form.
Illustratively, step S108, which the program instructions are used to execute at runtime, includes: acquiring a real-time image acquired by aiming at the face of a person to be authenticated; outputting a preset area for calibrating the image acquisition conditions in real time and displaying a face area in a real-time image; and judging whether the image acquisition condition meets the preset requirement according to the face area detected in the real-time image, if the face area is positioned in the preset area and the proportion of the face area in the real-time image is larger than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not positioned in the preset area or the proportion of the face area in the real-time image is not larger than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
The program instructions are, illustratively, further operable when executed to perform the steps of: if the proportion of the face area in the real-time image is not greater than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, step S108, which the program instructions are used to execute at runtime, includes: acquiring a real-time image acquired by aiming at the face of a person to be authenticated; outputting a preset area for calibrating the image acquisition conditions in real time and displaying a face area in a real-time image; and judging whether the image acquisition condition meets the preset requirement according to the human face region, if the human face region is positioned in the preset region and the proportion of the human face region in the preset region is larger than the second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the human face region is not positioned in the preset region or the proportion of the human face region in the preset region is not larger than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
The program instructions are, illustratively, further operable when executed to perform the steps of: if the proportion of the face area in the preset area is not greater than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to be close to the image acquisition device.
Illustratively, the program instructions, when executed, are further used for performing the following steps: judging the relative position relationship between the face area and the preset area in real time; and outputting third acquisition prompt information in real time based on the relative position relationship between the face area and the preset area, so as to prompt the person to be authenticated to change his or her position relative to the image acquisition device so that the face area approaches the preset area.
Illustratively, step S108, which the program instructions are used to execute at runtime, includes: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, otherwise, determining that the image acquisition condition does not meet the preset requirement.
Illustratively, the program instructions, when executed, are further used for performing the following steps: counting once each time steps S110 to S120 are executed, so as to obtain the number of illumination verification times; after step S120, which the program instructions are used to perform at runtime, the program instructions are further used at runtime to perform the following steps: if the illumination liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the number of illumination verification times reaches a second count threshold; if the number of illumination verification times reaches the second count threshold, turning to step S130; and if the number of illumination verification times does not reach the second count threshold, returning to step S108 or returning to step S110, wherein the second error information is used for prompting that the liveness verification of the person to be authenticated has failed.
Illustratively, before step S110, or during steps S110 and S120, which the program instructions are used to perform at runtime, the program instructions are further used at runtime to perform the following step: outputting second prompt information, wherein the second prompt information is used for prompting the person to be authenticated to remain motionless for a second preset time.
The second prompt information is countdown information corresponding to a second preset time.
Illustratively, in the process of illuminating the face of the person to be authenticated, the pattern of the detection light is not changed, and the pattern of the detection light is a specific pattern selected after optimization according to experimental data; step S120 then includes: determining, based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, whether the face of the person to be authenticated belongs to a living body through a specific algorithm corresponding to the pattern of the detection light, so as to obtain the illumination liveness verification result.
Illustratively, the personal identification information is one or more of a credential number, a name, a credential face, a live acquisition face, a transformed value of the credential number, a transformed value of the name, a transformed value of the credential face, and a transformed value of the live acquisition face.
Illustratively, before the program instructions are used at run-time to obtain personal identification information of the person to be authenticated, the program instructions are further used at run-time to perform the steps of: outputting indication information for indicating personnel to be authenticated to provide personnel information of a predetermined type; wherein the personal identification information is personnel information provided by personnel to be authenticated or is obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformation value of a certificate number, a transformation value of a name, a transformation value of a certificate face and a transformation value of a field acquisition face, and the step of acquiring the personal identification information of the person to be authenticated, which is used by the program instructions in operation, includes: acquiring initial information of a person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face and a field acquisition face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as personal identification information.
Illustratively, the step of obtaining personal identification information of the person to be authenticated, which the program instructions are used to perform at run-time, comprises: acquiring a certificate image of a person to be authenticated, wherein the information authentication result is a certificate authentication result; the step of performing living body detection on a person to be authenticated, which is executed by the program instructions in the running process, to obtain a living body detection result comprises the following steps: acquiring a face image of a person to be authenticated; and performing living body detection by using the face image to obtain a living body detection result.
Illustratively, before the step of determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, which the program instructions are used to perform at runtime, the program instructions are further used at runtime to perform the following step: performing an additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; and the step of determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, which the program instructions are used to perform at runtime, comprises: determining whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result and the additional judgment result.
Illustratively, the additional judging operation includes a document authenticity judging operation and/or a face consistency judging operation, and the additional judging result includes a document authenticity judging result and/or a face consistency judging result, the document authenticity judging operation includes: judging whether the certificate in the certificate image is a true certificate or not so as to obtain a certificate true or false judgment result; the face consistency judging operation comprises the following steps: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judging result.
Illustratively, the steps for acquiring a credential face of a person to be authenticated from a credential image, performed by the program instructions at run-time, include: and detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, the steps for acquiring a credential face of a person to be authenticated from a credential image, performed by the program instructions at run-time, include: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from an authenticated certificate information database based on text information in the certificate image; and determining the certificate face in the searched and matched certificate information as the certificate face of the person to be authenticated.
Illustratively, the step of determining whether the document in the document image is a genuine document for execution by the program instructions when executed to obtain a document authenticity determination result comprises: extracting image features of the certificate image; inputting the image characteristics into a trained certificate classifier to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
Illustratively, the step of determining whether the document in the document image is a genuine document for execution by the program instructions when executed to obtain a document authenticity determination result comprises: identifying an image block containing identification information of the certificate from the certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
Illustratively, in the step, which the program instructions are used to perform at runtime, of determining whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result and the additional judgment result, each of the certificate authentication result, the living body detection result and the additional judgment result has a respective weight coefficient.
Illustratively, the step of acquiring a credential image of the person to be authenticated, for which the program instructions are used at run-time, comprises: acquiring a pre-shooting image which is acquired in real time aiming at credentials of a person to be authenticated under the current shooting condition; evaluating image attributes of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold, generating pre-shot prompt information according to the image attribute of the pre-shot image, and prompting a person to be authenticated to adjust the shooting condition of the certificate; and when the evaluation value of the image attribute of the pre-shot image is equal to or greater than a preset evaluation value threshold, saving the pre-shot image as a document image.
Illustratively, the step of determining whether the personal identification information is authenticated information for execution by the program instructions at run-time to obtain the information authentication result comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching in an authenticated certificate information database based on the text information in the certificate image to obtain a certificate authentication result; the certificate authentication result is the confidence that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the step of performing character recognition on the certificate image to obtain text information in the certificate image, which the program instructions are used to perform at run-time, includes: positioning characters in the certificate image to obtain an image block containing the characters; and recognizing the characters in the image block containing the characters to obtain the text information in the certificate image.
Illustratively, prior to the step of recognizing the characters in the image block containing the characters, which the program instructions are used to perform at run-time, the program instructions are further used at run-time to perform the step of: correcting the image block containing the characters into a horizontal orientation.
Illustratively, after the step of recognizing the characters in the image block containing the characters to obtain the text information in the certificate image, the program instructions are further used at run-time to perform the steps of: outputting the text information in the certificate image for viewing by a user; receiving text correction information input by the user; comparing the text to be corrected indicated by the text correction information with the corresponding text in the text information in the certificate image; and if the difference between the text to be corrected indicated by the text correction information and the corresponding text in the text information in the certificate image is smaller than a preset difference threshold, updating the text information in the certificate image with the text correction information.
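The correction step can be sketched as below, using difflib from the Python standard library as one possible way to measure the difference between the recognised text and the user's correction; the 0.5 difference threshold is an assumed value.

```python
import difflib

def apply_correction(recognized_text, corrected_text, max_difference=0.5):
    # Difference is taken here as 1 minus the similarity ratio of the two strings.
    similarity = difflib.SequenceMatcher(None, recognized_text, corrected_text).ratio()
    difference = 1.0 - similarity
    if difference < max_difference:
        return corrected_text   # update the text information with the correction
    return recognized_text      # correction rejected: too different from the OCR result

# apply_correction("Zhang Sam", "Zhang San") -> "Zhang San"
# A correction that differs too much from the recognised text is left unapplied.
```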
Illustratively, the step of performing the living body detection of the person to be authenticated to obtain the living body detection result, which is used by the program instructions at the time of execution, includes: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring face images of personnel to be authenticated acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction so as to obtain a living body detection result.
Illustratively, the step of performing the living body detection of the person to be authenticated to obtain the living body detection result, which is used by the program instructions at the time of execution, includes: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring face images of personnel to be authenticated acquired in real time; capturing a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed from the acquired face image; and inputting the skin region image before the living body action is executed by the person to be authenticated and the skin region image after the living body action is executed into a skin elasticity classifier to obtain a living body detection result.
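A sketch of this skin-elasticity variant is shown below. The cropping step and the classifier are stubs: the disclosure does not fix a particular skin-region definition or model, so the names and the 0.5 decision threshold here are assumptions for illustration only.

```python
def crop_skin_region(face_image):
    # Stub: would crop a skin patch (e.g. cheek or forehead) from the face image.
    return face_image

def skin_elasticity_classifier(patch_before, patch_after):
    # Stub for a trained binary classifier; returns the probability that the
    # before/after pair comes from a live face.
    return 0.9

def liveness_by_skin_elasticity(frame_before_action, frame_after_action, threshold=0.5):
    before = crop_skin_region(frame_before_action)
    after = crop_skin_region(frame_after_action)
    return skin_elasticity_classifier(before, after) >= threshold
```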
The program instructions are, illustratively, further operable when executed to perform the steps of: acquiring a face image before a real person executes a living body action and a face image after the living body action, and acquiring a face image before a false person executes the living body action and a face image after the living body action; extracting a face image before the living body action of the real person and a skin area image after the living body action of the real person from the face image before the living body action of the real person and the face image after the living body action of the real person as positive sample images; extracting a face image before the false person executes the living body action and a skin area image after the false person executes the living body action from the face image before the false person executes the living body action and the face image after the false person executes the living body action as negative sample images; and training a classifier model using the positive sample image and the negative sample image to obtain a skin elasticity classifier.
Illustratively, the step of capturing an image of a skin area of the person to be authenticated before and after performing the living body action from the acquired face images, the program instructions being used at run-time, includes: selecting a face image before the living body action of the person to be authenticated and a face image after the living body action of the person to be authenticated from the collected face images; positioning a face in a face image before the living body action of the person to be authenticated is executed and a face in a face image after the living body action is executed by using a face detection model; positioning key points of a face in a face image before the living body action of a person to be authenticated is executed and a face image after the living body action is executed by using a face key point positioning model; and dividing the areas of the faces in the face image before the living body action of the person to be authenticated is executed and the face in the face image after the living body action is executed according to the face positions and the key point positions obtained by positioning, so as to obtain a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed.
The program instructions are, illustratively, further operable when executed to perform the steps of: obtaining a sample face image, wherein the positions of the faces and the positions of key points of the faces in the sample face image are marked; and training a neural network by using the sample face image to obtain a face detection model and a face key point positioning model.
Illustratively, the step of performing the living body detection of the person to be authenticated to obtain the living body detection result, which is used by the program instructions at the time of execution, includes: acquiring a face image acquired by a binocular camera aiming at the face of a person to be authenticated under the irradiation of structured light; and determining whether the face of the person to be authenticated belongs to a living body according to the face image so as to obtain a living body detection result.
According to another aspect of the present invention, there is provided an identity authentication device, including an information acquisition device, a processor, and a memory, wherein the information acquisition device is used for acquiring initial information of a person to be authenticated; stored in the memory are computer program instructions which, when executed by the processor, are configured to perform the steps of: acquiring personal identification information of a person to be authenticated, wherein the personal identification information is initial information or is acquired based on the initial information; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the information acquisition device comprises an image acquisition device and/or an input device.
The information acquisition device comprises an image acquisition device, and the identity authentication device further comprises a light source, wherein the light source is used for emitting detection light to the face of the person to be authenticated; the image acquisition device is further used for acquiring one or more illumination images of the face of the person to be authenticated under the irradiation of the detection light; and the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of the detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; and step S130: determining whether the person to be authenticated passes living body verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
The identity authentication device further comprises a light source and an image acquisition device, wherein the light source is used for emitting detection light to the face of the person to be authenticated; the image acquisition device is further used for acquiring one or more illumination images of the face of the person to be authenticated under the irradiation of the detection light; and the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of the detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; and step S130: determining whether the person to be authenticated passes living body verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
Illustratively, the pattern of the detection light is changed at least once during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments in time in the process of illuminating the face of the person to be authenticated.
Illustratively, the pattern of the detection light is randomly changed or preset during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light includes at least one of a color of the detection light, a position of a light emitting region, an intensity of the detection light, an irradiation angle of the detection light, a wavelength of the detection light, and a frequency of the detection light.
Illustratively, the detection light is obtained by dynamically changing the color and/or position of light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of the light emitted by the display screen is dynamically changed by changing the content displayed on the display screen so as to emit the detection light to the face of the person to be authenticated.
Illustratively, the dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen includes: dynamically changing the mode of light emitted by the display screen by changing the color of pixels and/or the position of the light emitting areas on the display screen.
Illustratively, the display screen is divided into a plurality of light emitting areas, and the mode of light emitted by the display screen is dynamically changed by changing the color of pixels and/or the position of the light emitting areas on the display screen, including: for each light-emitting area on the display screen, randomly selecting one RGB value in a preset RGB value range at a time as the color value of the light-emitting area for display.
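The per-region colour selection can be sketched as follows: for every light-emitting area of the display screen, one RGB value is drawn at random from a preset range and used as that area's colour for the next display update. The four-region split and the value range are assumptions for this example.

```python
import random

PRESET_RGB_RANGE = (64, 255)  # assumed preset range for each colour channel

def next_light_pattern(num_regions=4):
    lo, hi = PRESET_RGB_RANGE
    # One randomly selected (R, G, B) value per light-emitting area.
    return [tuple(random.randint(lo, hi) for _ in range(3)) for _ in range(num_regions)]

# Example: next_light_pattern() might return
# [(201, 90, 133), (77, 250, 180), (120, 64, 255), (99, 180, 70)]
```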
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to perform a corresponding action; step S150: acquiring a plurality of action images acquired for the face of the person to be authenticated; step S160: detecting the action performed by the person to be authenticated based on the plurality of action images; and step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action liveness verification result. Step S130, which the computer program instructions are used to perform when executed by the processor, includes: determining whether the person to be authenticated passes living body verification based on the illumination liveness verification result and the action liveness verification result, so as to obtain the living body detection result. The image acquisition device is further used for acquiring the plurality of action images.
Illustratively, step S170, which is performed by the processor when the computer program instructions are executed, comprises: if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is detected based on the plurality of action images acquired within a period of not more than the first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is not detected based on the plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
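A sketch of this timed action check is given below: frames are examined until either the requested action is detected or the first preset time elapses. detect_action() is a stub for the action-detection model, and the 10-second default is an assumed value.

```python
import time

def detect_action(frame):
    # Stub: returns the action detected in this frame (e.g. "blink") or None.
    return frame.get("action")

def action_liveness_check(frame_stream, requested_action, first_preset_time=10.0):
    deadline = time.monotonic() + first_preset_time
    for frame in frame_stream:  # action images acquired in real time
        if time.monotonic() > deadline:
            break  # first preset time exceeded without seeing the action
        if detect_action(frame) == requested_action:
            return True  # the face is judged to belong to a living body
    return False  # action not detected in time: not judged to be a living body
```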
The computer program instructions, when executed by the processor, are further used to perform the step of: counting once in each execution of steps S140 to S170 to obtain an action verification count. After step S170, the computer program instructions, when executed by the processor, are further used to perform the steps of: if the action liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting first error information and judging whether the action verification count has reached a first count threshold; if the action verification count has reached the first count threshold, proceeding to step S130; and if the action verification count has not reached the first count threshold, returning to step S140, or returning to step S110 in the case that step S110 is executed before step S140, wherein the first error information is used for prompting that the liveness verification of the person to be authenticated has failed.
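The retry logic can be sketched as below: one pass of steps S140 to S170 is represented by a stub, each pass increments the action verification count, and the flow falls through to step S130 once the count threshold is reached. The threshold of 3 and the error message text are assumptions.

```python
def run_action_verification():
    # Stub for one pass of steps S140-S170; returns True if the action liveness
    # verification result indicates a living body.
    return False

def action_verification_with_retries(first_count_threshold=3):
    attempts = 0
    while True:
        attempts += 1  # count once per pass of steps S140-S170
        if run_action_verification():
            return True, attempts
        # First error information: liveness verification failed.
        print("Liveness verification failed, please try again.")
        if attempts >= first_count_threshold:
            return False, attempts  # give up retrying and proceed to step S130
```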
Illustratively, prior to step S110, which is performed by the computer program instructions when executed by the processor, the computer program instructions when executed by the processor are further configured to perform the steps of: step S108: judging whether the image acquisition condition of the person to be authenticated meets the preset requirement, and if the image acquisition condition meets the preset requirement, turning to step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle of the person to be authenticated and the image acquisition device.
Illustratively, prior to step S108 or concurrently with step S108, the computer program instructions, when executed by the processor, are further used to perform: step S106: outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device and approach the image acquisition device.
Illustratively, step S106, which is used by the processor when executing the computer program instructions, comprises: and outputting the first prompt information in one or more of a voice form, an image form and a text form.
Illustratively, the identity authentication device further comprises a display device, wherein step S108, which the computer program instructions when executed by the processor, for performing, comprises: acquiring a real-time image acquired by aiming at the face of a person to be authenticated; outputting a preset area for calibrating the image acquisition conditions in real time and displaying a face area in a real-time image; judging whether the image acquisition conditions meet preset requirements according to the face areas detected in the real-time image, if the face areas are located in the preset areas and the proportion of the face areas in the real-time image is larger than a first preset proportion, determining that the image acquisition conditions meet the preset requirements, and if the face areas are not located in the preset areas or the proportion of the face areas in the real-time image is not larger than the first preset proportion, determining that the image acquisition conditions do not meet the preset requirements; the image acquisition device is also used for acquiring the real-time image; the display device is used for displaying the preset area and the face area.
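The geometric check used in this variant might be implemented as in the sketch below, where boxes are (x, y, width, height) tuples; the 0.15 value for the first preset proportion is an assumption.

```python
def box_inside(inner, outer):
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return ix >= ox and iy >= oy and ix + iw <= ox + ow and iy + ih <= oy + oh

def acquisition_condition_met(face_box, preset_area, image_size,
                              first_preset_proportion=0.15):
    image_width, image_height = image_size
    # Proportion of the real-time image occupied by the detected face area.
    face_proportion = (face_box[2] * face_box[3]) / float(image_width * image_height)
    return box_inside(face_box, preset_area) and face_proportion > first_preset_proportion

# Example call with a 1280x720 real-time image:
# acquisition_condition_met((420, 150, 440, 440), (300, 80, 680, 600), (1280, 720))
```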
The computer program instructions, when executed by the processor, are also for performing the steps of: if the proportion of the face area in the real-time image is not greater than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Illustratively, the identity authentication device further comprises a display device, wherein step S108, which the computer program instructions are used to perform when executed by the processor, includes: acquiring a real-time image acquired for the face of the person to be authenticated; outputting, in real time, a preset area for calibrating the image acquisition condition and a face area detected in the real-time image for display; and judging whether the image acquisition condition meets the preset requirement according to the face area: if the face area is located in the preset area and the proportion of the face area in the preset area is larger than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the preset area is not larger than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement; the image acquisition device is further used for acquiring the real-time image; and the display device is used for displaying the preset area and the face area.
The computer program instructions, when executed by the processor, are also for performing the steps of: if the proportion of the face area in the preset area is not greater than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to be close to the image acquisition device.
The computer program instructions, when executed by the processor, are also for performing the steps of: judging the relative position relationship between the face area and the preset area in real time; and outputting third acquisition prompt information in real time based on the relative position relation between the face area and the preset area so as to prompt the change of the relative position relation between the personnel to be authenticated and the image acquisition device so as to enable the face area to be close to the preset area.
Illustratively, step S108, which is performed by the processor when the computer program instructions are executed, comprises: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, otherwise, determining that the image acquisition condition does not meet the preset requirement.
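As a sketch of the posture-based check, the device can be treated as vertically placed when its pitch and roll angles (for example from an accelerometer) are within a small tolerance of upright; the 15-degree tolerance and the pose-dictionary format are assumptions.

```python
def is_vertically_placed(pitch_deg, roll_deg, tolerance_deg=15.0):
    # The device counts as vertically placed when both angles are close to zero.
    return abs(pitch_deg) <= tolerance_deg and abs(roll_deg) <= tolerance_deg

def acquisition_condition_met_by_pose(pose):
    # 'pose' is assumed to be a dict such as {"pitch": 3.2, "roll": -1.8}.
    return is_vertically_placed(pose["pitch"], pose["roll"])
```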
The computer program instructions, when executed by the processor, are further used to perform the step of: counting once in each execution of steps S110 to S120 to obtain an illumination verification count. After step S120, the computer program instructions, when executed by the processor, are further used to perform the steps of: if the illumination liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the illumination verification count has reached a second count threshold; if the illumination verification count has reached the second count threshold, proceeding to step S130; and if the illumination verification count has not reached the second count threshold, returning to step S108 or returning to step S110, wherein the second error information is used for prompting that the liveness verification of the person to be authenticated has failed.
Illustratively, prior to step S110 or during steps S110 and S120 when the computer program instructions are executed by the processor, the computer program instructions are further configured to perform the steps of: outputting second prompt information, wherein the second prompt information is used for prompting the personnel to be authenticated to remain motionless within a second preset time.
The second prompt information is countdown information corresponding to a second preset time.
Illustratively, the pattern of the detection light is not changed in the process of illuminating the face of the person to be authenticated, and the pattern of the detection light is a specific pattern selected after optimization according to experimental data. In this case, step S120 includes: determining, based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images and through a specific algorithm corresponding to this pattern of the detection light, whether the face of the person to be authenticated belongs to a living body, so as to obtain the illumination liveness verification result.
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising: the information acquisition device is used for acquiring initial information of the personnel to be authenticated; and transmission means for transmitting the initial information to the server, and receiving, from the server, authentication information on whether the identity of the person to be authenticated is legitimate, which the server has obtained by: acquiring personal identification information of a person to be authenticated, wherein the personal identification information is initial information or is acquired based on the initial information; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the information acquisition device comprises an image acquisition device and/or an input device.
Illustratively, the personal identification information is one or more of a certificate number, a name, a certificate face, a live acquisition face, a transformed value of the certificate number, a transformed value of the name, a transformed value of the certificate face, and a transformed value of the live acquisition face.
Illustratively, the computer program instructions, when executed by the processor, are further operable to perform the steps of: outputting indication information for indicating personnel to be authenticated to provide personnel information of a predetermined type; wherein the personal identification information is personnel information provided by personnel to be authenticated or is obtained based on the personnel information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live acquisition face, and the step of obtaining the personal identification information of the person to be authenticated, which the computer program instructions are used to perform when executed by the processor, includes: acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face, and a live acquisition face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
The information acquisition device comprises an image acquisition device, wherein the image acquisition device is further used for acquiring a certificate image and a face image of the person to be authenticated; the step of obtaining the personal identification information of the person to be authenticated, which the computer program instructions are used to perform when executed by the processor, includes: acquiring a certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result; and the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: acquiring a face image of the person to be authenticated; and performing living body detection by using the face image to obtain the living body detection result.
The identity authentication device further comprises an image acquisition device, wherein the image acquisition device is used for acquiring a certificate image and a face image of the person to be authenticated; the step of obtaining the personal identification information of the person to be authenticated, which the computer program instructions are used to perform when executed by the processor, includes: acquiring a certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result; and the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: acquiring a face image of the person to be authenticated; and performing living body detection by using the face image to obtain the living body detection result.
Illustratively, before the step of determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, which the computer program instructions are used to perform when executed by the processor, the computer program instructions are further used to perform the step of: performing an additional judging operation by using the certificate image and/or the face image to obtain an additional judging result; and the step of determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: determining whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result and the additional judging result.
Illustratively, the additional judging operation includes a certificate authenticity judging operation and/or a face consistency judging operation, and the additional judging result includes a certificate authenticity judging result and/or a face consistency judging result. The certificate authenticity judging operation includes: judging whether the certificate in the certificate image is a genuine certificate so as to obtain a certificate authenticity judging result. The face consistency judging operation includes: acquiring a certificate face of the person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judging result.
Illustratively, the step of acquiring a certificate face of the person to be authenticated from the certificate image, which the computer program instructions are used to perform when executed by the processor, includes: detecting a face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, the step of acquiring a certificate face of the person to be authenticated from the certificate image, which the computer program instructions are used to perform when executed by the processor, includes: performing character recognition on the certificate image to obtain text information in the certificate image; searching for matched certificate information in an authenticated certificate information database based on the text information in the certificate image; and determining the certificate face in the matched certificate information found by the search as the certificate face of the person to be authenticated.
Illustratively, the step of determining whether the certificate in the certificate image is a genuine certificate so as to obtain a certificate authenticity judging result, which the computer program instructions are used to perform when executed by the processor, includes: extracting image features of the certificate image; and inputting the image features into a trained certificate classifier to obtain the certificate authenticity judging result, wherein the certificate authenticity judging result is the confidence that the certificate in the certificate image is a genuine certificate.
Illustratively, the step of determining whether the certificate in the certificate image is a genuine certificate so as to obtain a certificate authenticity judging result, which the computer program instructions are used to perform when executed by the processor, includes: identifying, from the certificate image, an image block containing identification information of the certificate; and recognizing the certificate identification information in that image block to obtain the certificate authenticity judging result, wherein the certificate authenticity judging result is the confidence that the certificate in the certificate image is a genuine certificate.
Illustratively, in the step of determining whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result and the additional judging result, which the computer program instructions are used to perform when executed by the processor, each of the certificate authentication result, the living body detection result and the additional judging result has a respective weight coefficient.
The image acquisition device is further used for acquiring a pre-shot image in real time for the certificate of the person to be authenticated; the step of acquiring a certificate image of the person to be authenticated, which the computer program instructions are used to perform when executed by the processor, includes: acquiring a pre-shot image which is acquired in real time for the certificate of the person to be authenticated under the current shooting condition; evaluating image attributes of the pre-shot image in real time; when the evaluation value of the image attributes of the pre-shot image is smaller than a preset evaluation value threshold, generating pre-shooting prompt information according to the image attributes of the pre-shot image, so as to prompt the person to be authenticated to adjust the shooting condition of the certificate; and when the evaluation value of the image attributes of the pre-shot image is equal to or greater than the preset evaluation value threshold, saving the pre-shot image as the certificate image.
Illustratively, the step of determining whether the personal identification information is authenticated information for obtaining an information authentication result, as performed by the computer program instructions when executed by the processor, comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching in an authenticated certificate information database based on the text information in the certificate image to obtain a certificate authentication result; the certificate authentication result is the confidence that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the step of performing character recognition on the certificate image to obtain text information in the certificate image, which the computer program instructions are used to perform when executed by the processor, includes: positioning characters in the certificate image to obtain an image block containing the characters; and recognizing the characters in the image block containing the characters to obtain the text information in the certificate image.
The computer program instructions, when executed by the processor, are further used to perform the step of: correcting the image block containing the characters into a horizontal orientation.
Illustratively, after the step of recognizing the characters in the image block containing the characters to obtain the text information in the certificate image, the computer program instructions, when executed by the processor, are further used to perform the steps of: outputting the text information in the certificate image for viewing by a user; receiving text correction information input by the user; comparing the text to be corrected indicated by the text correction information with the corresponding text in the text information in the certificate image; and if the difference between the text to be corrected indicated by the text correction information and the corresponding text in the text information in the certificate image is smaller than a preset difference threshold, updating the text information in the certificate image with the text correction information.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: generating a living body action instruction, wherein the living body action instruction is used for instructing the person to be authenticated to perform a corresponding living body action; acquiring face images of the person to be authenticated acquired in real time; detecting a face in the face images; and judging whether the face in the face images performs the living body action indicated by the living body action instruction, so as to obtain the living body detection result.
Illustratively, the step of performing the living body detection of the person to be authenticated to obtain the living body detection result when the computer program instructions are executed by the processor further comprises: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring face images of personnel to be authenticated acquired in real time; capturing a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed from the acquired face image; and inputting the skin region image before the living body action is executed by the person to be authenticated and the skin region image after the living body action is executed into a skin elasticity classifier to obtain a living body detection result.
The computer program instructions, when executed by the processor, are also for performing the steps of: acquiring a face image before a real person executes a living body action and a face image after the living body action, and acquiring a face image before a false person executes the living body action and a face image after the living body action; extracting a face image before the living body action of the real person and a skin area image after the living body action of the real person from the face image before the living body action of the real person and the face image after the living body action of the real person as positive sample images; extracting a face image before the false person executes the living body action and a skin area image after the false person executes the living body action from the face image before the false person executes the living body action and the face image after the false person executes the living body action as negative sample images; and training a classifier model using the positive sample image and the negative sample image to obtain a skin elasticity classifier.
Illustratively, the steps for capturing an image of a skin area of a person to be authenticated before and after performing a living body action from the acquired face images, when the computer program instructions are executed by the processor, comprise: selecting a face image before the living body action of the person to be authenticated and a face image after the living body action of the person to be authenticated from the collected face images; positioning a face in a face image before the living body action of the person to be authenticated is executed and a face in a face image after the living body action is executed by using a face detection model; positioning key points of a face in a face image before the living body action of a person to be authenticated is executed and a face image after the living body action is executed by using a face key point positioning model; and dividing the areas of the faces in the face image before the living body action of the person to be authenticated is executed and the face in the face image after the living body action is executed according to the face positions and the key point positions obtained by positioning, so as to obtain a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed.
The computer program instructions, when executed by the processor, are also for performing the steps of: obtaining a sample face image, wherein the positions of the faces and the positions of key points of the faces in the sample face image are marked; and training a neural network by using the sample face image to obtain a face detection model and a face key point positioning model.
The identity authentication device further comprises a structured light source and a binocular camera, wherein the structured light source is used for emitting structured light to the face of the person to be authenticated; the binocular camera is used for acquiring a face image of the face of the person to be authenticated under the irradiation of the structured light; and the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: acquiring the face image acquired by the binocular camera for the face of the person to be authenticated under the irradiation of the structured light; and determining whether the face of the person to be authenticated belongs to a living body according to the face image, so as to obtain the living body detection result.
According to another aspect of the present invention, there is provided an identity authentication device including a transmission device, a processor, and a memory, wherein the transmission device is configured to receive initial information of a person to be authenticated from a client, and transmit authentication information about whether an identity of the person to be authenticated is legal to the client; stored in the memory are computer program instructions which, when executed by the processor, are configured to perform the steps of: acquiring personal identification information of a person to be authenticated, wherein the personal identification information is initial information or is acquired based on the initial information; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on a person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of detection light; step S120: determining whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; and step S130: determining whether the person to be authenticated passes living body verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
Illustratively, the pattern of the detection light is changed at least once during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light changes between every two consecutive moments in time in the process of illuminating the face of the person to be authenticated.
Illustratively, the pattern of the detection light is randomly changed or preset during irradiation of the face of the person to be authenticated.
Illustratively, the pattern of the detection light includes at least one of a color of the detection light, a position of a light emitting region, an intensity of the detection light, an irradiation angle of the detection light, a wavelength of the detection light, and a frequency of the detection light.
Illustratively, the detection light is obtained by dynamically changing the color and/or position of light emitted towards the face of the person to be authenticated.
Illustratively, the detection light is emitted by the display screen, the detection light being obtained by: the mode of the light emitted by the display screen is dynamically changed by changing the content displayed on the display screen so as to emit the detection light to the face of the person to be authenticated.
Illustratively, the dynamically changing the mode of light emitted by the display screen by changing the content displayed on the display screen includes: dynamically changing the mode of light emitted by the display screen by changing the color of pixels and/or the position of the light emitting areas on the display screen.
Illustratively, the display screen is divided into a plurality of light emitting areas, and the mode of light emitted by the display screen is dynamically changed by changing the color of pixels and/or the position of the light emitting areas on the display screen, including: for each light-emitting area on the display screen, randomly selecting one RGB value in a preset RGB value range at a time as the color value of the light-emitting area for display.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the computer program instructions are used to perform when executed by the processor, includes: step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to perform a corresponding action; step S150: acquiring a plurality of action images acquired for the face of the person to be authenticated; step S160: detecting the action performed by the person to be authenticated based on the plurality of action images; and step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action liveness verification result. Step S130, which the computer program instructions are used to perform when executed by the processor, includes: determining whether the person to be authenticated passes living body verification based on the illumination liveness verification result and the action liveness verification result, so as to obtain the living body detection result. The transmission device is further used for sending the action instruction to the client and receiving the plurality of action images from the client.
Illustratively, step S170, which is performed by the processor when the computer program instructions are executed, comprises: if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is detected based on the plurality of action images acquired within a period of not more than the first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is not detected based on the plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
The computer program instructions, when executed by the processor, are further used to perform the step of: counting once in each execution of steps S140 to S170 to obtain an action verification count. After step S170, the computer program instructions, when executed by the processor, are further used to perform the steps of: if the action liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting first error information and judging whether the action verification count has reached a first count threshold; if the action verification count has reached the first count threshold, proceeding to step S130; and if the action verification count has not reached the first count threshold, returning to step S140, or returning to step S110 in the case that step S110 is executed before step S140, wherein the first error information is used for prompting that the liveness verification of the person to be authenticated has failed. The transmission device is further used for sending the first error information to the client for output by the client.
Illustratively, prior to step S110, which is performed by the computer program instructions when executed by the processor, the computer program instructions when executed by the processor are further configured to perform the steps of: step S108: judging whether the image acquisition condition of the person to be authenticated meets the preset requirement, and if the image acquisition condition meets the preset requirement, turning to step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle of the person to be authenticated and the image acquisition device.
Illustratively, prior to step S108 or concurrently with step S108, the computer program instructions, when executed by the processor, are further used to perform: step S106: outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device and approach the image acquisition device; the transmission device is further used for sending the first prompt information to the client for output by the client.
Illustratively, step S106, which is used by the processor when executing the computer program instructions, comprises: and outputting the first prompt information in one or more of a voice form, an image form and a text form.
Illustratively, the identity authentication device further comprises a display device, wherein step S108, which the computer program instructions when executed by the processor, for performing, comprises: acquiring a real-time image acquired by aiming at the face of a person to be authenticated; outputting a preset area for calibrating the image acquisition conditions in real time and displaying a face area in a real-time image; judging whether the image acquisition conditions meet preset requirements according to the face areas detected in the real-time image, if the face areas are located in the preset areas and the proportion of the face areas in the real-time image is larger than a first preset proportion, determining that the image acquisition conditions meet the preset requirements, and if the face areas are not located in the preset areas or the proportion of the face areas in the real-time image is not larger than the first preset proportion, determining that the image acquisition conditions do not meet the preset requirements; the transmission device is also used for receiving the real-time image from the client and sending the preset area and the face area to the client for output by the client for display.
The computer program instructions, when executed by the processor, are further used to perform the step of: if the proportion of the face area in the real-time image is not greater than the first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device; the transmission device is further used for sending the first acquisition prompt information to the client for output by the client.
Illustratively, the identity authentication device further comprises a display device, wherein step S108, which the computer program instructions are used to perform when executed by the processor, includes: acquiring a real-time image acquired for the face of the person to be authenticated; outputting, in real time, a preset area for calibrating the image acquisition condition and a face area detected in the real-time image for display; and judging whether the image acquisition condition meets the preset requirement according to the face area: if the face area is located in the preset area and the proportion of the face area in the preset area is larger than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the preset area is not larger than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement; the transmission device is further used for receiving the real-time image from the client and sending the preset area and the face area to the client for display by the client.
The computer program instructions, when executed by the processor, are further used to perform the step of: if the proportion of the face area in the preset area is not greater than the second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device; the transmission device is further used for sending the second acquisition prompt information to the client for output by the client.
The computer program instructions, when executed by the processor, are also for performing the steps of: judging the relative position relationship between the face area and the preset area in real time; outputting third acquisition prompt information in real time based on the relative position relation between the face area and the preset area so as to prompt the change of the relative position relation between the personnel to be authenticated and the image acquisition device so as to enable the face area to be close to the preset area; the transmission device is also used for sending the third acquisition prompt information to the client for output by the client.
Illustratively, step S108, which is performed by the processor when the computer program instructions are executed, comprises: acquiring attitude information of an image acquisition device; and judging whether the image acquisition device is in a vertical placement state according to the posture information, if so, determining that the image acquisition condition meets the preset requirement, otherwise, determining that the image acquisition condition does not meet the preset requirement.
The computer program instructions, when executed by the processor, are further used to perform the step of: counting once in each execution of steps S110 to S120 to obtain an illumination verification count. After step S120, the computer program instructions, when executed by the processor, are further used to perform the steps of: if the illumination liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the illumination verification count has reached a second count threshold; if the illumination verification count has reached the second count threshold, proceeding to step S130; and if the illumination verification count has not reached the second count threshold, returning to step S108 or returning to step S110, wherein the second error information is used for prompting that the liveness verification of the person to be authenticated has failed. The transmission device is further used for sending the second error information to the client for output by the client.
Illustratively, prior to step S110 or during steps S110 and S120 when the computer program instructions are executed by the processor, the computer program instructions are further configured to perform the steps of: outputting second prompt information, wherein the second prompt information is used for prompting the personnel to be authenticated to remain motionless within a second preset time; the transmission device is also used for sending the second prompt information to the client for output by the client.
The second prompt information is countdown information corresponding to a second preset time.
Illustratively, while the face of the person to be authenticated is being illuminated, the pattern of the detection light does not change, the pattern being a specific pattern selected after optimization on experimental data, and step S120 comprises: determining, based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images and by means of a specific algorithm corresponding to the detection light pattern, whether the face of the person to be authenticated belongs to a living body, so as to obtain an illumination-based liveness verification result.
Illustratively, the personal identification information is one or more of a certificate number, a name, a certificate face, a live acquisition face, a transformed value of the certificate number, a transformed value of the name, a transformed value of the certificate face, and a transformed value of the live acquisition face.
Illustratively, the computer program instructions, when executed by the processor, are further operable to perform the steps of: outputting indication information for instructing the person to be authenticated to provide person information of a predetermined type; the transmission device is also used for sending the indication information to the client to be output by the client; wherein the personal identification information is the person information provided by the person to be authenticated, or is obtained based on that person information.
Illustratively, the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live acquisition face, and the step of obtaining personal identification information of a person to be authenticated, performed when the computer program instructions are executed by the processor, comprises: acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face, and a live acquisition face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
Illustratively, the step of obtaining personal identification information of a person to be authenticated, performed when the computer program instructions are executed by the processor, comprises: acquiring a certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result; and the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: acquiring a face image of the person to be authenticated; and performing living body detection using the face image to obtain a living body detection result.
Illustratively, before the step of determining whether the identity of the person to be authenticated is legal based at least on the information authentication result and the living body detection result, performed when the computer program instructions are executed by the processor, the computer program instructions are further configured to perform the steps of: performing an additional judging operation using the certificate image and/or the face image to obtain an additional judging result; the step of determining whether the identity of the person to be authenticated is legal based at least on the information authentication result and the living body detection result, performed when the computer program instructions are executed by the processor, comprises: determining whether the identity of the person to be authenticated is legal according to the certificate authentication result, the living body detection result, and the additional judging result.
Illustratively, the additional judging operation includes a certificate authenticity judging operation and/or a face consistency judging operation, and the additional judging result includes a certificate authenticity judging result and/or a face consistency judging result. The certificate authenticity judging operation includes: judging whether the certificate in the certificate image is a genuine certificate, so as to obtain a certificate authenticity judging result. The face consistency judging operation includes: acquiring the certificate face of the person to be authenticated from the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judging result.
Illustratively, the step of acquiring the certificate face of the person to be authenticated from the certificate image, performed when the computer program instructions are executed by the processor, comprises: detecting a face from the certificate image to obtain the certificate face of the person to be authenticated.
Illustratively, the step of acquiring the certificate face of the person to be authenticated from the certificate image, performed when the computer program instructions are executed by the processor, comprises: performing character recognition on the certificate image to obtain text information in the certificate image; searching for matching certificate information in an authenticated certificate information database based on the text information in the certificate image; and determining the certificate face in the matching certificate information found as the certificate face of the person to be authenticated.
Illustratively, the step of judging whether the certificate in the certificate image is a genuine certificate, performed when the computer program instructions are executed by the processor, to obtain a certificate authenticity judging result, comprises: extracting image features of the certificate image; and inputting the image features into a trained certificate classifier to obtain the certificate authenticity judging result, wherein the certificate authenticity judging result is the confidence that the certificate in the certificate image is a genuine certificate.
Illustratively, the step of judging whether the certificate in the certificate image is a genuine certificate, performed when the computer program instructions are executed by the processor, to obtain a certificate authenticity judging result, comprises: identifying an image block containing certificate identification information from the certificate image; and recognizing the certificate identification information in that image block to obtain the certificate authenticity judging result, wherein the certificate authenticity judging result is the confidence that the certificate in the certificate image is a genuine certificate.
Illustratively, in executing the step of determining whether the identity of the person to be authenticated is legal based on the certificate authentication result, the living body detection result, and the additional judging result, performed when the computer program instructions are executed by the processor, each of the certificate authentication result, the living body detection result, and the additional judging result has its own weight coefficient.
Illustratively, the step of acquiring a certificate image of the person to be authenticated, performed when the computer program instructions are executed by the processor, comprises: acquiring a pre-shot image captured in real time of the certificate of the person to be authenticated under the current shooting condition; evaluating the image attributes of the pre-shot image in real time; when the evaluation value of the image attributes of the pre-shot image is smaller than a preset evaluation value threshold, generating pre-shot prompt information according to the image attributes of the pre-shot image to prompt the person to be authenticated to adjust the shooting condition of the certificate; and when the evaluation value of the image attributes of the pre-shot image is equal to or greater than the preset evaluation value threshold, saving the pre-shot image as the certificate image; the transmission device is also used for receiving the pre-shot image.
Illustratively, the step of determining whether the personal identification information is authenticated information for obtaining an information authentication result, as performed by the computer program instructions when executed by the processor, comprises: performing character recognition on the certificate image to obtain character information in the certificate image; searching in an authenticated certificate information database based on the text information in the certificate image to obtain a certificate authentication result; the certificate authentication result is the confidence that the certificate information in the certificate image is authenticated certificate information.
Illustratively, the step of performing text recognition on the certificate image to obtain text information in the certificate image, performed when the computer program instructions are executed by the processor, comprises: locating the text in the certificate image to obtain an image block containing the text; and recognizing the text in that image block to obtain the text information in the certificate image.
The computer program instructions, when executed by the processor, are further operable to perform the step of: correcting the image block containing the text into a horizontal orientation.
Illustratively, after the step of recognizing the text in the image block containing the text to obtain the text information in the certificate image, performed when the computer program instructions are executed by the processor, the computer program instructions are further configured to perform the steps of: outputting the text information in the certificate image for viewing by the user; receiving text correction information input by the user; comparing the text to be corrected, as indicated by the text correction information, with the corresponding text in the text information in the certificate image; and if the difference between the text to be corrected and the corresponding text in the certificate image is smaller than a preset difference threshold, updating the text information in the certificate image with the text correction information.
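The comparison of the user's correction against the recognized text can be carried out with an ordinary edit distance. The sketch below is one plausible realization; the Levenshtein metric and the difference threshold of 3 are assumptions, since the patent only requires that the difference be smaller than a preset threshold.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming (single-row variant)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,       # insertion
                                     prev + (ca != cb))   # substitution
    return dp[-1]

def apply_correction(recognized_text: str, corrected_text: str,
                     difference_threshold: int = 3) -> str:
    """Accept the user's correction only if it stays close to the OCR result;
    otherwise keep the originally recognized text."""
    if levenshtein(recognized_text, corrected_text) < difference_threshold:
        return corrected_text
    return recognized_text
```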
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: generating a living body action instruction, wherein the living body action instruction is used to instruct the person to be authenticated to perform a corresponding living body action; acquiring face images of the person to be authenticated collected in real time; detecting the face in the face images; and judging whether the face in the face images performs the living body action indicated by the living body action instruction, so as to obtain the living body detection result.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, further comprises: generating a living body action instruction, wherein the living body action instruction is used to instruct the person to be authenticated to perform a corresponding living body action; acquiring face images of the person to be authenticated collected in real time; capturing, from the collected face images, a skin area image from before the person to be authenticated performs the living body action and a skin area image from after the living body action is performed; and inputting these two skin area images into a skin elasticity classifier to obtain the living body detection result.
The computer program instructions, when executed by the processor, are also used to perform the steps of: acquiring face images of a real person before and after performing a living body action, and acquiring face images of a fake person before and after performing the living body action; extracting, from the real person's face images before and after the living body action, skin area images before and after the living body action as positive sample images; extracting, from the fake person's face images before and after the living body action, skin area images before and after the living body action as negative sample images; and training a classifier model using the positive sample images and the negative sample images to obtain the skin elasticity classifier.
Illustratively, the step of capturing, from the collected face images, skin area images of the person to be authenticated from before and after the living body action is performed, performed when the computer program instructions are executed by the processor, comprises: selecting, from the collected face images, a face image from before the person to be authenticated performs the living body action and a face image from after the living body action is performed; locating the face in each of these two face images using a face detection model; locating the key points of the face in each of these two face images using a face key point locating model; and segmenting the face regions in the two face images according to the located face positions and key point positions, so as to obtain the skin area image from before the living body action is performed and the skin area image from after the living body action is performed.
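A rough sketch of the skin-region cropping described above is given below. It assumes a 5-point landmark layout (eyes, nose tip, mouth corners) and crops a cheek patch; the face detection and key point locating models are represented by caller-supplied functions, since the patent does not specify particular models.

```python
import numpy as np

def crop_skin_region(image, face_box, landmarks):
    """Cut out a left-cheek patch as the 'skin area'. Assumes landmarks in the
    order (left eye, right eye, nose tip, left mouth corner, right mouth corner)
    and face_box starting with the face's left x coordinate."""
    left_eye, _, nose, left_mouth, _ = landmarks
    x1 = int(face_box[0])          # left edge of the face box
    x2 = int(nose[0])              # up to the nose column
    y1 = int(left_eye[1])          # from the eye row
    y2 = int(left_mouth[1])        # down to the mouth row
    return image[y1:y2, x1:x2]

def skin_pair(frames_before, frames_after, detect_face, locate_landmarks):
    """Return (skin_before, skin_after) patches for the skin elasticity classifier;
    detect_face / locate_landmarks stand in for the detection and key point models."""
    patches = []
    for frame in (frames_before[-1], frames_after[0]):
        box = detect_face(frame)
        lms = locate_landmarks(frame, box)
        patches.append(crop_skin_region(frame, box, lms))
    return tuple(patches)
```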
The computer program instructions, when executed by the processor, are also for performing the steps of: obtaining a sample face image, wherein the positions of the faces and the positions of key points of the faces in the sample face image are marked; and training a neural network by using the sample face image to obtain a face detection model and a face key point positioning model.
Illustratively, the step of performing living body detection on the person to be authenticated to obtain a living body detection result, performed when the computer program instructions are executed by the processor, comprises: acquiring a face image captured by a binocular camera of the face of the person to be authenticated under structured-light illumination; and determining, from that face image, whether the face of the person to be authenticated belongs to a living body, so as to obtain the living body detection result.
According to the identity authentication method, device, and storage medium of the embodiments of the present invention, authenticated-information judgment is combined with living body detection to determine whether the identity of the person to be authenticated is legal. Compared with conventional identity authentication based on a password or a certificate alone, the authentication result is therefore more accurate, the security of user authentication can be improved, and the rights and interests of users can be effectively protected.
Drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following more detailed description of embodiments of the present invention, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and constitute a part of this specification; they illustrate the invention together with its embodiments and do not limit the invention. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 shows a schematic block diagram of an example electronic device for implementing identity authentication methods and apparatus in accordance with embodiments of the invention;
FIG. 2 shows a schematic flow chart of an identity authentication method according to one embodiment of the invention;
FIG. 3 shows a schematic flow chart of an identity authentication method according to another embodiment of the invention;
FIG. 4 shows a schematic flow chart of an identity authentication method according to another embodiment of the invention;
FIG. 5 shows a schematic flow chart of the training steps of the skin elasticity classifier according to one embodiment of the invention;
FIG. 6 shows a schematic block diagram of an example electronic device for implementing an identity authentication method and apparatus in accordance with another embodiment of the invention;
FIG. 7 shows a schematic flow chart of a living body detection step according to one embodiment of the invention;
FIG. 8 shows a schematic flow chart of a living body detection step according to another embodiment of the present invention;
FIG. 9 shows an implementation flow of the living body detection step according to one embodiment of the present invention;
FIG. 10 shows a schematic block diagram of an identity authentication device according to one embodiment of the present invention; and
FIG. 11 shows a schematic block diagram of an identity authentication system according to one embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by a person skilled in the art, based on the embodiments of the invention described in this application and without inventive effort, shall fall within the scope of the invention.
In order to solve the above-mentioned problems, the embodiments of the present invention provide an identity authentication method and apparatus, and a storage medium. The identity authentication method, the device and the storage medium are used for carrying out identity authentication by combining personal identification information recognition and face recognition so as to determine whether the identity of the person to be authenticated is legal or not, namely whether the person to be authenticated has permission to carry out subsequent operations such as consumption payment and the like. The identity authentication method and the device can conveniently and safely identify the identity of the personnel to be authenticated, are a safe interactive authentication mode, and can be well applied to the technical fields of electronic commerce, mobile payment, bank account opening and the like.
First, an example electronic device 100 for implementing an identity authentication method and apparatus and a storage medium according to an embodiment of the present invention is described with reference to fig. 1.
As shown in fig. 1, electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 1 are exemplary only and not limiting, as the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processor 102 to implement client functions and/or other desired functions in embodiments of the present invention as described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.
The output device 108 may output various information (e.g., images and/or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may capture document images (including video frames) and/or face images (including video frames) and store the captured images in the storage device 104 for use by other components. The image capture device 110 may be a camera. It should be understood that the image capturing apparatus 110 is merely an example, and the electronic device 100 may not include the image capturing apparatus 110. In this case, the document image and/or the face image may be acquired by other image acquisition means, and the acquired image may be transmitted to the electronic device 100.
By way of example, example electronic devices for implementing the identity authentication methods and apparatus according to embodiments of the invention may be implemented on devices such as personal computers or remote servers.
The embodiment of the invention provides an identity authentication method, which comprises the following steps: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on the personnel to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not according to at least the information authentication result and the living body detection result.
Next, an identity authentication method according to an embodiment of the present invention will be described with reference to fig. 2. Fig. 2 shows a schematic flow chart of an identity authentication method 200 according to one embodiment of the invention. As shown in fig. 2, the identity authentication method 200 includes the following steps.
In step S210, personal identification information of a person to be authenticated is acquired.
The identity authentication method described herein will be explained mainly for the case where the personal identification information of the person to be authenticated comes from the certificate in a certificate image. In a practical use scenario, however, the personal identification information is not limited to this; it may be, for example, one or more of a certificate number, a name, a certificate face (i.e., a face image detected from the certificate image), and a live acquisition face (i.e., a face image captured of the face of the person to be authenticated using an image acquisition device), and/or one or more of a transformed value of the certificate number, a transformed value of the name, a transformed value of the certificate face, and a transformed value of the live acquisition face, each transformed value being, for example, the output of some hash algorithm. The type of personal identification information can be selected according to the specific use scenario, and the present invention does not specifically limit this choice.
Step S210 may include: and acquiring a certificate image of the person to be authenticated. The credential information in the credential image can be considered personal identification information.
Credentials as described herein may include, but are not limited to, identity cards, driver's licenses, passports, social security cards, and the like.
The document image may be an image captured for a document of a person to be authenticated. The document image may be an original image acquired by an image acquisition device such as a camera, or an image obtained by preprocessing the original image.
The credential image can be sent to the electronic device 100 by a client device (e.g., a mobile terminal including a camera, a remote video teller machine (Video Teller Machine, VTM), etc.) for processing by the processor 102 of the electronic device 100, or can be captured by an image capture device 110 (e.g., a camera) included with the electronic device 100 and transferred to the processor 102 for processing.
According to an embodiment of the present invention, acquiring a certificate image of a person to be authenticated may include: acquiring a pre-shooting image which is acquired in real time aiming at credentials of a person to be authenticated under the current shooting condition; evaluating image attributes of the pre-shot image in real time; when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold, generating pre-shot prompt information according to the image attribute of the pre-shot image, and prompting a person to be authenticated to adjust the shooting condition of the certificate; and when the evaluation value of the image attribute of the pre-shot image is equal to or greater than a preset evaluation value threshold, saving the pre-shot image as a document image.
The pre-shooting refers to a process of starting a shooting mode of an image acquisition device (such as a camera of a mobile terminal of a mobile phone, a tablet computer, etc.) and placing a certificate to be shot in a shooting range of the image acquisition device to perform shooting and framing (photo shooting is not actually completed).
Alternatively, the captured pre-shot image may be subjected to quality evaluation. For example, during pre-shooting, the image attributes of the pre-shot image obtained under the current shooting conditions may be calculated in real time. Illustratively, the shooting conditions may include, but are not limited to, one or more of the following: the position of the certificate, the angle of the certificate, the shooting position of the image acquisition device, the shooting angle of the image acquisition device, and the like. Illustratively, the image attributes may include, but are not limited to, one or more of the following: certificate blurriness, certificate outline, certificate key parts, certificate occlusion, certificate size, certificate text clarity, and the like. When the evaluation value of the image attributes is smaller than the preset evaluation value threshold, the pre-shot image is considered unqualified. In that case, corresponding pre-shot prompt information can be generated from the image attributes and output, prompting the user to adjust the angle, position, and so on of the certificate or of the image acquisition device, until a qualified pre-shot image is captured. A qualified pre-shot image is one whose image-attribute evaluation value is equal to or greater than the preset evaluation value threshold. When a qualified pre-shot image is captured in the pre-shooting mode, it may be saved as the certificate image acquired in step S210 for subsequent authenticated-certificate judgment and the like. Alternatively, when certain image attributes of the pre-shot image are unqualified, the pre-shot image may be adjusted so that it becomes qualified, in order to obtain the desired certificate image. For example, when the size of the certificate in the pre-shot image is unqualified, operations such as cropping and scaling may be performed on the pre-shot image so that the certificate size becomes qualified.
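One possible shape of this pre-shooting loop is sketched below. The capture, evaluation, and prompt functions are placeholders supplied by the caller, and the 0.8 evaluation threshold is an assumed value rather than one taken from the patent.

```python
def acquire_certificate_image(capture_frame, evaluate_attributes, show_prompt,
                              evaluation_threshold=0.8):
    """Loop over pre-shot frames until one scores above the threshold.
    evaluate_attributes is assumed to return (score, per-attribute details),
    e.g. blurriness, occlusion, size; show_prompt turns the failing attributes
    into user-facing guidance such as 'move closer' or 'reduce glare'."""
    while True:
        frame = capture_frame()                      # real-time pre-shot frame
        score, details = evaluate_attributes(frame)  # overall evaluation value
        if score >= evaluation_threshold:
            return frame                             # saved as the certificate image
        show_prompt(details)                         # prompt to adjust shooting conditions
```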
In step S220, it is determined whether the personal identification information is authenticated information, to obtain an information authentication result.
In the case where the individual identification information is certificate information in a certificate image, the authenticated information may be authenticated certificate information, and the information authentication result may be a certificate authentication result. That is, in the case where the personal identification information is the certificate information in the certificate image, step S220 may include: and judging whether the certificate information in the certificate image is authenticated certificate information or not so as to obtain a certificate authentication result.
For example, some certificate information related to the certificate of the person to be authenticated, such as the identification number, the name, etc. on the identification card, may be identified from the certificate image, and then it may be determined whether or not these certificate information are authenticated certificate information, that is, whether or not the certificate of the person to be authenticated is authenticated.
The authenticated certificate information about authenticated certificates may be stored in a database, referred to herein as the authenticated certificate information database. The certificate information identified from the certificate image may be searched for in this database; that is, it is compared with the authenticated certificate information in the authenticated certificate information database to determine whether the certificate in the certificate image is an authenticated certificate.
In one example, the authenticated credential information database may be stored locally, such as in a storage (e.g., storage 104 shown in fig. 1) of a server or client device for implementing the identity authentication method and apparatus.
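For the local-storage case just described, the lookup could reduce to a simple database query. The sketch below uses SQLite with an assumed file name, table name, and columns; these are illustrative only and not specified in the patent.

```python
import sqlite3

def is_authenticated_certificate(id_number: str, name: str,
                                 db_path: str = "authenticated_certificates.db") -> bool:
    """Look up the recognized certificate fields in a local authenticated-certificate
    store; the schema and file name are assumptions for illustration."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT 1 FROM certificates WHERE id_number = ? AND name = ?",
            (id_number, name),
        ).fetchone()
    return row is not None
```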
In another example, the authenticated certificate information database may be stored in a server of some public service system (e.g., a public security system). The server, client, or other device used to implement the identity authentication method and apparatus can communicate with the server of the public service system over a network connection and search for certificate information on that server. For example, the public security network may be searched based on the certificate image acquired in step S210; if certificate information (record information) matching the certificate information identified from the certificate image is found, it may be determined that the certificate in the certificate image is an authenticated, that is, legitimate, certificate.
In another example, the personal identification information is an identity card number. The person to be authenticated can input the identity card number into the identity authentication device. The identity authentication device may then search an authenticated person information database, which may store the identity card numbers of a number of authenticated persons, based on the received identity card number. If a matching identity card number exists, the person to be authenticated is an authenticated person and the personal identification information is authenticated information. Similarly, the authenticated person information database described above may be stored locally or in a server of a public service system.
Those skilled in the art will understand that in the embodiment where the personal identification information is a name, a certificate face, a field collected face, a conversion value of a certificate number, a conversion value of a name, a conversion value of a certificate face, or a conversion value of a field collected face, the implementation manner of the identity authentication method is similar to that in the embodiment where the personal identification information is an identity card number, and will not be repeated.
In step S230, a living body detection is performed on the person to be authenticated to obtain a living body detection result.
Step S230 may include: acquiring a face image of a person to be authenticated; and performing living body detection by using the face image to obtain a living body detection result.
The face image may be an image acquired for the face of the person to be authenticated. The face image may be an original image acquired by an image acquisition device such as a camera, or may be an image obtained after preprocessing the original image.
The face image may be transmitted by a client device (e.g., a mobile terminal including a camera) to the electronic device 100 for processing by the processor 102 of the electronic device 100, or may be acquired by an image acquisition device 110 (e.g., a camera) included in the electronic device 100 and transmitted to the processor 102 for processing. Step S230 may be implemented using any existing or future possible living detection method, which is not limited by the present invention. Illustratively, when the face in the face image is a real face, the person to be authenticated is considered to be a living body, the living body detection result may be 1, and when the face in the face image is a false face, the person to be authenticated is considered not a living body, and the living body detection result may be 0.
In step S240, it is determined whether the identity of the person to be authenticated is legal or not based on at least the information authentication result and the living body detection result.
In one example, the information authentication result may be one of 1 and 0, where 1 indicates that the personal identification information of the person to be authenticated is authenticated information and 0 indicates that it is not. Similarly, the living body detection result may be one of 1 and 0, where 1 indicates that the person to be authenticated is a living body (i.e., passes liveness verification) and 0 indicates that the person to be authenticated is not a living body (i.e., fails liveness verification). For example, if either the information authentication result or the living body detection result is 0, the identity of the person to be authenticated may be considered illegal, i.e., authentication of the person to be authenticated fails, in which case the person to be authenticated may be prohibited from performing subsequent business operations such as online transactions or opening a bank account.
In another example, the information authentication result may be any value in the range of [0,1], indicating a confidence that the personal identification information of the person to be authenticated is authenticated information. In this case, an operation such as a weighted average may be performed on the information authentication result and the living body detection result, and whether the person to be authenticated is a legitimate person may be measured based on the operation result. Similar embodiments will be described in detail below and are not repeated here.
It should be understood that the order of execution of the steps of the identity authentication method 200 shown in fig. 2 is merely an example and not a limitation, and for example, step S230 may be executed before step S210, between step S210 and step S220, or simultaneously with step S210 or S220.
According to the identity authentication method provided by the embodiments of the present invention, whether the identity of the person to be authenticated is legal is determined by combining authenticated-information judgment with living body detection. Compared with a conventional mode of performing identity authentication based on a password or a certificate alone, the authentication result of the identity authentication method provided by the embodiments of the present invention is therefore more accurate, the security of user authentication can be improved, and the rights and interests of users can be effectively protected. The method can be well applied to various fields related to identity authentication, such as electronic commerce, mobile payment, or banking.
The identity authentication method according to the embodiment of the present invention may be implemented in a device, apparatus or system having a memory and a processor, for example.
The identity authentication method according to the embodiment of the invention can be deployed at an image acquisition end, for example, at an image acquisition end of a financial system such as a bank management system or at a mobile terminal such as a smart phone, a tablet computer and the like. Alternatively, the identity authentication method according to the embodiment of the present invention may be distributed and deployed at the server side (or cloud side) and the client side. For example, personal identification information (such as collecting a certificate image or receiving a certificate number, a name and the like input by a person to be authenticated) and/or collecting a face image may be collected at a client, and the client transmits the collected personal identification information and/or the collected face image to a server (or cloud) to perform identity authentication by the server (or cloud).
According to an embodiment of the present invention, before step S210, the identity authentication method 200 may further include: outputting indication information for indicating personnel to be authenticated to provide personnel information of a predetermined type; wherein the personal identification information is personnel information provided by personnel to be authenticated or is obtained based on the personnel information.
Since the type of authenticated information stored in a database for storing authenticated information, such as the authenticated document information database or the authenticated person information database described above, is determined, in order to smoothly perform identity authentication, it is necessary for a user (i.e., a person to be authenticated) to provide personal identification information having a type consistent with that of the authenticated information stored in the database. For this purpose, the instruction information may be output to instruct the person to be authenticated to input the predetermined type of person information. The indication information may be output in the form of text, image, voice, or the like, for example.
For example, a "name" may be displayed on the display screen: and identification card number: "such indication information indicates that the person to be authenticated inputs corresponding person information in a blank place behind the information type (i.e., name and identification card number). The personal information input by the person to be authenticated can be directly used as personal identification information, and the personal identification information can be obtained by converting the personal identification information.
For another example, a voice prompt such as "please show your identity card" may be played through a speaker; after the person to be authenticated presents their identity card, an image of the identity card may be captured to obtain an identity card image. In this example, the personal identification information is the identity card image.
Outputting indication information that instructs the person to be authenticated to provide person information of a predetermined type ensures that personal identification information is obtained which can actually be compared with the authenticated information, so that identity authentication can be carried out successfully. In addition, outputting the indication information can improve the interaction experience between the user and the identity authentication device.
As is clear from the above, the personal identification information may be original information such as a certificate number, a name, a certificate face, or a live acquisition face, or it may be a transformed value obtained by transforming such original information. According to an embodiment of the present invention, in the case where the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of a live acquisition face, step S210 may include: acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face, and a live acquisition face; and transforming the initial information based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
When the person information of the authenticated person is stored in the database, the person information of the authenticated person may be transformed using a predetermined algorithm (e.g., some sort of hashing algorithm) to obtain authenticated information of the authenticated person. The conversion method is arbitrary and can be set as needed. The transformation process is understood to be an encoding process.
The initial information of the person to be authenticated obtained by the identity authentication device is usually information which is not transformed, such as a certificate number, a name, a certificate face, a field acquisition face, and the like. To facilitate subsequent comparison with the authenticated information, the initial information may be transformed using an algorithm consistent with the algorithm used to generate the authenticated information to obtain personal identification information.
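As a concrete illustration of such a transformation, the sketch below uses SHA-256 purely as a stand-in for the predetermined algorithm; the patent leaves the algorithm unspecified, requiring only that the same algorithm be used on both the stored authenticated information and the initial information.

```python
import hashlib

def to_personal_identification_info(initial_info: str) -> str:
    """Transform a raw field (e.g. certificate number or name) with the same
    algorithm used when building the authenticated-information database.
    SHA-256 here is only a stand-in for that predetermined algorithm."""
    return hashlib.sha256(initial_info.encode("utf-8")).hexdigest()

# The subsequent comparison then operates on transformed values only, e.g.:
# to_personal_identification_info(entered_certificate_number) == stored_transformed_value
```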
In the case where the personal identification information is the certificate information in the certificate image, the identity authentication method 200 may further include, before step S240: performing additional judgment operation by using the certificate image and/or the face image to obtain an additional judgment result; step S240 may include: and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
The additional judging operations may include one or more operations that judge the authenticity or consistency of the certificate in the certificate image and/or of the face in the face image. For example, the additional judging operation may include a certificate authenticity judging operation and/or a face consistency judging operation. Adding, on top of the authenticated-certificate judgment and the living body detection, further operations that judge the authenticity or consistency of the certificate and/or the face helps to further improve the reliability of the identity authentication result, and thus the security of applications that rely on identity authentication.
An embodiment of the additional judging operation is described below by way of example.
According to one embodiment, the additional determining operation may include a certificate authenticity determining operation. The certificate authenticity judging operation may include: judging whether the certificate in the certificate image is a true certificate or not so as to obtain a certificate true or false judgment result. According to another embodiment, the additional judging operation may include a face consistency judging operation. The face consistency judging operation may include: acquiring a certificate face of a person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain a face consistency judging result. The two embodiments described above are described below in connection with fig. 3 and 4, respectively.
Fig. 3 shows a schematic flow chart of an identity authentication method 300 according to another embodiment of the invention. Fig. 3 illustrates an embodiment in which personal identification information of a person to be authenticated is certificate information in a certificate image. Steps S310, S330 to S350 of the authentication method 300 shown in fig. 3 are already described above in the description of steps S210 to S230 of the authentication method 200 shown in fig. 2, and are not repeated here. According to the present embodiment, before step S360, the identity authentication method 300 may further include step S320. In step S320, the certificate authenticity judgment operation described above is performed. In step S360, it is determined whether the identity of the person to be authenticated is legal or not according to the certificate authentication result (i.e., the information authentication result), the living body detection result, and the certificate authenticity judgment result.
After the certificate authenticity judging operation is performed, a certificate authenticity judging result can be obtained. Illustratively, the certificate authenticity judging result may be one of 1 and 0, where 1 indicates that the certificate in the certificate image is a genuine certificate and 0 indicates that it is a fake certificate. Alternatively, the certificate authenticity judging result may be any value in the range [0, 1], indicating the confidence that the certificate in the certificate image is a genuine certificate. A fake certificate may be, for example, a certificate re-photographed from the screen of a device such as a mobile phone or computer, or a certificate forged using computer graphics techniques.
Illustratively, if any one of the certificate authentication result, the living body detection result, and the certificate authenticity judgment result is 0, it may be determined that the identity of the person to be authenticated is not legal, otherwise it may be determined that the identity of the person to be authenticated is legal.
Two exemplary embodiments of step S320 are described below.
In one example, step S320 may include: extracting image features of the certificate image; inputting the image characteristics into a trained certificate classifier to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
Because of interference between the periodic structure of the display and the photosensitive element of the image acquisition device, obvious periodic color stripes, called "moire", appear in an image obtained by re-photographing a certificate photo shown on a computer or mobile phone screen. Moire is therefore an important clue for distinguishing a genuine certificate from a re-photographed one. Since moire is periodic, its characteristics are particularly apparent in the frequency domain. In addition, the color of moire differs from the colors of a real certificate. It is thus possible to identify, based on moire, whether the certificate in the certificate image is a re-photographed certificate.
Illustratively, the image features may include, but are not limited to, at least one of spectral features, texture features, and color features.
The document classifier involved in the document authenticity determination operation may be trained beforehand using a large number of sample document images. Illustratively, the "classifier" described herein may be any existing or future-possible machine-learning-based classifier, such as a support vector machine (Support Vector Machine, SVM), or the like.
Taking an identity card as an example, the training process of the certificate classifier may include: collecting and labeling identity card images containing real identity cards and identity card images containing re-photographed identity cards; computing the spectral information of each of these images as its image features; and training a classifier model using the image features of the images containing real identity cards as positive samples and the image features of the images containing re-photographed identity cards as negative samples, so as to obtain the identity card classifier. Then, in the actual identity authentication process, the spectral information of an acquired identity card image can be computed as its image features, and the extracted image features can be input into the trained identity card classifier to judge whether the identity card in the acquired image is a re-photographed one.
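A minimal sketch of this training procedure is shown below, using a log-magnitude spectrum crop as the image feature and an SVM as the classifier model. The feature size, the use of scikit-learn, and the grayscale-input assumption are all illustrative choices, not requirements of the patent.

```python
import numpy as np
from sklearn.svm import SVC

def spectrum_feature(gray_image, size=64):
    """Central log-magnitude spectrum as a fixed-length feature; moire from
    screen recapture tends to show up as periodic peaks here. Assumes a
    grayscale image of at least size x size pixels."""
    f = np.fft.fftshift(np.fft.fft2(gray_image))
    mag = np.log1p(np.abs(f))
    h, w = mag.shape
    center = mag[h // 2 - size // 2:h // 2 + size // 2,
                 w // 2 - size // 2:w // 2 + size // 2]
    return center.flatten()

def train_certificate_classifier(real_images, recaptured_images):
    """real_images / recaptured_images: lists of grayscale numpy arrays."""
    X = [spectrum_feature(img) for img in real_images + recaptured_images]
    y = [1] * len(real_images) + [0] * len(recaptured_images)
    clf = SVC(probability=True).fit(X, y)
    return clf  # clf.predict_proba([feat])[0, 1] approximates the genuine-certificate confidence
```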
The certificate authenticity judgment result output by the certificate classifier can be the confidence that the certificate in the certificate image is a real certificate. The confidence may be any value in the range [0, 1].
In another example, step S320 may include: identifying an image block containing identification information of the certificate from the certificate image; identifying the certificate identification information in the image block containing the certificate identification information to obtain a certificate authenticity judgment result; the certificate authenticity judging result is the confidence that the certificate in the certificate image is a real certificate.
The certificate identification information may be any information capable of identifying a genuine certificate. For example, the identification information of the certificate may include national logo patterns on an identity card or social security card, some special anti-counterfeiting marks, etc. For example, an authentic document typically has a relatively covert security marking, and the authenticity of the document can be determined by identifying the security marking.
The two examples above, judging certificate authenticity from image features and judging it from certificate identification information, can also be implemented together; that is, certificate authenticity can be judged based on both the image features of the certificate image and the certificate identification information in it. Those skilled in the art will understand how to implement this combined judgment from the above description, so it is not repeated here.
It should be understood that the order of execution of the steps of the identity authentication method 300 shown in fig. 3 is merely an example and not a limitation, and for example, the step S320 may be executed at any time between the step S310 and the step S360.
Fig. 4 shows a schematic flow chart of an identity authentication method 400 according to another embodiment of the invention. Fig. 4 illustrates an embodiment in which personal identification information of a person to be authenticated is certificate information in a certificate image. Steps S410 to S440 of the authentication method 400 shown in fig. 4 have been described above in the description of steps S210 to S240 of the authentication method 200 shown in fig. 2, and are not repeated here. According to the present embodiment, before step S470, the identity authentication method 400 may further include steps S450 and S460. In steps S450 and S460, the face consistency determination operation described above is performed. In step S470, it is determined whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result, and the face consistency determination result.
In step S450, a certificate face of the person to be authenticated is acquired from the certificate image.
In one example, step S450 may include: and detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
A face (typically a face photograph) is often included on the certificate; it is referred to herein as the "certificate face" to distinguish it from the face in the face image. In the case where the certificate includes a face, that face can be detected from the certificate image. The face detected from the certificate image can then be used directly as the certificate face of the person to be authenticated, for comparison with the face in the face image. As noted above, the face detected from the certificate image can also serve as personal identification information.
In another example, step S450 may include: performing character recognition on the certificate image to obtain character information in the certificate image; searching matched certificate information from an authenticated certificate information database based on text information in the certificate image; and determining the certificate face in the searched and matched certificate information as the certificate face of the person to be authenticated.
Some certificates may not include a face; in that case the certificate information in the authenticated certificate information database can be used to find the certificate face of the person to be authenticated. Of course, when the certificate does include a face, the certificate face can also be found in this way.
For example, the document image is an identity card image of person X, from which text recognition can be performed to identify text information such as an identity card number, and then matching identity card information can be searched from an identity card database of the public security system based on the text information such as the identity card number. If the identification card information of the person X is recorded in the identification card database, the matched identification card information can be searched. The identification card information may typically include basic information such as identification card number, name, sex, face photo, etc. of person X. The face photo is the required certificate face. The credential face may then be compared to faces in previously acquired face images.
In step S460, the certificate face of the person to be authenticated is compared with the face in the face image to obtain a face consistency judgment result.
The face captured by the image acquisition device (i.e., the face in the face image) is compared with the certificate face. If the similarity between the two is greater than a preset similarity threshold, the captured face and the certificate face can be considered to belong to the same person; otherwise, they can be considered to belong to different persons. A face consistency judgment result is thus obtained. Illustratively, the consistency judgment result may be the similarity between the certificate face and the face in the face image, i.e., any value in the range [0,1]. Alternatively, the consistency judgment result may be one of 1 and 0, where 1 indicates that the certificate face and the face in the face image belong to the same person and 0 indicates that they do not.
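The comparison itself can be sketched as follows; the face embeddings are assumed to come from some face feature extractor, and the 0.8 similarity threshold is an example value rather than one prescribed by this embodiment:

```python
# Sketch only: embeddings are assumed to come from a face feature extractor;
# the 0.8 threshold is an example value, not one prescribed by the patent.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def face_consistency(certificate_face_emb: np.ndarray,
                     captured_face_emb: np.ndarray,
                     threshold: float = 0.8,
                     binary: bool = False):
    """Compare the certificate face with the captured face.

    Returns either the raw similarity clipped to [0, 1] or a binary 0/1
    decision, mirroring the two result formats described above.
    """
    sim = max(0.0, cosine_similarity(certificate_face_emb, captured_face_emb))
    return (1 if sim > threshold else 0) if binary else sim

# Usage with dummy embeddings standing in for real face features
emb_a = np.random.rand(128)
emb_b = emb_a + 0.01 * np.random.rand(128)
print(face_consistency(emb_a, emb_b, binary=True))  # almost certainly 1
```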
Illustratively, if any one of the certificate authentication result, the living body detection result, and the face consistency determination result is 0, it may be determined that the identity of the person to be authenticated is not legal, otherwise it may be determined that the identity of the person to be authenticated is legal.
It should be understood that, as with fig. 3, the execution order of the steps of the identity authentication method 400 shown in fig. 4 is merely an example and not a limitation. For example, step S450 may be executed before step S420, after step S420 and before step S430, after step S430 and before step S440, or simultaneously with step S420 or S430; step S460 may be executed before, after, or simultaneously with step S440. Of course, step S450 may also be executed simultaneously with step S440.
Embodiments of the certificate authenticity judgment operation and the face consistency judgment operation are described above in connection with fig. 3 and fig. 4, respectively. It should be understood that the additional judgment operation may include both the certificate authenticity judgment operation and the face consistency judgment operation. That is, during identity authentication, the authenticity of the certificate and the consistency of the face can both be judged, and whether the identity of the person to be authenticated is legal can be determined according to four results: the certificate authentication result, the living body detection result, the certificate authenticity judgment result, and the face consistency judgment result.
According to the embodiment of the invention, in the process of determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result, each result of the certificate authentication result, the living body detection result and the additional judgment result has a respective weight coefficient.
A weight coefficient may be assigned in advance to each of the certificate authentication result, the living body detection result, and the additional judgment result (including, for example, the certificate authenticity judgment result and/or the face consistency judgment result described above). The size of the weight coefficient of each result may be determined as needed, and the present invention is not limited thereto.
For example, the certificate authentication result, the living body detection result, and the additional judgment result may be weighted-averaged based on their weight coefficients to obtain an averaged result. The averaged result is then compared with a preset threshold: if it is greater than the threshold, the identity of the person to be authenticated can be considered legal; otherwise, it is considered illegal.
In the case where the personal identification information of the person to be authenticated is the certificate information in the certificate image, the certificate authentication result, the certificate authenticity judgment result, and the face consistency judgment result may each be any value in the range [0,1]. Of course, the values of the three may also be restricted to 0 or 1. The value of the living body detection result is one of 0 and 1. The results may therefore be weighted-averaged or arithmetically averaged to obtain an averaged result, which is then compared with the threshold.
Of course, various detection or judgment results participating in identity authentication can be directly summed up simply to obtain a total result, and the total result is compared with a threshold value to judge whether the identity of the person to be authenticated is legal or not.
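A minimal sketch of such a weighted fusion, assuming example weights and an example decision threshold of 0.7 (none of these numbers come from the patent), is:

```python
# Minimal sketch of the weighted fusion described above; the weight values and
# the 0.7 decision threshold are arbitrary examples, not values from the patent.
def authenticate(results: dict, weights: dict, threshold: float = 0.7) -> bool:
    """Each result is a score in [0, 1]; the living body detection result is 0 or 1."""
    total_weight = sum(weights[name] for name in results)
    weighted_avg = sum(results[name] * weights[name] for name in results) / total_weight
    return weighted_avg > threshold  # True: identity considered legal

results = {
    "certificate_authentication": 0.95,
    "living_body_detection": 1,
    "certificate_authenticity": 0.90,
    "face_consistency": 0.85,
}
weights = {
    "certificate_authentication": 0.3,
    "living_body_detection": 0.3,
    "certificate_authenticity": 0.2,
    "face_consistency": 0.2,
}
print(authenticate(results, weights))  # True
```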
Because each result has its own weight coefficient, the identity authentication system can conveniently adjust how much each judgment or detection operation involved in identity authentication contributes to the final decision according to its importance, which further improves the accuracy of identity authentication.
According to an embodiment of the present invention, step S220 (S330, S420) may include: performing character recognition on the certificate image to obtain character information in the certificate image; searching in an authenticated certificate information database based on the text information in the certificate image to obtain a certificate authentication result; the certificate authentication result is the confidence that the certificate information in the certificate image is authenticated certificate information.
As described above, the text information in the certificate image may be recognized from the certificate image, and matching certificate information may then be searched for in the authenticated certificate information database based on the recognized text information. The search result is typically the probability (which may also be referred to as the confidence) that matching certificate information exists in the authenticated certificate information database, in which case the certificate authentication result may be the search result.
According to an embodiment of the present invention, performing text recognition on the document image to obtain text information in the document image may include: positioning characters in the certificate image to obtain an image block containing the characters; and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
The step of character recognition of the document image may be performed in any suitable character recognition manner. One embodiment of the text recognition step is described below.
First, characters in the document image can be positioned, and the positions of the characters can be determined. Image blocks containing text can then be extracted from the document image. For example, the document image may be input into a trained neural network to locate text in the document image.
For example, a plurality of sample document images may be collected, and the location of text on the sample document images may be indicated manually or by machine labeling. Neural networks for locating the position of text are trained by machine learning algorithms based on a large number of annotated sample document images. The certificate image obtained in the actual identity authentication process is input into a trained neural network, and the neural network can output the positions of characters in the certificate image, for example, the vertex coordinates of the area where the characters are located.
Subsequently, characters in the image block (character area) containing the characters can be identified, and a character identification result is obtained. Identifying text in an image block containing text refers to the process of converting the image content of the image block containing text into a character string. Illustratively, the recognition may be performed using conventional optical character recognition (Optical Character Recognition, OCR) methods: each character is firstly segmented by using binarization operation, and then all characters are identified in a template matching or pattern classification mode.
Alternatively, a sliding window (sliding window) recognition method may be used to locate and recognize text from the document image without relying on the results of the binary segmentation.
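The pipeline can be sketched as follows. This is an illustrative, assumption-laden example: a simple contour-based locator stands in for the trained text-localization network described above, pytesseract (with Chinese "chi_sim" language data installed) stands in for the recognizer, and OpenCV 4.x is assumed.

```python
# Illustrative pipeline sketch; the locator below is a stand-in, not the patent's
# neural-network locator.
import cv2
import pytesseract

def locate_text_blocks(document_img):
    """Return bounding boxes (x, y, w, h) of candidate text regions."""
    gray = cv2.cvtColor(document_img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Merge characters on the same line into one connected region.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    merged = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(merged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

def recognize_text(document_img):
    """OCR each located block and return the non-empty strings."""
    texts = []
    for (x, y, w, h) in locate_text_blocks(document_img):
        block = document_img[y:y + h, x:x + w]
        texts.append(pytesseract.image_to_string(block, lang="chi_sim").strip())
    return [t for t in texts if t]

# Usage: texts = recognize_text(cv2.imread("certificate.jpg"))
```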
Before recognizing the text in the image block containing the text, the identity authentication method 200 (300, 400) may further include: and correcting the image block containing the characters to be in a horizontal state.
A step of adjusting (correcting) the position of the text may further be included between locating the text in the certificate image and recognizing the text in the image block containing the text. In practice, the certificate in the certificate image may be tilted to some degree, so the image block containing the text may also be tilted. Therefore, before recognizing the text, the image block containing the text (i.e., the area where the text is located) can be corrected into a horizontal state. For example, the step of locating the text already yields the coordinates of the four vertices of the area where the text is located, i.e., the image block containing the text, so it is only necessary to rotate that block into a horizontal state according to those coordinates.
Correcting the image block containing the characters to be in a horizontal state can facilitate the subsequent recognition of the characters in the image block containing the characters.
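Assuming the locator has produced the four vertex coordinates of a text region, a hedged sketch of this rectification using a perspective transform (one of several reasonable ways to implement the rotation described above) is:

```python
# Sketch under the assumption that `vertices` holds the four corner points of
# the text region, ordered top-left, top-right, bottom-right, bottom-left.
import cv2
import numpy as np

def rectify_text_block(document_img, vertices):
    pts = np.array(vertices, dtype=np.float32)
    width = int(max(np.linalg.norm(pts[0] - pts[1]), np.linalg.norm(pts[3] - pts[2])))
    height = int(max(np.linalg.norm(pts[0] - pts[3]), np.linalg.norm(pts[1] - pts[2])))
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]], dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(pts, dst)
    # The returned block is axis-aligned ("horizontal") and ready for recognition.
    return cv2.warpPerspective(document_img, matrix, (width, height))
```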
According to an embodiment of the present invention, after the text in the image block containing the text has been recognized to obtain the text information in the certificate image, performing text recognition on the certificate image to obtain the text information in the certificate image may further include: outputting the text information in the certificate image for viewing by a user; receiving text correction information input by the user; comparing the corrected text indicated by the text correction information with the corresponding text in the text information in the certificate image; and, if the difference between the corrected text indicated by the text correction information and the corresponding text in the text information in the certificate image is smaller than a preset difference threshold, updating the text information in the certificate image with the text correction information.
In the text recognition process, a text correction step can be added. When the OCR method is adopted to recognize the characters, recognition errors or recognition failures can exist for some rarely used characters, near characters and characters with many strokes. Therefore, the user is allowed to modify the characters, the problems existing in character recognition can be flexibly and conveniently solved, and the precision of character recognition is improved. The user may be a person to be authenticated as described herein, or may be other than a person to be authenticated, such as a manager of an identity authentication system, or the like.
For example, the text information recognized from the certificate image may be output for viewing by the user in the form of a text display, voice playback, or the like. When the user finds an error in the text recognition result, the user can input text correction information. After the text correction information input by the user is received, the corrected text indicated by the text correction information can be compared with the corresponding recognized text. If the difference between the two is smaller than the preset difference threshold, the text information in the certificate image can be updated with the text correction information; otherwise, it is not updated. For example, if a character on the certificate is recognized as one character and the text correction information entered by the user asks to replace it with a completely different character, the identity authentication system may reject the correction request because the difference between the two is too large, and the character is not corrected.
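A minimal sketch of this acceptance check, using Python's standard-library SequenceMatcher as the string-difference measure and an assumed difference threshold of 0.5:

```python
# The 0.5 difference threshold is an assumed example value, not from the patent.
from difflib import SequenceMatcher

def apply_correction(recognized: str, corrected: str, max_difference: float = 0.5) -> str:
    """Accept the user's correction only if it stays close to the recognized text."""
    difference = 1.0 - SequenceMatcher(None, recognized, corrected).ratio()
    return corrected if difference < max_difference else recognized

print(apply_correction("王小明", "王晓明"))  # small difference: correction accepted
print(apply_correction("一", "万"))          # completely different character: rejected
```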
According to an embodiment of the present invention, step S230 (S340 and S350, or S430 and S440) may include: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring face images of personnel to be authenticated acquired in real time; detecting a face in the face image; and judging whether the human face in the human face image executes the living body action indicated by the living body action instruction so as to obtain a living body detection result.
In one example, living body motion instructions may be generated and output by a client device (e.g., a mobile terminal including a camera, VTM, etc.), face images (including still images or video) may be acquired, and living body detection may be performed using the face images. In another example, the living body action instruction may be generated by the cloud server, the living body action instruction is output by the client device (e.g., a mobile terminal including a camera, VTM, etc.) and the face image is acquired, and then the client device uploads the acquired face image to the cloud server, and the cloud server detects the face in the face image and determines the authenticity of the face. If the face is judged to be a real face, the person to be authenticated can be considered to be a living body, otherwise, the person to be authenticated can be considered not to be a living body. Illustratively, the cloud server may include a trained face authenticity classifier and a false face category classifier. The face authenticity classifier can be used for judging the authenticity of the face, and the false face type classifier can be used for judging the type of the false face under the condition that the face is the false face.
An implementation of an embodiment of living body detection based on the action of a face in a face image is described below by way of example to facilitate understanding of the present embodiment.
The living body action instruction may instruct a person to be authenticated to make a corresponding living body action according to the instruction. The living body motion indicated by the living body motion command may be a single static motion (corresponding to one gesture) or a variable motion. For example, the living body motion instructions may be generated and output prior to the acquisition of the face image, without change during the acquisition of the face image. The living body action instructions may also be a continuous instruction sequence, i.e. different instructions are continuously generated and output in the process of acquiring the face image, and instruct the person to be authenticated to change the living body action made by the person following the instructions.
The living body action may be, for example, pressing the skin of both cheeks with a finger, puffing out both cheeks with air held in the mouth, or reading a piece of text aloud. While the person to be authenticated performs one or more living body actions, face images of the person can be acquired and it can be judged whether the living body actions performed are acceptable: if so, the living body detection succeeds; if not, it fails. For example, if the living body action indicated by the living body action instruction is reading a piece of text aloud, face images during the reading can be collected and it can be judged whether the lip movement of the face in the face images matches the lip movement corresponding to that text; if it matches, the living body detection succeeds.
According to an embodiment of the present invention, step S230 (S340 and S350, or S430 and S440) may include: generating a living body action instruction, wherein the living body action instruction is used for indicating a person to be authenticated to execute a corresponding living body action; acquiring face images of personnel to be authenticated acquired in real time; capturing a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed from the acquired face image; and inputting the skin region image before the living body action is executed by the person to be authenticated and the skin region image after the living body action is executed into a skin elasticity classifier to obtain a living body detection result.
When a person to be authenticated performs one or more living body actions, skin area images before and after the person to be authenticated performs the living body actions can be captured from the acquired face images, respectively. Whether the person to be authenticated starts to execute the living body action can be judged according to the acquired face image. For example, a face in the face image may be monitored, and a time at which the person to be authenticated starts to perform the living body action may be determined according to a change in the state of the face. The face image acquired before the start time is used as the face image before the person to be authenticated performs the living body action. Subsequently, the faces in the face images may be continuously monitored for a period of time, which is an estimated duration of the living body motion. And taking the face image acquired after the duration time is over as the face image after the person to be authenticated performs the living body action. The skin area images before and after the living body action of the person to be authenticated is performed can be extracted from the face images before and after the living body action of the person to be authenticated is performed, respectively.
The obtained skin region image may then be input into a skin elasticity classifier, which is a pre-trained classification model. For example, if the collected human face skin is living skin, the skin elasticity classifier may output 1, otherwise output 0.
According to an embodiment of the present invention, capturing, from a captured face image, a skin area image before a living body motion is performed and a skin area image after the living body motion is performed of a person to be authenticated includes: selecting a face image before the living body action of the person to be authenticated and a face image after the living body action of the person to be authenticated from the collected face images; positioning a face in a face image before the living body action of the person to be authenticated is executed and a face in a face image after the living body action is executed by using a face detection model; positioning key points of a face in a face image before the living body action of a person to be authenticated is executed and a face image after the living body action is executed by using a face key point positioning model; and dividing the areas of the faces in the face image before the living body action of the person to be authenticated is executed and the face in the face image after the living body action is executed according to the face positions and the key point positions obtained by positioning, so as to obtain a skin area image before the living body action of the person to be authenticated is executed and a skin area image after the living body action is executed.
The extraction of the skin area image can be realized based on the existing face detection and face key point positioning algorithm. For example, a face image before the person to be authenticated performs the living body action may be input into a trained face detection model and a face key point positioning model, respectively, to obtain a face position (for example, coordinates of a face contour point) and a key point position (coordinates of each key point) respectively. The key points may be any point on the face, such as the left eye corner, nose tip, left lip corner, etc. Of course, the key points may be the face contour points. For example, a face region may be segmented into a series of triangular patches according to the face position and the key point position, and a triangular patch image block located in the chin, cheekbone, cheek, etc. region may be taken as a face skin region, and a skin region image may be obtained. The extraction manner of the skin region image after the authentication person performs the living body action is similar to the above manner, and will not be described again. The division method of the regions, the number of the selected regions as the skin regions of the face, and the positions included in each region may be set as needed for each face image, and the present invention is not limited thereto.
In one example, the face detection model and the face key point localization model may be implemented using deep neural networks. The deep neural network is a network capable of autonomous learning, and the deep neural network can accurately and efficiently detect and locate faces and key points in face images.
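To illustrate the triangular-patch segmentation described above, the following sketch uses OpenCV's Subdiv2D to triangulate landmark points (assumed to come from the key-point localization model) and to cut out one triangular patch as a candidate skin region; selecting which triangles fall on the chin, cheekbone, or cheek is not shown, since it depends on the landmark convention in use.

```python
# Illustrative sketch only: landmark coordinates are assumed inputs.
import cv2
import numpy as np

def triangulate_face(face_img, landmarks):
    """Return triangles as rows (x1, y1, x2, y2, x3, y3) covering the landmarks."""
    h, w = face_img.shape[:2]
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for (x, y) in landmarks:
        subdiv.insert((float(x), float(y)))
    return subdiv.getTriangleList()

def triangle_skin_patch(face_img, triangle):
    """Mask out one triangular patch of the face as a candidate skin region."""
    mask = np.zeros(face_img.shape[:2], dtype=np.uint8)
    pts = np.array(triangle, dtype=np.int32).reshape(3, 2)
    cv2.fillConvexPoly(mask, pts, 255)
    return cv2.bitwise_and(face_img, face_img, mask=mask)
```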
According to an embodiment of the present invention, the identity authentication method 200 (300, 400) may further include: obtaining a sample face image, wherein the positions of the faces and the positions of key points of the faces in the sample face image are marked; and training a neural network by using the sample face image to obtain a face detection model and a face key point positioning model.
A large number of (e.g., about 10000) sample face images can be collected in advance, and the positions of a series of key points such as the corners of eyes, corners of mouth, nose wings, highest points of cheekbones and the like of the face and the positions of contour points of the face can be marked in each sample face image in a manual mode. Then, a machine learning algorithm (such as deep learning or a regression algorithm based on local features) may be used to perform neural network training with the labeled sample face image as input, so as to obtain a required face detection model and a face key point positioning model.
According to an embodiment of the invention, the identity authentication method 200 (300, 400) may further comprise a training step of the skin elasticity classifier. A schematic flow chart of the training step S500 of the skin elasticity classifier according to one embodiment of the present invention is described below with reference to fig. 5.
As shown in fig. 5, the training step S500 of the skin elasticity classifier includes the following steps.
In step S510, a face image before the living body action is performed by the real person and a face image after the living body action is performed are acquired, and a face image before the living body action is performed by the dummy person and a face image after the living body action is performed are acquired.
In step S520, a skin area image before the real person performs the living body action and a skin area image after the real person performs the living body action are extracted, as positive sample images, from the face image before the real person performs the living body action and the face image after the real person performs the living body action.
In step S530, a skin area image before the dummy person performs the living body action and a skin area image after the dummy person performs the living body action are extracted, as negative sample images, from the face image before the dummy person performs the living body action and the face image after the dummy person performs the living body action.
In step S540, the classifier model is trained using the positive and negative sample images to obtain a skin elasticity classifier.
For example, training of the skin elasticity classifier may be performed offline. Face images before and after a real person performs a prescribed living body action can be collected in advance, and face images before and after a false person performs a prescribed living body action can be collected. The dummy may be, for example, a photograph containing a face, a video containing a face, a paper mask, or a three-dimensional (3D) model of a face, etc.
The manner of extracting the skin region image from the face image of the real person or the dummy person may refer to the above description about the manner of extracting the skin region image in the actual authentication process, which is not described herein. Illustratively, after the positive and negative sample images are obtained, the classifier model may be trained using a statistical learning method such as deep learning or SVM, thereby obtaining the skin elasticity classifier.
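A minimal offline-training sketch, assuming the skin-region image pairs have already been extracted as described above: a linear SVM over a crude resized-grayscale feature stands in for the deep learning or SVM model mentioned, purely for illustration.

```python
# Not the patent's prescribed model; an assumption-based stand-in for training.
import cv2
import numpy as np
from sklearn.svm import SVC

def pair_feature(skin_before, skin_after, size=(32, 32)):
    """Concatenate the two resized grayscale patches and their difference."""
    a = cv2.resize(cv2.cvtColor(skin_before, cv2.COLOR_BGR2GRAY), size).astype(np.float32)
    b = cv2.resize(cv2.cvtColor(skin_after, cv2.COLOR_BGR2GRAY), size).astype(np.float32)
    return np.concatenate([a.ravel(), b.ravel(), (b - a).ravel()]) / 255.0

def train_skin_elasticity_classifier(positive_pairs, negative_pairs):
    """positive_pairs / negative_pairs: lists of (skin_before, skin_after) images."""
    X = [pair_feature(before, after) for (before, after) in positive_pairs + negative_pairs]
    y = [1] * len(positive_pairs) + [0] * len(negative_pairs)
    clf = SVC(kernel="linear", probability=True)
    clf.fit(np.array(X), np.array(y))
    return clf  # clf.predict([...]) -> 1 for live skin, 0 otherwise
```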
According to an embodiment of the present invention, step S230 (S340 and S350, or S430 and S440) may include: acquiring a face image acquired by a binocular camera aiming at the face of a person to be authenticated under the irradiation of structured light; and determining whether the face of the person to be authenticated belongs to a living body according to the face image so as to obtain a living body detection result.
The above-mentioned mode of performing living body detection according to actions performed by a human face or by using a skin elasticity classifier is mainly aimed at some application scenes with weaker safety requirements. For some application scenes with high safety requirements, a mode of living body detection based on special hardware can be selected. For example, a binocular camera may be used to collect face images under structured light illumination to perform living body detection using the collected face information and structured light illumination information.
In one example, a detection parameter indicating the degree of subsurface scattering of the structured light on the face of the person to be authenticated may be determined based on the collected face image under structured-light irradiation, and whether the face of the person to be authenticated is a living body may then be determined based on the detection parameter and a predetermined parameter threshold. False faces such as 3D masks and real faces exhibit different degrees of subsurface scattering (the stronger the subsurface scattering, the smaller the image gradient, i.e., the more diffused the pattern appears). For example, the subsurface scattering of a mask made of ordinary paper or plastic is far weaker than that of a real face, while the subsurface scattering of a mask made of ordinary silica gel is far stronger than that of a real face. False faces and real faces can therefore be distinguished by judging the degree of diffusion in the image, which effectively defends against mask attacks.
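Purely as an illustration (not the patent's actual detection parameter), one crude proxy for the degree of diffusion is the mean gradient magnitude over the face region; the band limits below are invented example thresholds.

```python
# Crude illustration only: a rough proxy for diffusion due to subsurface scattering.
import cv2
import numpy as np

def diffusion_score(face_region_gray):
    """Lower mean gradient ~ stronger subsurface scattering (more diffusion)."""
    gx = cv2.Sobel(face_region_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(face_region_gray, cv2.CV_32F, 0, 1, ksize=3)
    return float(np.mean(np.sqrt(gx * gx + gy * gy)))

def is_live_by_scattering(face_region_gray, low=5.0, high=40.0):
    """Reject faces whose score falls outside the band expected for real skin
    (paper/plastic masks scatter far less, silica-gel masks far more)."""
    score = diffusion_score(face_region_gray)
    return low < score < high
```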
In another example, depth information of the face of the person to be authenticated may be obtained from the face image. In addition, the light spot pattern formed on the face of the person to be authenticated under the structured-light irradiation can be obtained, and texture information of the face can be derived from that light spot pattern. Whether the face of the person to be authenticated belongs to a living body can then be determined by combining the depth information and the texture information.
Different material structures form different light spot patterns under structured light, so the texture information of the face, i.e., the material properties of the face surface, can be obtained from the light spot pattern. If the texture information of the face of the person to be authenticated does not conform to the distribution of human skin texture, it is determined that the face does not belong to a living body and that a mask attack or the like is taking place. However, because an attacker may use a mask made of an imitation-skin material, the face cannot be determined to belong to a living body merely because its texture information conforms to the human skin texture distribution; the depth information is therefore also used in the judgment. The depth information of the face of the person to be authenticated can be obtained from face images acquired at two different viewing angles. It will be appreciated that a real face is undulating: for example, the eyes and the nose have clearly different depth coordinates, whereas a mask made of imitation skin has very little undulation and only a small difference between the depth coordinates of the eyes and the nose. Combining the depth information therefore allows a further judgment of whether the face of the person to be authenticated belongs to a living body.
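Under the assumptions that the binocular pair is calibrated and rectified and that facial landmark coordinates are available, a hedged sketch of the depth-relief check is shown below; StereoSGBM and the disparity-spread threshold are illustrative choices, not the patent's prescribed implementation.

```python
# Sketch under stated assumptions: rectified grayscale stereo pair, known landmarks.
import cv2
import numpy as np

def disparity_map(left_gray, right_gray):
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

def has_facial_relief(disparity, landmark_points, min_spread=3.0):
    """A photo or a flat imitation-skin mask gives a nearly constant disparity
    across the eyes, nose tip and cheeks; a real face does not."""
    values = [disparity[int(y), int(x)] for (x, y) in landmark_points]
    return (max(values) - min(values)) > min_spread
```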
Therefore, in this embodiment of the invention, the binocular camera is combined with structured light: a 3D face carrying the structured-light pattern is collected by the binocular camera, and living body detection is then performed either according to the degree of subsurface scattering of the structured light on the 3D face, or by combining the depth information of the face with the light spot pattern formed by the structured light on the face.
The living body detection process in the identity authentication method can also have other implementation modes according to the embodiment of the invention. Another living body detection method is described below. In this living body detection method, a true face is distinguished from a false face on a photograph or screen based on light reflection characteristics.
Exemplary implementations of the living body detection step in the identity authentication method and of the living body detection module in the identity authentication apparatus are described below. Note that "living body verification" and "living body detection" as used herein have the same meaning and are used interchangeably.
Fig. 6 illustrates an example electronic device 600 for implementing the identity authentication method and apparatus according to another embodiment of the invention. The electronic device 600 includes one or more processors 602, one or more storage devices 604, an input device 606, an output device 608, an image acquisition device 610, and a light source 612, interconnected by a bus system 614 and/or other forms of connection mechanisms (not shown). The processor 602, the storage device 604, the input device 606, the output device 608, the image acquisition device 610, and the bus system 614 of the electronic device 600 shown in fig. 6 are similar in structure and working principle to the processor 102, the storage device 104, the input device 106, the output device 108, the image acquisition device 110, and the bus system 112 shown in fig. 1, and are not described again.
The light source 612 may be a device capable of emitting light, and may include a dedicated light source such as a light emitting diode, or may include an unconventional light source such as a display screen. In the case where the authentication method and apparatus are implemented in a mobile terminal such as a smart phone, the input device 606, the output device 608, and the light source 612 may be the same display screen.
Fig. 7 shows a schematic flow chart of a living body detection step S700 (corresponding to the above-described step S230, or steps S340 and S350, or steps S430 and S440) according to one embodiment of the present invention. As shown in fig. 7, the living body detection step S700 includes the following steps.
In step S710, one or more illumination images acquired for the face of the person to be authenticated under irradiation of detection light are acquired.
For example, the detection light may be emitted to the face of the person to be authenticated using a light source. The light source may be controlled by the processor to emit light. For example, the light sources may share other light emitting devices (e.g., at least part of the area of the display screen, the light sources in the projector) as the light sources. As another example, the light source may also be a dedicated light source (e.g., one or more light emitting diodes or laser diodes arranged in a manner, such as a flash for a camera, etc.), a combination of a display screen and other types of light sources, etc.
The pattern of the detection light may include, but is not limited to, a color of the detection light, a position of the light emitting region, an intensity of the detection light, an irradiation angle of the detection light, a wavelength of the detection light, a frequency of the detection light, and the like.
For example, the pattern of the detection light may remain unchanged while the face of the person to be authenticated is irradiated, that is, the light source may irradiate the face of the person to be authenticated with a single constant light. In a preferred embodiment, the light source employed is the display screen of a mobile terminal. On the display screen, the color and brightness of each pixel can be controlled, so that the screen can emit light exhibiting a specific pattern, such as structured light. In this case, the specific color or brightness displayed in specific pixel areas can be a detection-light pattern selected, after optimization, based on a large amount of experimental data, and under this detection-light pattern, living body verification of the object to be verified can be performed quickly and accurately by a specific algorithm corresponding to that pattern. One or more illumination images may then be acquired under the irradiation of the constant detection light, and living body verification may be performed based on these illumination images.
It is preferable that the pattern of the detection light is changed at least once during irradiation of the face of the person to be authenticated. In this case, the mode change frequency of the detection light and the acquisition frequency of the image may be cooperatively controlled so that at least one illumination image may be acquired under the detection light of each mode.
More preferably, the pattern of the detection light is changed between every two consecutive moments. The time instant may be any particular point in time in a predetermined period of time. For example, the pattern of the detection light may be changed every 1 second. The mode of the detection light is continuously changed, so that richer light reflection characteristic information can be obtained, and the method is beneficial to more accurately and efficiently implementing living body verification based on the light reflection characteristic.
Alternatively, the pattern of the detection light is randomly changed or set in advance in the process of irradiating the face of the person to be authenticated. In one example, the pattern of the detection light is changed completely randomly. For example, in a preferred embodiment, the light source employed is a display screen of a mobile terminal. On the display screen, the color of each region may be controlled, and for each region, a certain RGB value is randomly selected within a predetermined RGB value range at a time as the color value of the region to be displayed. The division of the regions may be arbitrarily set, for example, each region may include one or more pixels, and the sizes of two different regions may be the same or different.
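A minimal sketch of generating such a randomly colored, region-wise detection-light frame (the 4x4 grid and the full RGB range are assumptions for illustration):

```python
# Sketch of one randomly coloured detection-light frame for a gridded screen.
import numpy as np

def random_light_pattern(screen_h, screen_w, rows=4, cols=4, rng=None):
    """Return an (H, W, 3) uint8 frame with one random RGB colour per region."""
    rng = rng if rng is not None else np.random.default_rng()
    frame = np.zeros((screen_h, screen_w, 3), dtype=np.uint8)
    region_h, region_w = screen_h // rows, screen_w // cols
    for r in range(rows):
        for c in range(cols):
            colour = rng.integers(0, 256, size=3, dtype=np.uint8)
            frame[r * region_h:(r + 1) * region_h, c * region_w:(c + 1) * region_w] = colour
    return frame

# Usage: show random_light_pattern(1920, 1080) for a moment, then regenerate.
```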
In another example, the pattern of the detection light may be preset. For example, it may be arranged that the detection light is emitted for a total of 10 seconds, with the pattern changing once per second, and with the color, position, intensity, and so on of each emission set in advance. During living body verification, the light source then emits the detection light in these 10 patterns in the predetermined order. The predetermined detection-light patterns may be patterns that previous experience has shown to be relatively effective for living body verification, which helps improve the accuracy and efficiency of living body verification.
For example, the pattern of the detection light irradiated to the face of the person to be authenticated may be dynamically changed by dynamically changing the emission color of the detection light. For another example, the pattern of the detection light irradiated to the face of the person to be authenticated may also be dynamically changed by dynamically changing the position of the light emitting region of the detection light (i.e., changing the position of the detection light). For another example, the pattern of the detection light irradiated to the face of the person to be authenticated may also be dynamically changed by dynamically changing the light emission color of the detection light and the position of the light emission region of the detection light simultaneously.
For example, the position of the light emitting region of the detection light may be dynamically changed by changing the position of the light source, which may change the position where the detection light irradiates the face of the person to be authenticated. For another example, the position of the face of the person to be authenticated irradiated with the detection light can also be dynamically changed by changing the angle of the outgoing light of the light source.
In a preferred embodiment, the light source used is a display screen of a mobile terminal, and the image capturing device is a camera (e.g. a front camera) of the mobile terminal located on the same side as the display screen. Compared with the scheme adopting an additional special light source, the scheme can be realized by adopting the existing mobile terminals such as mobile phones and the like, is not limited by external conditions, and can be better applied to application scenes such as remote account opening and the like through the personal mobile terminal.
Further, in a further preferred version of the above preferred embodiment, the light pattern employed is a combination of the color of the light and the position of the light emitting region, for example: different colors of light are emitted at different positions of the display screen at the same time; or the same color of light is emitted at different positions of the display screen at a given time, but the emitted color differs between different times; and so on. Compared with schemes that vary other light attributes, such as the light intensity, this combination of light color and light-emitting-region position achieves a better living body detection effect and reduces the stimulation of the light to human eyes, thereby improving user experience.
While the face of the person to be authenticated is irradiated with the detection light, an image of the face under that irradiation may be acquired by an image acquisition device (for example, the image acquisition device 610 of the electronic device 600) to obtain the illumination image. The image acquisition device may be controlled by the processor to acquire images, and it transmits one or more illumination images to a processor of the identity authentication system for living body verification. The number of illumination images collected under the same light pattern may be one or more, and the present invention is not limited in this respect. As will be appreciated by those skilled in the art, living body verification is performed primarily on the face; therefore, according to the embodiments herein, the objective when acquiring the illumination images, as well as the subsequent action images and real-time images, is to acquire images containing a face for living body verification.
Illustratively, the illumination image may be transmitted by a client device (e.g., a mobile terminal including a camera, a remote video teller machine (Video Teller Machine, VTM), etc.) to the electronic device 600 for processing by the processor 602 of the electronic device 600, or may be acquired by an image acquisition device 610 (e.g., a camera) included in the electronic device 600 and transmitted to the processor 602 for processing.
In step S720, it is determined whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics exhibited by the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination living body verification result.
Human skin, such as facial skin, is a diffusely reflective material, and a human face is three-dimensional. In contrast, a display screen such as a liquid crystal display (LCD) or an organic light emitting diode (OLED) display can be regarded as a self-luminous object and generally also contains a partially specular reflection component, while a photograph is generally planar and likewise contains a partially specular reflection component; in either case, the reflection characteristics are uniform overall and lack the three-dimensional characteristics of a real face. Because the light reflection characteristics of a real face differ from those of a display screen or a photograph, whether the face of the person to be authenticated belongs to a living body can be judged based on the light reflection characteristics of that face.
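Purely as a crude, non-authoritative illustration of this reflection cue (the patent does not prescribe this computation), one could correlate the emitted detection-light colors with the average face color observed in the corresponding illumination images: a real, diffusely reflecting face should track the emitted light. The face boxes, the color sequences, and the 0.5 correlation threshold below are all assumed inputs.

```python
# Crude illustration only: not the patent's prescribed reflection analysis.
import numpy as np

def mean_face_colours(illumination_frames, face_boxes):
    """Per-frame mean RGB inside the detected face box; shape (num_frames, 3)."""
    means = []
    for frame, (x, y, w, h) in zip(illumination_frames, face_boxes):
        means.append(frame[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0))
    return np.array(means)

def reflection_liveness(emitted_colours, observed_means, threshold=0.5):
    """Correlate emitted and observed colour sequences channel by channel."""
    corrs = [np.corrcoef(emitted_colours[:, c], observed_means[:, c])[0, 1]
             for c in range(3)]
    return float(np.nanmean(corrs)) > threshold
```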
In step S730, whether the person to be authenticated passes living body verification is determined based at least on the illumination living body verification result, so as to obtain a living body detection result.
In one example, the final living body detection result may be determined directly from the illumination living body verification result: if that result indicates that the face of the person to be authenticated belongs to a living body, the person to be authenticated is determined to pass living body verification, and the living body detection result may illustratively be 1; if that result indicates that the face does not belong to a living body, the person to be authenticated is determined not to pass living body verification, and the living body detection result may illustratively be 0. This approach requires little computation and is highly efficient. In another example, other living body verification methods may be combined, and the illumination living body verification result may be considered together with the living body verification results obtained by those other methods to finally determine whether the person to be authenticated passes living body verification. This approach yields higher accuracy.
As described above, because the light reflection characteristics of a face differ from those of objects such as display screens or photographs, a real face can be effectively distinguished, based on the light reflection characteristics, from a face played back on a screen or shown on a photograph. An identity authentication method and apparatus adopting this living body detection method can therefore effectively defend against screen and photo attacks, improving both the security and the user experience of the identity authentication system.
For example, the authentication method employing the living body detection method according to the embodiment of the present invention may be implemented in a device, apparatus, or system having a memory and a processor.
The identity authentication method according to the embodiment of the invention can be deployed at an image acquisition end, for example at the image acquisition end of a financial system such as a bank management system, or at a mobile terminal such as a smart phone or tablet computer. Alternatively, the identity authentication method according to the embodiment of the present invention may be deployed in a distributed manner across the server side (or cloud side) and the client side. For example, the personal identification information can be acquired and the detection light emitted at the client, where an image of the face of the person to be authenticated is also collected; the acquired personal identification information and the collected image are then transmitted to the server (or cloud), which authenticates the personal identification information, performs living body detection to obtain an identity authentication result, and returns that result to the client. The server has greater computing power than the client, so performing identity authentication on the server side can improve the authentication speed and the user experience; moreover, the server's greater computing speed allows more complex identity authentication algorithms to be adopted, which can improve the accuracy of identity authentication.
Although living body verification based on the light reflection characteristic can defend against screen or photo attacks, attackers may employ many different attack methods, and some of them, such as attacks using three-dimensional simulation masks, may be able to break through light-reflection-based living body verification. Living body verification based on the light reflection characteristic alone may not defend well against such mask attacks. Therefore, to further improve the living body verification method and the security of living body verification, living body detection can additionally combine other living body verification approaches on top of the light-reflection-based verification. An exemplary implementation is described below.
Fig. 8 shows a schematic flow chart of a living body detection step S800 (corresponding to the above-described step S230, or steps S340 and S350, or steps S430 and S440) according to another embodiment of the present invention. Steps S810 to S830 of the living body detection step S800 shown in fig. 8 correspond to steps S710 to S730 of the living body detection step S700 shown in fig. 7; those skilled in the art can understand steps S810 to S830 shown in fig. 8 with reference to the description of fig. 7, and they are not repeated here. According to the present embodiment, the living body detection step S800 may further include steps S840 to S870, and step S830 may include: determining whether the person to be authenticated passes living body verification based on the illumination living body verification result and the action living body verification result, so as to obtain a living body detection result.
In step S840, an action instruction is output, where the action instruction is used to instruct the person to be authenticated to perform a corresponding action.
For example, the action instructions may be output randomly or according to a predetermined rule. The action instructions may comprise individual instructions or a sequence of instructions consisting of a series of instructions. For example, the action instructions may instruct the person to be authenticated to nod, pan, blink, open mouth, etc.
In step S850, a plurality of action images acquired for the face of the person to be authenticated are acquired.
The image acquisition can be performed on the face of the person to be authenticated at the same time of outputting the action instruction or within a period of time after outputting the action instruction, so as to obtain a plurality of action images. Illustratively, the plurality of action images may be consecutive video frames. The motion image may also be acquired by the image acquisition device 110 described above, or by other image acquisition devices.
In step S860, an action performed by the person to be authenticated is detected based on the plurality of action images.
For example, face detection and key point recognition may be performed in each motion image, and the motion performed by the face may be determined based on the face contours and/or the face key points in the plurality of motion images, for example, by identifying the change trend of the face contours and/or the face key points in the acquired continuous plurality of motion images. Then, it can be judged whether the action performed by the face is consistent with the action indicated by the action instruction.
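To make the key-point-trend idea concrete, the following sketch detects a nod from the vertical trajectory of an assumed nose-tip landmark across consecutive action images; the landmark index, the normalization by face height, and the amplitude threshold are all assumptions for illustration.

```python
# Illustrative sketch: landmark coordinates are assumed to be normalised by face height.
import numpy as np

def detect_nod(landmark_sequence, nose_tip_index=30, min_amplitude=0.04):
    """A nod shows up as a down-then-up swing of the nose tip (image y grows downward)."""
    ys = np.array([frame[nose_tip_index][1] for frame in landmark_sequence])
    downswing = ys.max() - ys[0]   # how far the nose tip travelled down
    upswing = ys.max() - ys[-1]    # how far it came back up
    return downswing > min_amplitude and upswing > min_amplitude

def action_matches_instruction(instruction, landmark_sequence, detectors):
    """detectors: mapping from instruction name (e.g. "nod") to a detector function."""
    detect = detectors.get(instruction)
    return bool(detect and detect(landmark_sequence))

# Usage: action_matches_instruction("nod", landmark_sequence, {"nod": detect_nod})
```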
In step S870, it is determined whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action living body verification result.
For example, if the action performed by the person to be authenticated in the plurality of action images is consistent with the action indicated by the action instruction, the face of the person to be authenticated is determined to belong to a living body; if the action performed is inconsistent with the action indicated by the action instruction, or the person to be authenticated performs no action in the plurality of action images (i.e., no action is detected), the face of the person to be authenticated is determined not to belong to a living body. Of course, this is merely an example, and other ways of judging whether the face of the person to be authenticated belongs to a living body are possible; for example, if the person to be authenticated performs several actions in the plurality of action images and these actions include one consistent with the action indicated by the action instruction, the face of the person to be authenticated may be determined to belong to a living body.
Illustratively, in step S830, if both the illumination living body verification result and the action living body verification result indicate that the face of the person to be authenticated belongs to a living body, the person to be authenticated is determined to pass living body verification; if either of the two results indicates that the face does not belong to a living body, the person to be authenticated is determined not to pass living body verification. Of course, this is merely an example, and other ways of determining whether living body verification passes are possible.
It should be noted that the execution order of the above-described action-based living body verification steps (steps S840 to S870) and light-reflection-based living body verification steps (steps S810 to S820) may be set arbitrarily, and the present invention is not limited in this respect.
The living body detection method including the action-based living body verification steps can be executed independently by an image acquisition end, for example, the image acquisition end of a financial system such as a bank management system, or a mobile terminal such as a smart phone or tablet computer. Alternatively, it may be executed cooperatively by the server side (or cloud side) and the client side. For example, an action instruction may be generated at the server or the client, the client collects action images of the person to be authenticated and transmits them to the server (or cloud), the server (or cloud) performs the action-based living body verification, and the verification result is returned to the client.
The action-based living body verification approach can defend against attacks such as mask attacks, and when combined with the light-reflection-based living body verification approach it can effectively defend against a wide variety of attacks. This further ensures the security of the identity authentication system, or of any similar system adopting this living body detection method, and protects users' information security and interests, giving it extremely broad application value and market prospects.
According to an embodiment of the present invention, step S870 may include: if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is detected based on the plurality of action images acquired within a period of not more than the first preset time, it is determined that the face of the person to be authenticated belongs to the living body, and if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is not detected based on the plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to the living body.
Action instructions (for example, text or voice instructions such as "please nod" or "please open your mouth") may be output randomly to instruct the person to be authenticated to perform the corresponding actions (such as nodding or opening the mouth), and key points of the face area are detected to judge whether the action performed by the person to be authenticated is consistent with the output action instruction. If it is detected within the first preset time that the action performed by the person to be authenticated is consistent with the output action instruction, the face of the person to be authenticated is determined to belong to a living body; if, within the first preset time, the detected action is inconsistent with the output action instruction, or no action of the person to be authenticated is detected, the face of the person to be authenticated can be determined not to belong to a living body.
According to an embodiment of the present invention, the identity authentication method (200, 300 or 400) may further include: counting once each time steps S840 to S870 are executed, so as to obtain an action verification count. After step S870, the identity authentication method (200, 300 or 400) may further include: if the action living body verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting first error information and judging whether the action verification count has reached a first count threshold; if the action verification count has reached the first count threshold, proceeding to step S830; and if it has not, returning to step S840, or returning to step S810 if step S810 is executed before step S840. The first error information is used to prompt that living body verification of the person to be authenticated has failed.
For example, a counter may be provided to count the number of executions of the action-based living body verification steps (steps S840 to S870); the counter is incremented each time these steps are executed, and its output is the action verification count. After the entire living body detection step (living body detection step S800) ends, the counter may be cleared.
If the current action living body verification result indicates that the face of the person to be authenticated does not belong to a living body, the first error information may be output. The first error information may prompt that living body verification of the face of the person to be authenticated has failed and prompt the person to retry living body verification. If the number of executions of the action-based living body verification steps (the action verification count) has not yet reached the preset first count threshold, the action-based living body verification steps may be attempted again. Alternatively, if the light-reflection-based living body verification steps (steps S810 to S820) are performed before the action-based living body verification steps, it is possible to return directly to step S810, i.e., to perform both the light-reflection-based and the action-based living body verification steps once more, so as to improve the accuracy of living body verification.
The first times threshold may be any suitable value and may be set as needed; the present invention is not limited in this respect.
In practice, various unexpected situations may occur during liveness verification: the user may not perform the specified action in time, the acquired image may not be sufficiently clear, or the face detection result may not be sufficiently accurate, any of which may cause the user to be mistakenly identified as a non-living body. Therefore, to balance user experience and system security, a times threshold may be set, allowing the user to attempt liveness verification several times within a reasonable range. If the user has still not been correctly identified as a living body when the threshold is reached, it can be determined that the user does not belong to a living body.
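As a minimal sketch of the retry bookkeeping described above (the threshold value and the callables are assumptions, not taken from the patent):

```python
FIRST_TIMES_THRESHOLD = 3  # assumed value; the patent leaves it unspecified


def action_liveness_with_retries(run_action_check, report_error):
    """Repeat the action-based liveness check until it passes or the number of
    action verifications reaches the first times threshold."""
    action_verifications = 0                 # counter incremented once per execution
    while action_verifications < FIRST_TIMES_THRESHOLD:
        action_verifications += 1
        if run_action_check():
            return True
        report_error("Liveness verification failed, please try again")  # first error information
    return False                             # threshold reached without passing
```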
According to an embodiment of the present invention, before step S710 (or S810), the identity authentication method (200, 300 or 400) may further include: step S708: judging whether the image acquisition condition of the person to be authenticated meets the preset requirement, and if the image acquisition condition meets the preset requirement, turning to step S710 (or S810), wherein the image acquisition condition comprises the position of the person to be authenticated in the image acquisition area of the image acquisition device and/or the relative angle of the person to be authenticated and the image acquisition device.
Before executing the liveness verification step based on the light reflection characteristic, or any other liveness verification step, the image acquisition condition of the person to be authenticated can first be detected and judged against the preset requirement. The subsequent liveness verification steps are executed only when the image acquisition condition of the person to be authenticated meets the preset requirement. This helps ensure the quality of the images used for liveness verification (including the illumination images, the action images, and the like), so that the face in the images can be detected correctly and the accuracy of the liveness verification can be improved.
According to an embodiment of the present invention, before step S708 or simultaneously with step S708, the identity authentication method (200, 300 or 400) may further include: step S706: and outputting first prompt information, wherein the first prompt information is used for prompting a person to be authenticated to face the image acquisition device and approach the image acquisition device.
The first prompt information may be output in any suitable manner. Illustratively, step S706 may include: outputting the first prompt information in one or more of a voice form, an image form and a text form. For example, text such as "please face the screen" (facing the screen corresponds to facing the image acquisition device) may be shown on the display screen of the mobile terminal, or a prompt such as "please face the screen" may be played through the speaker of the mobile terminal.
By way of example, the identity authentication method may be implemented by an Application (APP) installed on an electronic device such as a mobile terminal. When the user opens the application and enters the living body detection stage, output of the first prompt information can begin, prompting the user to keep a suitable relative position with the mobile terminal so that the camera of the mobile terminal can collect face images suitable for liveness verification. In one example, the first prompt information may be output continuously or intermittently until the image acquisition condition of the person to be authenticated satisfies the preset requirement.
Outputting the first prompt information helps guide the user to adjust, in a timely manner, the relative position between himself or herself and the identity authentication device (mainly the image acquisition device in the identity authentication device), and the interaction between the user and the system can also improve the user experience.
According to an embodiment of the present invention, step S708 may include: acquiring a real-time image collected for the face of the person to be authenticated; outputting, in real time, a preset area for calibrating the image acquisition condition and displaying the face area in the real-time image; and judging, according to the face area detected in the real-time image, whether the image acquisition condition meets the preset requirement: if the face area is located within the preset area and the proportion of the face area in the real-time image is greater than a first preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located within the preset area, or the proportion of the face area in the real-time image is not greater than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
In one example, whether the image acquisition condition of the person to be authenticated meets the preset requirement may be determined according to the image acquired for the face of the person to be authenticated. For example, the mobile terminal may acquire real-time images through a camera and perform face detection. Face detection may obtain a face region, which is an image block containing a face. Whether the image acquisition condition of the person to be authenticated meets the preset requirement can be judged according to the position of the face area in the real-time image and the proportion of the face area in the real-time image. For example, the preset area may be defined in the real-time image. The position of the face of the person to be authenticated in the image acquisition area of the image acquisition device can be defined through the preset area. The size of the face region can reflect the distance and the relative angle between the person to be authenticated and the image acquisition device. The preset area and the first preset ratio can be set according to the needs, and the invention is not limited to this.
For example, if the face area of the person to be authenticated is located within the preset area but the proportion of the face area in the real-time image is smaller than the first preset proportion (for example, two thirds), the person to be authenticated may be tilted too much relative to the image acquisition device and/or too far away from it, and the image acquisition condition can be considered not to meet the preset requirement.
Illustratively, the identity authentication method (200, 300 or 400) may further comprise: if the proportion of the face area in the real-time image is not greater than a first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to approach the image acquisition device.
Alternatively, the first acquisition prompt may be output in one or more of a voice form, an image form, and a text form. For example, if the proportion of the face area in the real-time image is found to be not greater than the first preset proportion, a prompt message such as "please get close to the camera" (or "please get close to the mobile phone") may be displayed on the display screen.
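The check of this first variant can be sketched as follows; the box representation, the example proportion of two thirds, and the prompt text are illustrative assumptions rather than the patented implementation.

```python
FIRST_PRESET_PROPORTION = 2 / 3  # example value mentioned in the text


def acquisition_condition_ok_v1(face_box, preset_box, image_size, show_prompt):
    """Variant 1: the face region must lie inside the preset area and occupy
    more than the first preset proportion of the real-time image.
    Boxes are (x, y, w, h); image_size is (width, height)."""
    fx, fy, fw, fh = face_box
    px, py, pw, ph = preset_box
    inside = (fx >= px and fy >= py and
              fx + fw <= px + pw and fy + fh <= py + ph)
    proportion = (fw * fh) / (image_size[0] * image_size[1])
    if proportion <= FIRST_PRESET_PROPORTION:
        show_prompt("Please move closer to the camera")  # first acquisition prompt information
    return inside and proportion > FIRST_PRESET_PROPORTION
```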
According to an embodiment of the present invention, step S708 may include: acquiring a real-time image collected for the face of the person to be authenticated; outputting, in real time, a preset area for calibrating the image acquisition condition and displaying the face area in the real-time image; and judging, according to the face area detected in the real-time image, whether the image acquisition condition meets the preset requirement: if the face area is located within the preset area and the proportion of the face area in the preset area is greater than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located within the preset area, or the proportion of the face area in the preset area is not greater than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
In one example, whether the image acquisition condition of the person to be authenticated meets the preset requirement may be determined from the image collected for the face of the person to be authenticated. For example, the mobile terminal may collect real-time images through a camera and perform face detection. Face detection yields a face region, i.e., an image block containing the face. Whether the image acquisition condition meets the preset requirement can then be judged from the position of the face area in the real-time image and the proportion of the face area in the preset area. For example, a preset area may be displayed on the display screen; the relative position between the person to be authenticated and the screen can be constrained through this preset area. The size of the face region reflects the distance and the relative angle between the person to be authenticated and the image acquisition device: as the face moves from far to near relative to the screen, the face region displayed on the screen in real time grows from small to large, and when the face is sufficiently close to the screen, the size of the displayed face region satisfies the preset condition. Of course, the size of the face region displayed in real time may instead be adjusted only once the face is sufficiently close to the screen so as to satisfy the preset condition, which is not limited herein. The preset area and the second preset proportion can be set as needed, and the present invention is not limited in this respect.
For example, if the face area of the person to be authenticated is located within the preset area but the proportion of the face area in the preset area is smaller than the second preset proportion (for example, two thirds), the person to be authenticated may be tilted too much relative to the image acquisition device and/or too far away from it; in this case the image acquisition condition may be considered not to meet the preset requirement.
Illustratively, the identity authentication method (200, 300 or 400) may further comprise: if the proportion of the face area in the preset area is not greater than a second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to be close to the image acquisition device.
Optionally, the second acquisition prompt information may be output in one or more of a voice form, an image form, and a text form. For example, if the proportion of the face area in the preset area is found to be not greater than the second preset proportion, a prompt message such as "please get close to the camera" (or "please get close to the mobile phone") may be displayed on the display screen.
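For the second variant, only the proportion changes: it is measured against the preset area rather than the whole real-time image. One possible reading (intersection area divided by preset-area area) is sketched below; this interpretation and the example value are assumptions.

```python
SECOND_PRESET_PROPORTION = 2 / 3  # example value mentioned in the text


def face_fills_preset_area(face_box, preset_box):
    """Variant 2: proportion of the face area within the preset area, read here
    as intersection area divided by preset-area area. Boxes are (x, y, w, h)."""
    fx, fy, fw, fh = face_box
    px, py, pw, ph = preset_box
    ix = max(0, min(fx + fw, px + pw) - max(fx, px))   # intersection width
    iy = max(0, min(fy + fh, py + ph) - max(fy, py))   # intersection height
    return (ix * iy) / (pw * ph) > SECOND_PRESET_PROPORTION
```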
According to an embodiment of the present invention, the identity authentication method (200, 300 or 400) may further include: judging, in real time, the relative position relationship between the face area and the preset area; and outputting, in real time, third acquisition prompt information based on that relative position relationship, so as to prompt the person to be authenticated to change the relative position between himself or herself and the image acquisition device so that the face area moves closer to the preset area.
For example, when the identity authentication method and device according to the embodiments of the present invention are implemented on a mobile terminal, the face region (i.e., the image block containing the face extracted from the real-time image) and an icon representing the preset area (i.e., the preset area displayed on the screen in real time) may both be displayed on the display screen of the mobile terminal in real time. Displaying the face region and the preset-area icon in real time lets the user see the current image acquisition condition and how far it is from the preset requirement, which makes it easier for the user to adjust his or her own posture or that of the image acquisition device (or of the identity authentication device containing it), so as to enter the subsequent liveness verification stage as soon as possible. Therefore, displaying the face region and the icon representing the preset area in real time can improve both the user experience and the efficiency of the liveness verification.
In addition, a third acquisition prompt message can be output to prompt the user to change the relative position relationship between the personnel to be authenticated and the image acquisition device so as to enable the face area to be close to the preset area. Optionally, the third acquisition prompt information may be output in one or more of a voice form, an image form, and a text form. For example, if the face region is found not to be located within the preset region, a prompt such as "please get close to the center of the circle" (the preset region is displayed as a circle icon on the display screen) may be displayed on the display screen. In addition, an arrow pointing to the preset area from the face area can be displayed on the display screen, so that a user can conveniently know how to move the user or the image acquisition device so that the face area enters the preset area as soon as possible. The text prompt information such as "please get close to the center of the circle" and the image prompt information such as the arrow can be displayed at the same time or alternatively.
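A hypothetical helper for the third acquisition prompt might compute the offset between the two regions and let the UI layer draw the arrow and show the text; the pixel tolerance and the prompt wording below are assumptions.

```python
PIXEL_TOLERANCE = 10  # assumed dead zone in pixels


def guidance_towards_preset_area(face_box, preset_box):
    """Return the offset (in image coordinates) from the centre of the face
    region to the centre of the preset area, plus an optional text prompt.
    A UI layer could draw the offset as an arrow from the face region to the
    preset area."""
    fx, fy, fw, fh = face_box
    px, py, pw, ph = preset_box
    dx = (px + pw / 2) - (fx + fw / 2)
    dy = (py + ph / 2) - (fy + fh / 2)
    prompt = None
    if abs(dx) > PIXEL_TOLERANCE or abs(dy) > PIXEL_TOLERANCE:
        prompt = "Please move so that your face enters the circle"  # third acquisition prompt
    return (dx, dy), prompt
```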
According to an embodiment of the present invention, step S708 may include: acquiring posture information of the image acquisition device; and judging, according to the posture information, whether the image acquisition device is placed vertically; if so, determining that the image acquisition condition meets the preset requirement, and otherwise determining that the image acquisition condition does not meet the preset requirement.
For example, when the identity authentication method according to the embodiment of the present invention is applied in a mobile terminal scenario, the posture information of the image acquisition device (i.e., the camera of the mobile terminal) may be measured using a gyroscope sensor and/or an acceleration sensor built into the mobile terminal. When the mobile terminal is placed vertically, the image acquisition device is also vertical, and in this state ideal face images can be collected. Therefore, whether the image acquisition condition of the person to be authenticated meets the requirement can be assessed based on the posture of the image acquisition device.
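One way such a verticality check might look, assuming a single accelerometer reading (in m/s^2, Android-style axes) and an assumed tolerance angle, is sketched below; it is an illustration, not the patented implementation.

```python
import math

VERTICAL_TOLERANCE_DEG = 15.0  # assumed tolerance angle


def is_device_upright(accel_x, accel_y, accel_z):
    """Decide from one accelerometer reading (m/s^2) whether the device, and
    hence its camera, is held roughly vertically: when a phone is upright,
    gravity acts mostly along the device's y axis."""
    g = math.sqrt(accel_x ** 2 + accel_y ** 2 + accel_z ** 2)
    if g == 0.0:
        return False                         # no usable reading
    tilt = math.degrees(math.acos(min(1.0, abs(accel_y) / g)))
    return tilt < VERTICAL_TOLERANCE_DEG
```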
According to an embodiment of the present invention, the identity authentication method (200, 300 or 400) may further include: counting once each time steps S710 to S720 (or steps S810 to S820) are executed, to obtain the number of illumination verifications. After step S720 (or S820), the identity authentication method (200, 300 or 400) may further include: if the result of the illumination liveness verification indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the number of illumination verifications has reached a second times threshold; if the second times threshold has been reached, proceeding to step S230 (or S330), and if it has not been reached, returning to step S208 or to step S210 (or S310). The second error information is used to prompt the person to be authenticated that the illumination liveness verification has failed.
Similarly to the action-based liveness verification step, for the liveness verification step based on the light reflection characteristic (steps S710 to S720 shown in fig. 7 or steps S810 to S820 shown in fig. 8), if it is determined that the face of the person to be authenticated does not belong to a living body, the step may likewise be attempted again; the rationale and advantages are the same as for the action-based step and are not repeated here.
Illustratively, in the case where the identity authentication method includes the above-described step S708, it is also possible to re-execute from step S708.
According to an embodiment of the present invention, before step S710 (or S810) or in the process of performing step S710 (or S810) and step S720 (or S820), the identity authentication method (200, 300 or 400) may further include: outputting second prompt information, wherein the second prompt information is used for prompting the personnel to be authenticated to remain motionless within a second preset time.
Illustratively, the second preset time may be the execution time of the liveness verification step based on the light reflection characteristic (steps S710 to S720 shown in fig. 7 or steps S810 to S820 shown in fig. 8). While this step is being executed, i.e., while the detection light is irradiating the face of the person to be authenticated, the person can be prompted to remain still during this period so that the image acquisition and the liveness verification result are not affected. For example, if movement of the person to be authenticated within the second preset time causes the image acquisition condition to no longer satisfy the preset requirement, the process may return to step S706 or step S708 and re-execute one or more of the following: judging the image acquisition condition, outputting the first prompt information, outputting the various acquisition prompts, and so on.
For example, the second prompt information may be countdown information corresponding to a second preset time. Alternatively, the countdown information may be implemented in one or more of text, dynamic images, and speech. The countdown information can facilitate the user to know the living body detection progress, and can improve the interactive experience of the user.
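A trivial sketch of the keep-still prompt with a per-second countdown is shown below; the duration and the display callable are placeholders introduced for the example.

```python
import time

SECOND_PRESET_TIME = 5  # seconds; assumed duration of the reflection-based check


def show_keep_still_countdown(display=print):
    """Show the keep-still prompt together with a per-second countdown while
    the reflection-based liveness verification runs."""
    display("Please remain still")
    for remaining in range(SECOND_PRESET_TIME, 0, -1):
        display(f"{remaining} s remaining")  # could instead drive a progress bar
        time.sleep(1)
```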
One implementation flow of the living body detection step according to an embodiment of the present invention is described below with reference to fig. 9. The application scenario shown in fig. 9 is a mobile terminal.
As shown in fig. 9, first, text such as "please face the screen" is displayed on the display screen of the mobile terminal to prompt the user to face the screen, while an icon representing the preset area (shown as a circle) and the face region detected from the real-time image are also displayed on the display screen. When the user changes the position and/or posture of his or her face, or of the mobile terminal, the text such as "please face the screen" and the preset-area icon can continue to be displayed and may remain unchanged, whereas the size and position of the face region may change, so the continuously changing face region is displayed in real time for the user to view. Then, once the image acquisition condition of the user meets the preset requirement, the next stage can be entered, namely the liveness verification step based on the light reflection characteristic.
During the execution of the liveness verification step based on the light reflection characteristic, text such as "please remain still" may be displayed on the display screen (as shown in the 2nd and 3rd images of fig. 9) to prompt the user to remain still, and countdown information may be displayed at the same time. In the 3rd image of fig. 9, the countdown information is represented by a colored progress bar drawn on the icon (i.e., the circle) representing the preset area.
After the liveness verification step based on the light reflection characteristic is completed, execution of the action-based liveness verification step may begin. As shown in the 4th image of fig. 9, text such as "please nod" is displayed on the display screen, instructing the user to perform the corresponding action.
Finally, the final living body detection result, such as the text "living body verification passed", is output on the display screen.
According to another aspect of the present invention, there is provided an identity authentication device. Fig. 10 shows a schematic block diagram of an identity authentication device 1000 according to one embodiment of the invention.
As shown in fig. 10, the identity authentication device 1000 according to the embodiment of the present invention includes an information acquisition module 1010, an authenticated information judgment module 1020, a living body detection module 1030, and an identity determination module 1040. The various modules may perform the various steps/functions of the identity authentication methods described above in connection with fig. 2-9, respectively. Only the main functions of the respective components of the authentication apparatus 1000 will be described below, and details already described above will be omitted.
The information acquisition module 1010 is configured to acquire personal identification information of a person to be authenticated. The information acquisition module 1010 may be implemented by the processor 102 in the electronic device shown in fig. 1 running program instructions stored in the storage 104.
The authenticated information judging module 1020 is configured to judge whether the personal identification information is authenticated information, so as to obtain an information authentication result. The authenticated information judgment module 1020 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The living body detection module 1030 is configured to perform living body detection on a person to be authenticated to obtain a living body detection result. The living body detection module 1030 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The identity determination module 1040 is configured to determine whether the identity of the person to be authenticated is legal, at least according to the information authentication result and the living body detection result. The identity determination module 1040 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
Fig. 11 shows a schematic block diagram of an identity authentication system 1100 according to one embodiment of the invention. The identity authentication system 1100 includes an image acquisition device 1110, a storage device 1120, and a processor 1130.
The image capturing device 1110 is used for capturing images of a person to be authenticated, such as face images, document images, and the like. Image acquisition device 1110 is optional and identity authentication system 1100 may not include image acquisition device 1110. In this case, an image for authentication may be acquired by other image acquisition means and the acquired image may be transmitted to the authentication system 1100.
The storage 1120 stores computer program instructions for implementing corresponding steps in an authentication method according to an embodiment of the present invention.
The processor 1130 is configured to execute computer program instructions stored in the storage 1120 to perform the respective steps of the authentication method according to an embodiment of the present invention, and to implement the information acquisition module 1010, the authenticated information judgment module 1020, the living body detection module 1030, and the identity determination module 1040 in the authentication device according to an embodiment of the present invention.
In one embodiment, the computer program instructions, when executed by the processor 1130, are configured to perform the steps of: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on personnel to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
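The overall flow of these four steps can be summarised by the following sketch, in which the three callables stand in for the information acquisition, authenticated-information judgment and living body detection described above; they are placeholders, not the actual modules.

```python
def authenticate(get_personal_id_info, is_authenticated_info, run_liveness_detection):
    """High-level flow of the four steps: gather the personal identification
    information, check it against the authenticated information, run living
    body detection, and combine both results."""
    info = get_personal_id_info()
    info_result = is_authenticated_info(info)      # information authentication result
    liveness_result = run_liveness_detection()     # living body detection result
    return info_result and liveness_result         # identity regarded as legal only if both hold
```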
Furthermore, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which program instructions, when being executed by a computer or a processor, are for performing the respective steps of the identity authentication method of the embodiment of the present invention, and for realizing the respective modules in the identity authentication device according to the embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media.
In one embodiment, the computer program instructions, when executed by a computer or processor, may cause the computer or processor to implement the respective functional modules of the authentication device according to the embodiments of the present invention and/or may perform the authentication method according to the embodiments of the present invention.
In one embodiment, the computer program instructions, when executed, are configured to perform the steps of: acquiring personal identification information of a person to be authenticated; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on personnel to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
The modules in the authentication system according to the embodiment of the present invention may be implemented by a processor of an electronic device implementing authentication according to the embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer readable storage medium of a computer program product according to the embodiment of the present invention are run by a computer.
According to another aspect of the present invention, there is provided an identity authentication device, including an information acquisition device, a processor, and a memory, wherein the information acquisition device is configured to receive initial information of a person to be authenticated; the memory stores computer program instructions which, when executed by the processor, are used to perform the following steps: acquiring personal identification information of the person to be authenticated, wherein the personal identification information is the initial information or is obtained based on the initial information; judging whether the personal identification information is authenticated information to obtain an information authentication result; performing living body detection on the person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal at least according to the information authentication result and the living body detection result.
Illustratively, the information acquisition device comprises an image acquisition device and/or an input device.
Illustratively, the identity authentication device may further include: the image acquisition device, used for acquiring a face image of the person to be authenticated; the processor is further used for performing living body detection using the face image so as to obtain the living body detection result.
Illustratively, the identity authentication device may further include: a light source for emitting detection light to the face of the person to be authenticated; the image acquisition device is further used for acquiring, as the face image, one or more illumination images of the face of the person to be authenticated under irradiation of the detection light; the processor is further used for determining whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the one or more illumination images so as to obtain an illumination liveness verification result, and for determining whether the person to be authenticated passes the living body verification based at least on the illumination liveness verification result so as to obtain the living body detection result.
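Purely as an illustration of how a randomly changing detection light could be generated (assuming, as in the claims, that the display screen is divided into light-emitting areas and each area is given a random RGB value within a preset range), one possible sketch is:

```python
import random

RGB_MIN, RGB_MAX = 64, 255  # assumed preset RGB value range


def next_light_pattern(num_areas):
    """Produce one frame of the detection light: a random RGB colour for each
    light-emitting area of the display screen. Calling this repeatedly yields
    a randomly changing light pattern over the course of the illumination."""
    return [
        (random.randint(RGB_MIN, RGB_MAX),
         random.randint(RGB_MIN, RGB_MAX),
         random.randint(RGB_MIN, RGB_MAX))
        for _ in range(num_areas)
    ]
```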
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising: the image acquisition device is used for acquiring personal identification information of a person to be authenticated; and the processing device is used for judging whether the personal identification information is authenticated information to obtain an information authentication result, performing living body detection on the personnel to be authenticated to obtain a living body detection result, and determining whether the identity of the personnel to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
The image acquisition device is also used for acquiring face images of personnel to be authenticated; the processing device is further used for performing living body detection by using the human face image so as to obtain a living body detection result.
Illustratively, the identity authentication device may further include: a light source for emitting detection light to the face of the person to be authenticated; the image acquisition device is further used for acquiring, as the face image, one or more illumination images of the face of the person to be authenticated under irradiation of the detection light; the processing device is further used for determining whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the one or more illumination images so as to obtain an illumination liveness verification result, and for determining whether the person to be authenticated passes the living body verification based at least on the illumination liveness verification result so as to obtain the living body detection result.
Illustratively, the identity authentication method (200, 300 or 400) may be implemented on a standalone identity authentication device (e.g., a mobile terminal). The input device may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. Where the personal identification information is one or more of a certificate number, a name, a certificate face image, a face image collected on site, or a conversion value of any of these, the input device can receive information entered by the user to obtain the required personal identification information. Where the personal identification information is the certificate information in a certificate image, the image acquisition device can be used to collect the certificate image of the person to be authenticated to obtain the required personal identification information. Of course, personal information such as the certificate number, name, and certificate face image may also be extracted from the certificate image, and conversion values of this personal information may be obtained by a predetermined algorithm; that is, personal identification information such as the certificate number, the name, the certificate face image, and their conversion values may be obtained through the image acquisition device. The identity authentication device may also include both an input device and an image acquisition device, and acquire the personal identification information by combining the two acquisition modes.
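The "predetermined algorithm" for producing conversion values is not specified; the sketch below uses a cryptographic hash purely as a stand-in to show where such a conversion would sit.

```python
import hashlib


def conversion_value(text):
    """Illustrative stand-in for the unspecified predetermined algorithm:
    derive a stable conversion value from a certificate number or name."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()
```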
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising: the input device is used for receiving personal identification information of a person to be authenticated; a transmission means for transmitting the personal identification information to the server, and receiving, from the server, authentication information on whether the identity of the person to be authenticated is legal or not, which is obtained by the server by: and judging whether the personal identification information is authenticated information to obtain an information authentication result, performing living body detection on the personnel to be authenticated to obtain a living body detection result, and determining whether the identity of the personnel to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
Illustratively, the identity authentication device further comprises: the image acquisition device is used for acquiring face images of personnel to be authenticated; the transmission device is further used for transmitting the face image to a server, wherein the server performs living body detection by using the face image to obtain a living body detection result.
Illustratively, the identity authentication device may further include: a light source for emitting detection light to the face of the person to be authenticated; the image acquisition device is further used for acquiring, as the face image, one or more illumination images of the face of the person to be authenticated under irradiation of the detection light; the transmission device is further used for sending the one or more illumination images to the server; wherein the server performs the living body detection by: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images so as to obtain an illumination liveness verification result, and determining whether the person to be authenticated passes the living body verification based at least on the illumination liveness verification result so as to obtain the living body detection result.
According to another aspect of the present invention, there is provided an identity authentication apparatus comprising: the image acquisition device is used for acquiring personal identification information of a person to be authenticated; a transmission means for transmitting the personal identification information to the server, and receiving, from the server, authentication information on whether the identity of the person to be authenticated is legal or not, which is obtained by the server by: and judging whether the personal identification information is authenticated information to obtain an information authentication result, performing living body detection on the personnel to be authenticated to obtain a living body detection result, and determining whether the identity of the personnel to be authenticated is legal or not at least according to the information authentication result and the living body detection result.
The image acquisition device is also used for acquiring a face image of the person to be authenticated; the transmission device is further used for sending the face image to the server, wherein the server performs living body detection using the face image so as to obtain the living body detection result.
Illustratively, the identity authentication device may further include: a light source for emitting detection light to the face of the person to be authenticated; the image acquisition device is further used for acquiring, as the face image, one or more illumination images of the face of the person to be authenticated under irradiation of the detection light; the transmission device is further used for sending the one or more illumination images to the server; wherein the server performs the living body detection by: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images so as to obtain an illumination liveness verification result, and determining whether the person to be authenticated passes the living body verification based at least on the illumination liveness verification result so as to obtain the living body detection result.
Illustratively, the identity authentication method (200, 300 or 400) may be implemented on separate devices, for example on a client and a server. In this case, the client may comprise an input device and/or an image acquisition device as well as a transmission device. Optionally, the client uploads the acquired personal identification information and the collected face images (including the illumination images, the action images, the real-time images, etc.) to the server, and the server performs the identity authentication. After receiving the authentication information sent by the server, the client carries out the corresponding operation according to whether the authentication passed or failed, for example outputting information indicating that the identity authentication passed or failed, and allowing or denying the user's subsequent service operations.
Illustratively, the transmitting device of the client may transmit the personal identification information and the face image to the server through a network or other technology, and receive the authentication information from the server through the network or other technology. For example, the network may be the Internet, a wireless local area network, a mobile communication network, etc., and the other technologies may include Bluetooth communication, infrared communication, etc., for example. For example, the server may be a general-purpose server or a dedicated server, may be a virtual server or a cloud server, or the like. The transmission device of the client (or the server) may include a modem, a network adapter, a bluetooth transceiver unit, an infrared transceiver unit, or the like, and may also perform operations such as encoding, decoding, etc. on the transmitted or received information.
Because most of the authentication is completed on the server, the computing resources of the client's processing device are saved; this lowers the performance requirements on the client and the manufacturing cost of the identity authentication device, and can also improve the user experience.
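A hypothetical client-side exchange with the server could look like the following; the endpoint, payload layout and response field are assumptions and are not defined by the patent.

```python
import requests  # third-party HTTP client

SERVER_URL = "https://example.com/api/authenticate"  # hypothetical endpoint


def authenticate_via_server(personal_id_info, face_image_bytes):
    """Client side of the split deployment: upload the personal identification
    information and a face image, and let the server perform the information
    check and the living body detection."""
    response = requests.post(
        SERVER_URL,
        data={"personal_id_info": personal_id_info},
        files={"face_image": ("face.jpg", face_image_bytes, "image/jpeg")},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("identity_legal", False)  # assumed response field
```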
According to another aspect of the present invention, there is provided an identity authentication apparatus including a transmission apparatus and a processing apparatus, wherein the transmission apparatus is configured to receive personal identification information of a person to be authenticated from a client, and transmit authentication information on whether the identity of the person to be authenticated is legal to the client; the processing device is used for judging whether the personal identification information is authenticated information to obtain an information authentication result, performing living body detection on the personnel to be authenticated to obtain a living body detection result, and determining whether the identity of the personnel to be authenticated is legal at least according to the information authentication result and the living body detection result to obtain the authentication information.
Illustratively, the transmission device is further configured to receive a face image of a person to be authenticated; the processing device is further used for performing living body detection by using the human face image so as to obtain a living body detection result.
The transmission device is further configured to receive one or more illumination images of a person to be authenticated from the client, where the one or more illumination images are acquired for a face of the person to be authenticated under irradiation of the detection light; the processing device is further used for determining whether the face of the person to be authenticated belongs to a living body based on light reflection characteristics of the face of the person to be authenticated in the one or more illumination images so as to obtain an illumination liveness verification result, and for determining whether the person to be authenticated passes the living body verification based at least on the illumination liveness verification result so as to obtain the living body detection result.
As described above, the identity authentication method (200, 300 or 400) may be implemented on separate devices, such as on a client and a server. This embodiment describes one implementation of at least a portion of the identity authentication method implemented on a server.
The identity authentication method and device according to the embodiments of the present invention combine the authenticated-information judgment with living body detection to determine whether the identity of the person to be authenticated is legal. Compared with the conventional approach of authenticating identity based on a password or certificate alone, this yields more accurate authentication results, improves the security of user authentication, and effectively safeguards users' rights and interests. The method and device can be well applied in various fields involving identity authentication, such as electronic commerce, mobile payment and banking.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the invention and aid in understanding one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the invention. However, the method of the present invention should not be construed as reflecting the following intent: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in an authentication device according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing description is merely of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of variations or substitutions within the technical scope disclosed by the present invention, and such variations or substitutions shall fall within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (44)

1. An identity authentication method, comprising:
acquiring personal identification information of a person to be authenticated;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the personnel to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
the performing living body detection on the person to be authenticated to obtain a living body detection result comprises the following steps:
step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under irradiation of detection light; wherein the detection light is emitted by a display screen, the detection light being obtained by: dynamically changing the pattern of light emitted by the display screen by changing the color of pixels and the position of the light emitting area on the display screen so as to emit the detection light to the face of the person to be authenticated; the pattern of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the pattern of the detection light is changed randomly;
step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; and
step S130: determining whether the person to be authenticated passes the living body verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
2. The identity authentication method according to claim 1, wherein the pattern of the detection light is changed between every two consecutive moments in time during irradiation of the face of the person to be authenticated.
3. The identity authentication method according to claim 1 or 2, wherein the pattern of the detection light further includes at least one of a color of the detection light, a position of a light emitting region, an intensity of the detection light, an irradiation angle of the detection light, a wavelength of the detection light, and a frequency of the detection light.
4. The identity authentication method of claim 1, wherein the detection light is obtained by dynamically changing the color and/or position of light emitted toward the face of the person to be authenticated.
5. The identity authentication method of claim 1, wherein the display screen is divided into a plurality of light emitting areas, and dynamically changing the pattern of light emitted by the display screen by changing the color of pixels and/or the position of the light emitting areas on the display screen comprises:
for each light-emitting area on the display screen, randomly selecting, each time, one RGB value within a preset RGB value range as the color value of that light-emitting area for display.
6. The identity authentication method of claim 1, wherein the performing the living body detection on the person to be authenticated to obtain a living body detection result comprises:
step S140: outputting an action instruction, wherein the action instruction is used for instructing the person to be authenticated to perform a corresponding action;
step S150: acquiring a plurality of action images acquired aiming at the face of the person to be authenticated;
step S160: detecting actions performed by the person to be authenticated based on the plurality of action images; and
step S170: determining whether the face of the person to be authenticated belongs to a living body according to the action detection result and the action instruction, so as to obtain an action liveness verification result;
the step S130 includes:
and determining whether the person to be authenticated passes the living body verification based on the illumination liveness verification result and the action liveness verification result, so as to obtain the living body detection result.
7. The identity authentication method as claimed in claim 6, wherein the step S170 comprises:
If an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is detected based on the plurality of action images acquired within a period of not more than a first preset time, it is determined that the face of the person to be authenticated belongs to a living body, and if an action performed by the person to be authenticated, which is consistent with the action indicated by the action instruction, is not detected based on the plurality of action images acquired within the first preset time, it is determined that the face of the person to be authenticated does not belong to a living body.
8. The identity authentication method of claim 6, wherein the identity authentication method further comprises:
counting once in each execution of the steps S140 to S170 to obtain the number of action verifications;
after the step S170, the identity authentication method further includes:
outputting first error information if the action liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, and judging whether the number of action verifications reaches a first times threshold; if so, turning to step S130, and if not, returning to step S140, or returning to step S110 if step S110 is executed before step S140, wherein the first error information is used for prompting the person to be authenticated that the liveness verification has failed.
9. The identity authentication method of claim 1, wherein prior to the step S110, the identity authentication method further comprises:
step S108: judging whether the image acquisition condition of the person to be authenticated meets a preset requirement, and if the image acquisition condition meets the preset requirement, turning to the step S110, wherein the image acquisition condition comprises the position of the person to be authenticated in an image acquisition area of an image acquisition device and/or the relative angle of the person to be authenticated and the image acquisition device.
10. The authentication method according to claim 9, wherein before the step S108 or simultaneously with the step S108, the authentication method further comprises:
step S106: and outputting first prompt information, wherein the first prompt information is used for prompting the person to be authenticated to face the image acquisition device and approach the image acquisition device.
11. The identity authentication method as claimed in claim 10, wherein the step S106 comprises: and outputting the first prompt information through one or more of a voice form, an image form and a text form.
12. The identity authentication method as claimed in claim 9, wherein the step S108 comprises:
Acquiring a real-time image acquired aiming at the face of the person to be authenticated;
outputting a preset area for calibrating the image acquisition condition in real time and displaying a face area in the real-time image; and
judging whether the image acquisition condition meets the preset requirement according to the face area: if the face area is located in the preset area and the proportion of the face area in the real-time image is larger than a first preset proportion, determining that the image acquisition condition meets the preset requirement; and if the face area is not located in the preset area or the proportion of the face area in the real-time image is not larger than the first preset proportion, determining that the image acquisition condition does not meet the preset requirement.
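The position-and-proportion test of claim 12 can be sketched as follows; the bounding-box representation and the 0.15 ratio are illustrative assumptions, not values taken from the patent.

def acquisition_condition_ok(face_box, preset_box, image_shape, first_preset_ratio=0.15):
    """Claim-12 style check: the face region must lie inside the preset area
    and occupy a large enough share of the live image.

    Boxes are (x1, y1, x2, y2) in pixels; image_shape is (height, width, ...).
    """
    fx1, fy1, fx2, fy2 = face_box
    px1, py1, px2, py2 = preset_box
    height, width = image_shape[:2]

    inside = fx1 >= px1 and fy1 >= py1 and fx2 <= px2 and fy2 <= py2
    face_area = max(0, fx2 - fx1) * max(0, fy2 - fy1)
    ratio = face_area / float(width * height)
    return inside and ratio > first_preset_ratio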
13. The identity authentication method of claim 12, wherein the identity authentication method further comprises:
if the proportion of the face area in the real-time image is not greater than the first preset proportion, outputting first acquisition prompt information in real time to prompt the person to be authenticated to move closer to the image acquisition device.
14. The identity authentication method as claimed in claim 9, wherein the step S108 comprises:
Acquiring a real-time image acquired aiming at the face of the person to be authenticated;
outputting a preset area for calibrating the image acquisition condition in real time and displaying a face area in the real-time image; and
judging whether the image acquisition condition meets the preset requirement according to the face area,
if the face area is located in a preset area and the proportion of the face area in the preset area is larger than a second preset proportion, determining that the image acquisition condition meets the preset requirement, and if the face area is not located in the preset area or the proportion of the face area in the preset area is not larger than the second preset proportion, determining that the image acquisition condition does not meet the preset requirement.
15. The identity authentication method of claim 14, wherein the identity authentication method further comprises:
if the proportion of the face area in the preset area is not greater than the second preset proportion, outputting second acquisition prompt information in real time to prompt the person to be authenticated to move closer to the image acquisition device.
16. The identity authentication method of claim 12 or 14, wherein the identity authentication method further comprises:
Judging the relative position relationship between the face area and the preset area in real time; and
outputting third acquisition prompt information in real time based on the relative position relation between the face area and the preset area, so as to prompt the person to be authenticated to change the relative position relation between the person to be authenticated and the image acquisition device, such that the face area approaches the preset area.
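One possible way to derive the third acquisition prompt information of claim 16 is sketched below; the pixel tolerance and the wording of the hints are assumptions, and the left/right wording would need swapping for a mirrored selfie preview.

def movement_hint(face_box, preset_box, tolerance=10):
    """Derive a prompt that moves the face area toward the preset area.

    Coordinates are image pixels; wording assumes an un-mirrored preview.
    """
    face_cx = (face_box[0] + face_box[2]) / 2.0
    face_cy = (face_box[1] + face_box[3]) / 2.0
    preset_cx = (preset_box[0] + preset_box[2]) / 2.0
    preset_cy = (preset_box[1] + preset_box[3]) / 2.0

    hints = []
    if face_cx < preset_cx - tolerance:
        hints.append("move right")
    elif face_cx > preset_cx + tolerance:
        hints.append("move left")
    if face_cy < preset_cy - tolerance:
        hints.append("move down")
    elif face_cy > preset_cy + tolerance:
        hints.append("move up")
    return ", ".join(hints) if hints else "hold still"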
17. The identity authentication method as claimed in claim 9, wherein the step S108 comprises:
acquiring attitude information of the image acquisition device; and
judging, according to the attitude information, whether the image acquisition device is in a vertically placed state; if so, determining that the image acquisition condition meets the preset requirement, and otherwise, determining that the image acquisition condition does not meet the preset requirement.
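A rough sketch of the verticality test of claim 17, assuming the attitude information takes the form of an accelerometer gravity reading; the 15-degree tolerance and the axis convention are assumptions.

import math

def is_vertically_placed(accel_xyz, tolerance_deg=15.0):
    """Decide from a gravity reading (m/s^2) whether the device is upright.

    Assumes the device y axis runs along the screen's long edge, so gravity
    is mostly along y when the device is held vertically.
    """
    ax, ay, az = accel_xyz
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0.0:
        return False
    tilt_deg = math.degrees(math.acos(min(1.0, abs(ay) / g)))
    return tilt_deg <= tolerance_deg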
18. The identity authentication method of claim 9, wherein the identity authentication method further comprises:
incrementing a count each time the steps S110 to S120 are executed, so as to obtain a number of illumination verifications;
after the step S120, the identity authentication method further includes:
if the illumination liveness verification result indicates that the face of the person to be authenticated does not belong to a living body, outputting second error information and judging whether the number of illumination verifications reaches a second number threshold; if it reaches the second number threshold, turning to step S130, and if it does not, returning to step S108 or returning to step S110, wherein the second error information is used for prompting that the liveness verification of the person to be authenticated has failed.
19. The identity authentication method of claim 1, wherein before the step S110 or during the execution of the steps S110 and S120, the identity authentication method further comprises:
outputting second prompt information, wherein the second prompt information is used for prompting the person to be authenticated to remain motionless within a second preset time.
20. The authentication method of claim 19, wherein the second hint information is countdown information corresponding to the second preset time.
21. The identity authentication method of claim 1, wherein the personal identification information is one or more of a certificate number, a name, a certificate face, an on-site captured face, a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of an on-site captured face.
22. The authentication method according to claim 1, wherein before the acquisition of the personal identification information of the person to be authenticated, the authentication method further comprises:
outputting indication information for instructing the person to be authenticated to provide person information of a preset type;
the personal identification information is the person information provided by the person to be authenticated, or is obtained based on the person information.
23. The authentication method of claim 1, wherein the personal identification information is one or more of a transformed value of a certificate number, a transformed value of a name, a transformed value of a certificate face, and a transformed value of an on-site captured face,
the step of obtaining the personal identification information of the personnel to be authenticated comprises the following steps:
acquiring initial information of the person to be authenticated, wherein the initial information is one or more of a certificate number, a name, a certificate face, and an on-site captured face; and
the initial information is transformed based on a predetermined algorithm to obtain a transformed value of the initial information as the personal identification information.
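Claim 23 leaves the "predetermined algorithm" open; a salted SHA-256 hash is shown below purely as one plausible instantiation, not as the algorithm used by the patent.

import hashlib

def transform_identification(value: str, salt: str = "example-salt") -> str:
    """Derive a transformed value from a certificate number, name, or similar.

    A salted SHA-256 digest is only an example of a 'predetermined algorithm';
    the patent does not fix the algorithm or the salt.
    """
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

# Example: transform_identification("110101199001011234")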
24. The identity authentication method of claim 1, wherein the acquiring personal identification information of the person to be authenticated comprises:
acquiring a certificate image of the person to be authenticated, wherein the information authentication result is a certificate authentication result;
the performing living body detection on the person to be authenticated to obtain a living body detection result comprises the following steps:
acquiring a face image of the person to be authenticated; and
and performing living body detection by using the face image to obtain a living body detection result.
25. The identity authentication method of claim 24, wherein before the determining whether the identity of the person to be authenticated is legal based at least on the information authentication result and the living body detection result, the identity authentication method further comprises:
Executing additional judging operation by using the certificate image and/or the face image so as to obtain an additional judging result;
the determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result comprises the following steps:
and determining whether the identity of the person to be authenticated is legal or not according to the certificate authentication result, the living body detection result and the additional judgment result.
26. The identity authentication method of claim 25, wherein the additional judging operation includes a certificate authenticity judging operation and/or a face consistency judging operation, and the additional judging result includes a certificate authenticity judging result and/or a face consistency judging result,
the certificate authenticity judging operation comprises the following steps: judging whether the certificate in the certificate image is a true certificate or not so as to obtain a certificate authenticity judgment result;
the face consistency judging operation comprises the following steps: acquiring a certificate face of the person to be authenticated according to the certificate image; and comparing the certificate face of the person to be authenticated with the face in the face image to obtain the face consistency judging result.
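The face consistency judgment of claim 26 could, for example, compare face embeddings; the sketch below assumes a hypothetical embed model and an illustrative 0.6 cosine-similarity threshold.

import numpy as np

def face_consistency(certificate_face, live_face, embed, threshold=0.6):
    """Compare a certificate face with a live face via a face-embedding model.

    embed: hypothetical callable mapping a face image to a feature vector.
    Returns (cosine_similarity, passes_threshold); 0.6 is an assumed cut-off.
    """
    a = np.asarray(embed(certificate_face), dtype=np.float32).ravel()
    b = np.asarray(embed(live_face), dtype=np.float32).ravel()
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return cosine, cosine >= threshold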
27. The identity authentication method of claim 26, wherein the acquiring the certificate face of the person to be authenticated from the certificate image comprises:
And detecting the face from the certificate image to obtain the certificate face of the person to be authenticated.
28. The identity authentication method of claim 26, wherein the acquiring the certificate face of the person to be authenticated from the certificate image comprises:
performing character recognition on the certificate image to obtain character information in the certificate image;
searching matched certificate information from an authenticated certificate information database based on the text information in the certificate image; and
and determining the searched certificate face in the matched certificate information as the certificate face of the person to be authenticated.
29. The identity authentication method of claim 26, wherein the determining whether the certificate in the certificate image is a true certificate to obtain the certificate authenticity judgment result comprises:
extracting image features of the certificate image; and
inputting the image characteristics into a trained certificate classifier to obtain a certificate authenticity judgment result;
and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is a true certificate.
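The trained certificate classifier of claim 29 is not specified further; the sketch below uses logistic regression over pre-extracted image features only as a stand-in, returning the genuineness confidence mentioned in the claim.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_certificate_classifier(feature_vectors, labels):
    """Fit a binary genuine(1)/forged(0) classifier on pre-extracted features.

    Logistic regression is only an illustrative stand-in for the 'trained
    certificate classifier' named in the claim.
    """
    classifier = LogisticRegression(max_iter=1000)
    classifier.fit(np.asarray(feature_vectors), np.asarray(labels))
    return classifier

def genuineness_confidence(classifier, feature_vector):
    """Return the confidence that the certificate in the image is genuine."""
    features = np.asarray(feature_vector, dtype=np.float64).reshape(1, -1)
    return float(classifier.predict_proba(features)[0, 1])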
30. The identity authentication method of claim 26, wherein the determining whether the certificate in the certificate image is a true certificate to obtain the certificate authenticity judgment result comprises:
Identifying an image block containing identification information of the certificate from the certificate image; and
identifying the certificate identification information in the image block containing the certificate identification information to obtain the certificate authenticity judgment result;
and the certificate authenticity judgment result is the confidence that the certificate in the certificate image is a true certificate.
31. The identity authentication method of claim 25, wherein each of the credential authentication result, the living body detection result, and the additional judgment result has a respective weight coefficient in the process of determining whether the identity of the person to be authenticated is legal based on the credential authentication result, the living body detection result, and the additional judgment result.
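Claim 31 does not fix the weights; the sketch below shows one straightforward weighted fusion, with assumed weights and an assumed decision threshold, over confidences in [0, 1].

def identity_is_legal(certificate_score, liveness_score, extra_score,
                      weights=(0.4, 0.4, 0.2), threshold=0.8):
    """Weighted fusion of the three results, each given as a confidence in [0, 1].

    The weights and the 0.8 decision threshold are illustrative assumptions.
    """
    w_cert, w_live, w_extra = weights
    fused = w_cert * certificate_score + w_live * liveness_score + w_extra * extra_score
    return fused >= threshold, fused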
32. The identity authentication method of claim 24, wherein the acquiring a certificate image of the person to be authenticated comprises:
acquiring a pre-shot image which is acquired in real time for the certificate of the person to be authenticated under the current shooting condition;
evaluating an image attribute of the pre-shot image in real time;
when the evaluation value of the image attribute of the pre-shot image is smaller than a preset evaluation value threshold, generating pre-shot prompt information according to the image attribute of the pre-shot image, wherein the pre-shot prompt information is used for prompting the person to be authenticated to adjust the shooting condition of the certificate of the person to be authenticated; and
when the evaluation value of the image attribute of the pre-shot image is equal to or larger than the preset evaluation value threshold, saving the pre-shot image as the certificate image.
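The image-attribute evaluation of claim 32 is left open; the sketch below scores sharpness and brightness with OpenCV as one illustrative choice, with assumed thresholds.

import cv2

def evaluate_preshot(image_bgr, blur_threshold=100.0, min_brightness=60.0):
    """Score a pre-shot certificate image and return prompts when it is poor.

    Sharpness is the variance of the Laplacian, brightness the mean grey level;
    both thresholds are assumed example values.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = float(cv2.Laplacian(gray, cv2.CV_64F).var())
    brightness = float(gray.mean())

    prompts = []
    if sharpness < blur_threshold:
        prompts.append("hold the certificate steady to avoid blur")
    if brightness < min_brightness:
        prompts.append("move to a brighter place")
    return sharpness, brightness, prompts  # no prompts: save as the certificate image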
33. The identity authentication method of claim 24, wherein the determining whether the personal identification information is authenticated information to obtain an information authentication result comprises:
performing character recognition on the certificate image to obtain character information in the certificate image; and
searching in an authenticated certificate information database based on the text information in the certificate image to obtain the certificate authentication result;
and the certificate authentication result is the confidence that the certificate information in the certificate image is authenticated certificate information.
34. The identity authentication method of claim 28 or 33, wherein the performing text recognition on the certificate image to obtain text information in the certificate image comprises:
positioning characters in the certificate image to obtain an image block containing the characters; and
and identifying the characters in the image block containing the characters to obtain character information in the certificate image.
35. The authentication method of claim 34, wherein prior to said identifying text in said text-containing image block, the authentication method further comprises:
and correcting the image block containing the characters into a horizontal state.
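One common way to correct a text block into a horizontal state (claim 35) is a minAreaRect-based deskew; the sketch below is illustrative only, and the angle convention of cv2.minAreaRect differs across OpenCV versions, so the correction may need adjusting.

import cv2
import numpy as np

def deskew_text_block(block_bgr):
    """Rotate a text-containing image block so that the text runs horizontally.

    Classic minAreaRect-based deskew; real document pipelines may use more
    robust angle estimation.
    """
    gray = cv2.cvtColor(block_bgr, cv2.COLOR_BGR2GRAY)
    binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
    points = np.column_stack(np.where(binary > 0)).astype(np.float32)
    angle = cv2.minAreaRect(points)[-1]
    if angle < -45:
        angle += 90
    height, width = block_bgr.shape[:2]
    matrix = cv2.getRotationMatrix2D((width / 2.0, height / 2.0), angle, 1.0)
    return cv2.warpAffine(block_bgr, matrix, (width, height),
                          flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)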
36. The identity authentication method of claim 34, wherein, after the identifying of the text in the text-containing image block to obtain the text information in the certificate image, the performing of text recognition on the certificate image to obtain text information in the certificate image further comprises:
outputting the text information in the certificate image for viewing by a user;
receiving text correction information input by the user;
comparing the text to be corrected, indicated by the text correction information, with the corresponding text in the text information in the certificate image; and
if the difference between the text to be corrected, indicated by the text correction information, and the corresponding text in the text information in the certificate image is smaller than a preset difference threshold, updating the text information in the certificate image by using the text correction information.
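The difference test of claim 36 could be realized with any string-distance measure; the sketch below uses difflib's similarity ratio and an assumed 0.3 threshold.

import difflib

def accept_text_correction(recognized_text: str, corrected_text: str,
                           difference_threshold: float = 0.3) -> bool:
    """Accept the user's correction only when it stays close to the OCR output.

    difflib's similarity ratio is used as an illustrative difference measure;
    the 0.3 threshold is an assumption.
    """
    similarity = difflib.SequenceMatcher(None, recognized_text, corrected_text).ratio()
    return (1.0 - similarity) < difference_threshold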
37. An identity authentication device comprising a processor and a memory, wherein the memory has stored therein computer program instructions which, when executed by the processor, are adapted to carry out the steps of:
Acquiring personal identification information of a person to be authenticated;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which is performed by the processor when the computer program instructions are executed, comprises:
step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of detection light; wherein the detection light is emitted by the display screen, the detection light being obtained by: dynamically changing the mode of light emitted by the display screen by changing the color of the pixel and the position of the light emitting area on the display screen so as to emit the detection light to the face of the person to be authenticated; the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly;
step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result;
step S130: determining whether the person to be authenticated passes the liveness verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
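The claims do not disclose how the light reflection characteristics of step S120 are analyzed; the correlation heuristic below is only a rough illustration of the idea that a real, reflective face's color response should track the randomly emitted screen colors.

import numpy as np

def illumination_liveness_score(face_crops, emitted_colors):
    """Rough illustration of step S120: does the face's average colour track
    the randomly emitted screen colours?

    face_crops: list of HxWx3 arrays, one face crop per emitted colour.
    emitted_colors: list of (r, g, b) tuples actually displayed.
    Real systems would use a learned reflection model; this is only a heuristic.
    """
    observed = np.array([crop.reshape(-1, 3).mean(axis=0) for crop in face_crops])
    emitted = np.array(emitted_colors, dtype=np.float32)

    channel_correlations = []
    for channel in range(3):
        if observed[:, channel].std() < 1e-6 or emitted[:, channel].std() < 1e-6:
            channel_correlations.append(0.0)
        else:
            channel_correlations.append(
                float(np.corrcoef(emitted[:, channel], observed[:, channel])[0, 1]))
    return float(np.mean(channel_correlations))  # higher means more like a real face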
38. A storage medium having stored thereon program instructions, which when executed, are adapted to perform the steps of:
acquiring personal identification information of a person to be authenticated;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which the program instructions are adapted to perform when run, comprises:
step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of detection light; wherein the detection light is emitted by the display screen, the detection light being obtained by: dynamically changing the mode of light emitted by the display screen by changing the color of the pixel and the position of the light emitting area on the display screen so as to emit the detection light to the face of the person to be authenticated; the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly;
step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result;
step S130: determining whether the person to be authenticated passes the liveness verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
39. An identity authentication device comprises an information acquisition device, a processor and a memory, wherein,
the information acquisition device is used for acquiring initial information of a person to be authenticated;
the memory has stored therein computer program instructions which, when executed by the processor, are adapted to perform the steps of:
acquiring personal identification information of the person to be authenticated, wherein the personal identification information is the initial information or is acquired based on the initial information;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which is performed by the processor when the computer program instructions are executed, comprises:
step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of detection light; wherein the detection light is emitted by the display screen, the detection light being obtained by: dynamically changing the mode of light emitted by the display screen by changing the color of the pixel and the position of the light emitting area on the display screen so as to emit the detection light to the face of the person to be authenticated; the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly;
step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result;
step S130: determining whether the person to be authenticated passes the liveness verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
40. The authentication device of claim 39 wherein the information gathering device comprises an image gathering device and/or an input device.
41. An identity authentication device comprising:
the information acquisition device is used for acquiring initial information of the personnel to be authenticated; and
a transmission device, configured to send the initial information to a server, and receive, from the server, authentication information about whether the identity of the person to be authenticated is legal, which is obtained by the server by: acquiring personal identification information of the person to be authenticated, wherein the personal identification information is the initial information or is acquired based on the initial information; judging whether the personal identification information is authenticated information or not to obtain an information authentication result; performing living body detection on the person to be authenticated to obtain a living body detection result; and determining whether the identity of the person to be authenticated is legal or not according to at least the information authentication result and the living body detection result;
the server performs living body detection on the person to be authenticated to obtain a living body detection result, including:
step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of detection light; wherein the detection light is emitted by the display screen, the detection light being obtained by: dynamically changing the mode of light emitted by the display screen by changing the color of the pixel and the position of the light emitting area on the display screen so as to emit the detection light to the face of the person to be authenticated; the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly;
step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; and
step S130: determining whether the person to be authenticated passes the liveness verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
42. The authentication device of claim 41, wherein the information gathering device comprises an image gathering device and/or an input device.
43. An identity authentication device comprises a transmission device, a processor and a memory, wherein,
the transmission device is used for receiving initial information of a person to be authenticated from a client and sending authentication information about whether the identity of the person to be authenticated is legal or not to the client;
the memory has stored therein computer program instructions which, when executed by the processor, are adapted to perform the steps of:
acquiring personal identification information of the person to be authenticated, wherein the personal identification information is the initial information or is acquired based on the initial information;
judging whether the personal identification information is authenticated information or not to obtain an information authentication result;
performing living body detection on the person to be authenticated to obtain a living body detection result; and
determining whether the identity of the person to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
the step of performing living body detection on the person to be authenticated to obtain a living body detection result, which is performed by the processor when the computer program instructions are executed, comprises:
step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of detection light; wherein the detection light is emitted by the display screen, the detection light being obtained by: dynamically changing the mode of light emitted by the display screen by changing the color of the pixel and the position of the light emitting area on the display screen so as to emit the detection light to the face of the person to be authenticated; the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly;
step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; and
step S130: determining whether the person to be authenticated passes the liveness verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
44. An identity authentication device comprising:
the information acquisition module is used for acquiring personal identification information of a person to be authenticated;
the authenticated information judging module is used for judging whether the personal identification information is authenticated information or not so as to obtain an information authentication result;
the living body detection module is used for performing living body detection on the person to be authenticated so as to obtain a living body detection result; and
the identity determining module is used for determining whether the identity of the personnel to be authenticated is legal or not at least according to the information authentication result and the living body detection result;
the performing, by the living body detection module, of living body detection on the person to be authenticated to obtain the living body detection result comprises:
step S110: acquiring one or more illumination images acquired for the face of the person to be authenticated under the irradiation of detection light; wherein the detection light is emitted by the display screen, the detection light being obtained by: dynamically changing the mode of light emitted by the display screen by changing the color of the pixel and the position of the light emitting area on the display screen so as to emit the detection light to the face of the person to be authenticated; the mode of the detection light is changed at least once in the process of irradiating the face of the person to be authenticated, and the mode of the detection light is changed randomly;
step S120: determining whether the face of the person to be authenticated belongs to a living body based on the light reflection characteristics of the face of the person to be authenticated in the one or more illumination images, so as to obtain an illumination liveness verification result; and
step S130: determining whether the person to be authenticated passes the liveness verification based at least on the illumination liveness verification result, so as to obtain the living body detection result.
CN202011596232.XA 2017-03-17 2017-04-05 Identity authentication method and device and storage medium Active CN112651348B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011596232.XA CN112651348B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN2017101622458 2017-03-17
CN2017101616851 2017-03-17
CN201710161685 2017-03-17
CN201710162245 2017-03-17
CN201710218218.8A CN108573203B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium
CN202011596232.XA CN112651348B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201710218218.8A Division CN108573203B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium

Publications (2)

Publication Number Publication Date
CN112651348A CN112651348A (en) 2021-04-13
CN112651348B true CN112651348B (en) 2024-04-05

Family

ID=63575988

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201710218218.8A Active CN108573203B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium
CN202011596232.XA Active CN112651348B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium
CN201710218512.9A Active CN108629260B (en) 2017-03-17 2017-04-05 Living body verification method and apparatus, and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201710218218.8A Active CN108573203B (en) 2017-03-17 2017-04-05 Identity authentication method and device and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201710218512.9A Active CN108629260B (en) 2017-03-17 2017-04-05 Living body verification method and apparatus, and storage medium

Country Status (1)

Country Link
CN (3) CN108573203B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046696A (en) * 2018-10-12 2020-04-21 宏碁股份有限公司 Living body identification method and electronic device
US10885363B2 (en) * 2018-10-25 2021-01-05 Advanced New Technologies Co., Ltd. Spoof detection using structured light illumination
CN109448193A (en) * 2018-11-16 2019-03-08 广东电网有限责任公司 Identity information recognition methods and device
CN109766849B (en) * 2019-01-15 2023-06-20 深圳市凯广荣科技发展有限公司 Living body detection method, detection device and self-service terminal equipment
CN109618100B (en) * 2019-01-15 2020-11-27 北京旷视科技有限公司 Method, device and system for judging field shooting image
CN109993124B (en) * 2019-04-03 2023-07-14 深圳华付技术股份有限公司 Living body detection method and device based on video reflection and computer equipment
CN110135326B (en) * 2019-05-10 2021-10-29 中汇信息技术(上海)有限公司 Identity authentication method, electronic equipment and computer readable storage medium
CN112906741A (en) * 2019-05-21 2021-06-04 北京嘀嘀无限科技发展有限公司 Image processing method, image processing device, electronic equipment and storage medium
SG10201906721SA (en) 2019-07-19 2021-02-25 Nec Corp Method and system for chrominance-based face liveness detection
CN110443237B (en) * 2019-08-06 2023-06-30 北京旷视科技有限公司 Certificate identification method, device, electronic equipment and computer readable storage medium
CN110705350B (en) * 2019-08-27 2020-08-25 阿里巴巴集团控股有限公司 Certificate identification method and device
US10974537B2 (en) 2019-08-27 2021-04-13 Advanced New Technologies Co., Ltd. Method and apparatus for certificate identification
CN110909264B (en) * 2019-11-29 2023-08-29 北京三快在线科技有限公司 Information processing method, device, equipment and storage medium
CN111523438B (en) * 2020-04-20 2024-02-23 支付宝实验室(新加坡)有限公司 Living body identification method, terminal equipment and electronic equipment
CN111723655B (en) * 2020-05-12 2024-03-08 五八有限公司 Face image processing method, device, server, terminal, equipment and medium
CN111680616A (en) * 2020-06-04 2020-09-18 中国建设银行股份有限公司 Qualification authentication method, device, equipment and medium for subsidy retriever
CN111784498A (en) * 2020-06-22 2020-10-16 北京海益同展信息科技有限公司 Identity authentication method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1825915A (en) * 2005-02-23 2006-08-30 佳能株式会社 Image sensor device, living body authentication system using the device, and image acquiring method
CN105518714A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Vivo detection method and equipment, and computer program product

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001069520A2 (en) * 2000-03-10 2001-09-20 Ethentica, Inc. Biometric sensor
CN1209073C (en) * 2001-12-18 2005-07-06 中国科学院自动化研究所 Identity discriminating method based on living body iris
JP2006043029A (en) * 2004-08-03 2006-02-16 Matsushita Electric Ind Co Ltd Living body distinguishing device, and authenticating device using the same, and living body distinguishing method
CN102262727A (en) * 2011-06-24 2011-11-30 常州锐驰电子科技有限公司 Method for monitoring face image quality at client acquisition terminal in real time
US9082235B2 (en) * 2011-07-12 2015-07-14 Microsoft Technology Licensing, Llc Using facial data for device authentication or subject identification
CN103310179A (en) * 2012-03-06 2013-09-18 上海骏聿数码科技有限公司 Method and system for optimal attitude detection based on face recognition technology
CN102622588B (en) * 2012-03-08 2013-10-09 无锡中科奥森科技有限公司 Dual-certification face anti-counterfeit method and device
CN103440479B (en) * 2013-08-29 2016-12-28 湖北微模式科技发展有限公司 A kind of method and system for detecting living body human face
CN104104867B (en) * 2014-04-28 2017-12-29 三星电子(中国)研发中心 The method and apparatus that control camera device is shot
CN104184956A (en) * 2014-08-29 2014-12-03 宇龙计算机通信科技(深圳)有限公司 Mobile communication terminal photographing method and system and mobile communication terminal
CN111898108B (en) * 2014-09-03 2024-06-04 创新先进技术有限公司 Identity authentication method, device, terminal and server
CN105989263A (en) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 Method for authenticating identities, method for opening accounts, devices and systems
US9985963B2 (en) * 2015-02-15 2018-05-29 Beijing Kuangshi Technology Co., Ltd. Method and system for authenticating liveness face, and computer program product thereof
CN104766063B (en) * 2015-04-08 2018-01-05 宁波大学 A kind of living body faces recognition methods
CN104881632A (en) * 2015-04-28 2015-09-02 南京邮电大学 Hyperspectral face recognition method
US9922238B2 (en) * 2015-06-25 2018-03-20 West Virginia University Apparatuses, systems, and methods for confirming identity
WO2017000116A1 (en) * 2015-06-29 2017-01-05 北京旷视科技有限公司 Living body detection method, living body detection system, and computer program product
CN105117695B (en) * 2015-08-18 2017-11-24 北京旷视科技有限公司 In vivo detection equipment and biopsy method
CN105069438B (en) * 2015-08-19 2019-03-12 南昌欧菲生物识别技术有限公司 The manufacturing method of finger print detection device
CN105389553A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Living body detection method and apparatus
CN105512632B (en) * 2015-12-09 2019-04-05 北京旷视科技有限公司 Biopsy method and device
CN105488495A (en) * 2016-01-05 2016-04-13 上海川织金融信息服务有限公司 Identity identification method and system based on combination of face characteristics and device fingerprint
CN105868693A (en) * 2016-03-21 2016-08-17 深圳市商汤科技有限公司 Identity authentication method and system
CN105912986B (en) * 2016-04-01 2019-06-07 北京旷视科技有限公司 A kind of biopsy method and system
CN106407914B (en) * 2016-08-31 2019-12-10 北京旷视科技有限公司 Method and device for detecting human face and remote teller machine system
CN106384237A (en) * 2016-08-31 2017-02-08 北京志光伯元科技有限公司 Member authentication-management method, device and system based on face identification

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1825915A (en) * 2005-02-23 2006-08-30 佳能株式会社 Image sensor device, living body authentication system using the device, and image acquiring method
CN105518714A (en) * 2015-06-30 2016-04-20 北京旷视科技有限公司 Vivo detection method and equipment, and computer program product

Also Published As

Publication number Publication date
CN112651348A (en) 2021-04-13
CN108629260B (en) 2022-02-08
CN108573203A (en) 2018-09-25
CN108573203B (en) 2021-01-26
CN108629260A (en) 2018-10-09

Similar Documents

Publication Publication Date Title
CN112651348B (en) Identity authentication method and device and storage medium
CN106778525B (en) Identity authentication method and device
US11188734B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US10339402B2 (en) Method and apparatus for liveness detection
CN110326001B (en) System and method for performing fingerprint-based user authentication using images captured with a mobile device
US9971920B2 (en) Spoof detection for biometric authentication
EP2680192B1 (en) Facial recognition
EP2680191A2 (en) Facial recognition
CN108573202A (en) Identity identifying method, device and system and terminal, server and storage medium
US20180046853A1 (en) Methods and systems for determining user liveness and verifying user identities
Li et al. An accurate and efficient user authentication mechanism on smart glasses based on iris recognition
US11948402B2 (en) Spoof detection using intraocular reflection correspondences
Gao 3D Face Recognition Using Multicomponent Feature Extraction from the Nasal Region and its Environs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant