CN106599772B - Living body verification method and device and identity authentication method and device - Google Patents

Info

Publication number
CN106599772B
CN106599772B
Authority
CN
China
Prior art keywords
living
image
verified
identity card
judgment
Prior art date
Application number
CN201610927708.0A
Other languages
Chinese (zh)
Other versions
CN106599772A (en
Inventor
何涛
曹志敏
Original Assignee
北京旷视科技有限公司
北京迈格威科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京旷视科技有限公司 and 北京迈格威科技有限公司
Priority to CN201610927708.0A priority Critical patent/CN106599772B/en
Publication of CN106599772A publication Critical patent/CN106599772A/en
Application granted granted Critical
Publication of CN106599772B publication Critical patent/CN106599772B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00885Biometric patterns not provided for under G06K9/00006, G06K9/00154, G06K9/00335, G06K9/00362, G06K9/00597; Biometric specific functions not specific to the kind of biometric
    • G06K9/00899Spoof detection
    • G06K9/00906Detection of body part being alive
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00221Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K9/00228Detection; Localisation; Normalisation

Abstract

A living body verification method and device, and an identity authentication method and device, are provided. The living body verification method includes: randomly generating a living body action instruction, wherein the living body action instruction is used for instructing an object to be verified to perform a corresponding living body action while holding an identity card; acquiring an image of the object to be verified executing the living body action in real time to obtain an image to be verified; and determining whether the object to be verified passes the living body verification based on the image to be verified. According to the living body verification method and device and the identity authentication method and device, the living body verification is performed in combination with the information brought by the identity card, so that the accuracy of the living body verification can be improved.

Description

Living body verification method and device and identity authentication method and device

Technical Field

The invention relates to the field of identity authentication, and in particular to a living body verification method and device and an identity authentication method and device.

Background

In current online activities, remotely verifying the identity of an operator is a common requirement, for example in real-name authentication of a mobile phone or of an account used for online financial activity. Conventional identity authentication methods include: a user uploads an identity card photo and a self-portrait photo to an application backend for manual processing; or the user inputs the identity card number and uploads a self-shot video (or a still image) to an application backend for manual processing. With the application of biometric verification technologies such as face recognition, newer online authentication methods based on face recognition have shortened the processing time. To enhance the reliability and security of identity authentication, some newer identity authentication methods adopt living body verification. For example, a user inputs an identity card number or uploads an identity card image, then performs living body verification facing an image acquisition device (e.g., a mobile phone camera) by making a specified action, speaking a phrase, and so on; after the living body verification is passed (i.e., the user is confirmed to be a living body), data such as the image uploaded by the user is compared with pre-stored real data corresponding to the identity information uploaded by the user, so as to verify the user's identity.

Current living body verification techniques have drawbacks. For example, a malicious attacker can spoof a current living body verification system by using computer graphics (CG) software, in combination with a stolen picture of a real user, to synthesize the video required to defeat face liveness verification. Such defects in living body verification can compromise the security of identity authentication.

Disclosure of Invention

The present invention has been made in view of the above problems. The invention provides a living body verification method and device and an identity authentication method and device.

According to one aspect of the invention, a living body verification method is provided. The living body verification method includes: randomly generating a living body action instruction, wherein the living body action instruction is used for instructing an object to be verified to perform a corresponding living body action while holding an identity card; acquiring an image of the object to be verified executing the living body action in real time to obtain an image to be verified; and determining whether the object to be verified passes the living body verification based on the image to be verified.
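The three steps of the method can be sketched in Python. This is an illustrative, non-claimed sketch; the instruction strings and the `capture_image`/`judge` callables are assumptions standing in for the instruction generator, the real-time camera capture, and the decision logic described in the disclosure.

```python
import random

# Hypothetical instruction set: the method only requires that the living
# body action instruction be generated randomly and that the action be
# performed while holding the identity card.
LIVE_ACTIONS = [
    "raise the identity card level with your nose, card parallel to the lens",
    "flip the identity card while covering your nose and mouth with it",
    "translate the identity card up and down while it covers nose and mouth",
]

def generate_live_action_instruction(rng=random):
    """Step 1: randomly generate a living body action instruction."""
    return rng.choice(LIVE_ACTIONS)

def verify_living_body(capture_image, judge):
    """Sketch of the three-step method: instruct, acquire in real time,
    then decide on the acquired image."""
    instruction = generate_live_action_instruction()
    image_to_verify = capture_image(instruction)  # step 2: real-time capture
    return judge(image_to_verify, instruction)    # step 3: pass / fail
```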

Illustratively, the determining whether the object to be verified passes the living body verification based on the image to be verified includes: detecting the face and the identity card in the image to be verified; and performing a plurality of living body judgment operations, wherein the plurality of living body judgment operations include a first living body judgment operation, a second living body judgment operation, and a third living body judgment operation. The first living body judgment operation includes: judging whether the detected face belongs to a living body based on the image to be verified. The second living body judgment operation includes: judging whether the detected identity card belongs to a living body based on the image to be verified. The third living body judgment operation includes: comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the image to be verified. Whether the object to be verified passes the living body verification is then determined according to the judgment result of each of the plurality of living body judgment operations: if the judgment result of any living body judgment operation is negative, the object to be verified is determined not to pass the living body verification; otherwise, the object to be verified is determined to pass the living body verification.
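The combination rule above is a logical AND over the judgment operations. A minimal sketch, with placeholder judgments standing in for the face, identity card, and joint classifiers:

```python
def determine_verification_result(image, judgment_operations):
    """All living body judgment operations must be positive for the object
    to pass; any single negative result fails the whole verification.
    Iterating in order and returning early also realises the variant in
    which subsequent judgments are skipped after the first negative."""
    for judge in judgment_operations:
        if not judge(image):
            return False
    return True

# Placeholder judgments keyed on a dict; real implementations would run
# the respective classifiers on the image to be verified.
first_judgment  = lambda img: img.get("face_live", False)
second_judgment = lambda img: img.get("card_live", False)
third_judgment  = lambda img: img.get("joint_live", False)
```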

Illustratively, after the detecting of the face and the identity card in the image to be verified, the living body verification method further comprises: extracting, from the image to be verified, a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card. In this case, the first living body judgment operation includes: judging whether the detected face belongs to a living body based on the face image to be verified; the second living body judgment operation includes: judging whether the detected identity card belongs to a living body based on the identity card image to be verified; and the third living body judgment operation includes: comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the face image to be verified and the identity card image to be verified.

Illustratively, the comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the face image to be verified and the identity card image to be verified comprises: inputting the face image to be verified and the identity card image to be verified into a trained first convolutional neural network to obtain the probability that the detected face and the detected identity card as a whole belong to a living body; and determining, according to the probability, whether the detected face and the detected identity card as a whole belong to a living body.
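The joint judgment reduces to thresholding the network's output probability. In this sketch, `cnn_forward` stands in for the trained first convolutional neural network, and `threshold=0.5` is an assumed decision boundary not specified by the disclosure:

```python
def joint_liveness_judgment(face_image, idcard_image, cnn_forward,
                            threshold=0.5):
    """The trained first convolutional neural network consumes both crops
    and outputs the probability that face and identity card jointly
    belong to a living body; the final decision thresholds that
    probability. threshold=0.5 is an illustrative assumption."""
    probability = cnn_forward(face_image, idcard_image)
    return probability >= threshold
```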

Illustratively, the living body verification method further includes: acquiring training data, wherein the training data comprises a positive sample image and a negative sample image, the positive sample image containing a real face and a real identity card, and the negative sample image containing a false face and a real identity card; extracting, from the positive sample image, a positive sample face image containing only a face and a positive sample identity card image containing only an identity card; extracting, from the negative sample image, a negative sample face image containing only a face and a negative sample identity card image containing only an identity card; and performing neural network training, with the positive sample face image and the positive sample identity card image as positive samples and the negative sample face image and the negative sample identity card image as negative samples, to obtain the first convolutional neural network.
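The sample-pairing scheme above can be sketched as a training-set builder; `extract_crops` is an assumed helper standing in for the face/identity-card detection and cropping step, and the actual network training is out of scope here:

```python
def build_training_samples(positive_images, negative_images, extract_crops):
    """extract_crops(image) -> (face_crop, idcard_crop). Positive images
    (real face + real identity card) become label-1 samples; negative
    images (fake face + real identity card) become label-0 samples,
    matching the sample design described in the text."""
    samples = []
    for image in positive_images:
        face, card = extract_crops(image)
        samples.append((face, card, 1))
    for image in negative_images:
        face, card = extract_crops(image)
        samples.append((face, card, 0))
    return samples
```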

Illustratively, before the extracting, from the positive sample image, of a positive sample face image containing only a face and a positive sample identity card image containing only an identity card, the living body verification method further includes: calculating a positive sample scaling ratio required for scaling the face in the positive sample image to a preset size, and scaling the positive sample image according to the positive sample scaling ratio. Before the extracting, from the negative sample image, of a negative sample face image containing only a face and a negative sample identity card image containing only an identity card, the living body verification method further includes: calculating a negative sample scaling ratio required for scaling the face in the negative sample image to the preset size, and scaling the negative sample image according to the negative sample scaling ratio. Before the extracting, from the image to be verified, of a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card, the living body verification method further includes: calculating the scaling ratio of the image to be verified required for scaling the detected face to the preset size, and scaling the image to be verified according to that scaling ratio.
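The scaling-ratio computation is the same for all three cases: map the detected face to the preset size, then scale the whole image by that ratio so every face the network sees has a uniform scale. In this sketch, `preset_size=128` and scaling by face height are illustrative assumptions; the disclosure fixes neither:

```python
def scaling_for_preset_face_size(image_size, face_box, preset_size=128):
    """Compute the ratio that maps the detected face height to the preset
    size, and the resulting dimensions of the whole scaled image.
    face_box is (left, top, right, bottom) in pixels."""
    left, top, right, bottom = face_box
    ratio = preset_size / float(bottom - top)   # face height -> preset size
    width, height = image_size
    return ratio, (round(width * ratio), round(height * ratio))
```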

For example, the judging whether the detected face belongs to a living body based on the face image to be verified includes: inputting the face image to be verified into a trained second convolutional neural network to judge whether the detected face belongs to a living body.

For example, the judging whether the detected identity card belongs to a living body based on the identity card image to be verified includes: inputting the identity card image to be verified into a trained third convolutional neural network to judge whether the detected identity card belongs to a living body.

Illustratively, after the detecting of the face and the identity card in the image to be verified, the living body verification method further comprises: outputting a prompt to re-perform the living body verification if no face or no identity card is detected in the image to be verified.

Illustratively, the performing of the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation includes: performing the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation in that order, and stopping the execution of subsequent living body judgment operations if the result of any living body judgment operation is negative.

Illustratively, the plurality of living body judgment operations further includes a fourth living body judgment operation, wherein the fourth living body judgment operation includes: judging, based on the image to be verified, whether the detected action performed with the identity card matches the living body action instruction.

Illustratively, the image to be verified is a video, the fourth living body judgment operation is performed based on at least two frames in the video, and the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation are performed based on at least one frame of the at least two frames.
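The frame requirements above can be sketched as a selection helper. Choosing the first and last frames for the motion judgment, and reusing the first of those for the single-frame judgments, is an assumption for illustration; any at-least-two frames satisfy the stated constraint:

```python
def split_frames_for_judgments(video_frames):
    """The fourth (action-matching) judgment needs at least two frames to
    observe motion; the first three judgments can each run on a single
    frame drawn from those frames."""
    if len(video_frames) < 2:
        raise ValueError("action matching requires at least two frames")
    motion_frames = [video_frames[0], video_frames[-1]]
    static_frame = motion_frames[0]
    return motion_frames, static_frame
```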

Illustratively, the live action includes flipping and/or translating the identity card while occluding the face with the identity card.

According to another aspect of the present invention, there is provided an identity authentication method, including the above living body verification method, wherein the identity authentication method further includes: and under the condition that the object to be verified passes the living body verification, judging whether the face on the identity card detected from the image to be verified is consistent with the face detected from the image to be verified.
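The gating described here, i.e. comparing faces only after living body verification passes, can be sketched as follows; `face_similarity` and `threshold=0.8` are assumed stand-ins for whatever face comparison the deployment uses, not values from the disclosure:

```python
def authenticate_identity(passed_liveness, live_face_crop, idcard_face_crop,
                          face_similarity, threshold=0.8):
    """Only after living body verification passes is the face printed on
    the detected identity card compared with the live face detected in
    the same image to be verified."""
    if not passed_liveness:
        return False
    return face_similarity(live_face_crop, idcard_face_crop) >= threshold
```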

According to another aspect of the present invention, there is provided a living body verification apparatus comprising: an instruction generation module, configured to randomly generate a living body action instruction, the living body action instruction being used for instructing an object to be verified to perform a corresponding living body action while holding an identity card; an image acquisition module, configured to acquire an image of the object to be verified executing the living body action in real time to obtain the image to be verified; and a verification passing determination module, configured to determine whether the object to be verified passes the living body verification based on the image to be verified.

Illustratively, the verification passing determination module includes: a detection submodule for detecting the face and the identity card in the image to be verified; a living body judgment submodule for performing a plurality of living body judgment operations, wherein the living body judgment submodule includes: a first living body judgment unit configured to perform a first living body judgment operation, wherein the first living body judgment unit includes a face judgment subunit configured to judge whether the detected face belongs to a living body based on the image to be verified; a second living body judgment unit configured to perform a second living body judgment operation, wherein the second living body judgment unit includes an identity card judgment subunit configured to judge whether the detected identity card belongs to a living body based on the image to be verified; and a third living body judgment unit configured to perform a third living body judgment operation, wherein the third living body judgment unit includes a comprehensive judgment subunit configured to comprehensively judge whether the detected face and the detected identity card as a whole belong to a living body based on the image to be verified; and a verification passing determination submodule for determining whether the object to be verified passes the living body verification according to the judgment result of each living body judgment unit in the living body judgment submodule: if the judgment result of any living body judgment unit is negative, the object to be verified is determined not to pass the living body verification; otherwise, the object to be verified is determined to pass the living body verification.

Illustratively, the living body authentication device further includes: the first image extraction module is used for extracting a face image to be verified only containing the detected face and an identity card image to be verified only containing the detected identity card from the image to be verified; and the face judgment subunit comprises a face judgment component for judging whether the detected face belongs to a living body based on the face image to be verified, the identity card judgment subunit comprises an identity card judgment component for judging whether the detected identity card belongs to the living body based on the identity card image to be verified, and the comprehensive judgment subunit comprises a comprehensive judgment component for comprehensively judging whether the detected face and the detected identity card belong to the living body as a whole based on the face image to be verified and the identity card image to be verified.

Illustratively, the comprehensive judgment component includes: a first input subcomponent for inputting the face image to be verified and the identity card image to be verified into a trained first convolutional neural network to obtain the probability that the detected face and the detected identity card as a whole belong to a living body; and a living body determination subcomponent for determining, according to the probability, whether the detected face and the detected identity card as a whole belong to a living body.

Illustratively, the living body authentication device further includes: the training data acquisition module is used for acquiring training data, wherein the training data comprises a positive sample image and a negative sample image, the positive sample image comprises a real face and a real identity card, and the negative sample image comprises a false face and a real identity card; the second image extraction module is used for extracting a positive sample face image only containing a face and a positive sample identity card image only containing an identity card from the positive sample image; the third image extraction module is used for extracting a negative sample face image only containing a face and a negative sample identity card image only containing an identity card from the negative sample image; and the training module is used for carrying out neural network training by taking the positive sample face image and the positive sample identity card image as positive samples and taking the negative sample face image and the negative sample identity card image as negative samples to obtain the first convolutional neural network.

Illustratively, the living body authentication device further includes: a first scaling module, configured to calculate a positive sample scaling ratio required for scaling the face in the positive sample image to a preset size before the second image extraction module extracts the positive sample face image only containing the face and the positive sample identity card image only containing the identity card from the positive sample image, and scale the positive sample image according to the positive sample scaling ratio; a second scaling module, configured to calculate a negative sample scaling ratio required for scaling the face in the negative sample image to the preset size before the third image extraction module extracts the negative sample face image only containing the face and the negative sample identity card image only containing the identity card from the negative sample image, and scale the negative sample image according to the negative sample scaling ratio; and a third scaling module, configured to calculate a scaling ratio of the image to be verified, which is required for scaling the detected face to the preset size, before the first image extraction module extracts the image to be verified including only the detected face and the image to be verified including only the detected identity card from the image to be verified, and scale the image to be verified according to the scaling ratio of the image to be verified.

Illustratively, the face determination component includes: and the second input subassembly is used for inputting the face image to be verified into a trained second convolutional neural network so as to judge whether the detected face belongs to a living body.

Illustratively, the identification card determination component includes: and the third input subassembly is used for inputting the image of the identity card to be verified into a trained third convolutional neural network so as to judge whether the detected identity card belongs to a living body.

Illustratively, the living body authentication device further includes: and the prompt output module is used for outputting a prompt for re-executing the living body verification if no human face is detected or no identity card is detected in the image to be verified.

Illustratively, the living body judgment sub-module further comprises a fourth living body judgment unit configured to perform a fourth living body judgment operation, wherein the fourth living body judgment unit comprises an identity card action judgment sub-unit configured to judge whether the detected action performed by the identity card matches the living body action instruction based on the image to be verified.

Illustratively, the image to be verified is a video, the fourth living body judgment unit performs the fourth living body judgment operation based on at least two frames in the video, and the first living body judgment unit, the second living body judgment unit, and the third living body judgment unit perform the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation, respectively, based on at least one frame of the at least two frames.

Illustratively, the live action includes flipping and/or translating the identity card while occluding the face with the identity card.

According to another aspect of the present invention, an identity authentication apparatus is provided, which includes the living body verification apparatus, wherein the identity authentication apparatus further includes a face consistency determination module, configured to determine whether a face on an identity card detected from an image to be verified is consistent with a face detected from the image to be verified, in a case where the verification passing determination module determines that the object to be verified passes through living body verification.

According to the living body verification method and device and the identity authentication method and device provided by the embodiment of the invention, the image of the to-be-verified object which holds the identity card to execute the living body action is acquired, and the living body verification is carried out based on the acquired image, so that whether the to-be-verified object belongs to the living body can be judged by combining the information brought by the identity card in the living body verification process, and the accuracy of the living body verification can be improved.

Drawings

The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.

FIG. 1 shows a schematic block diagram of an example electronic device for implementing a liveness verification method and apparatus in accordance with embodiments of the present invention;

FIG. 2 shows a schematic flow diagram of a liveness verification method according to one embodiment of the invention;

FIG. 3 shows a schematic flowchart of the step of determining whether an object to be verified passes living body verification based on an image to be verified according to one embodiment of the present invention;

FIG. 4 illustrates a network architecture diagram of a first convolutional neural network, according to one embodiment of the present invention;

FIG. 5 shows a schematic flow diagram of the training steps of a first convolutional neural network according to one embodiment of the present invention;

FIG. 6 shows a schematic block diagram of a living body authentication device according to one embodiment of the present invention; and

FIG. 7 shows a schematic block diagram of a liveness verification system according to one embodiment of the invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the invention, not all of them, and that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments described herein without inventive step, shall fall within the scope of protection of the invention.

As can be seen from the above, in current identity authentication applications, the living body verification process and the subsequent identity comparison process are performed separately. Specifically, living body verification is realized by the user making a specified interactive action toward an image acquisition device such as a mobile phone camera; it judges whether a face belongs to a living body using a video (or still image) containing the face that the user provides via the image acquisition device, and the user's identity card information is not considered in this process. Therefore, if an attacker uses CG software to synthesize a face image of the person being impersonated, the living body verification system can be deceived; meanwhile, the attacker can steal, or use software to synthesize, an identity card image of that same person for the subsequent identity comparison. Although the identity card image and the face image are both forged, because they belong to the same person, the subsequent identity comparison proceeds smoothly as long as the living body verification is passed, so the attacker can pass the identity authentication. Therefore, to ensure the security of identity authentication, it is necessary to improve the accuracy of living body verification as much as possible, to prevent an attacker from passing it with a false face.

To solve the above problems, embodiments of the present invention provide a living body verification method and apparatus. The method and apparatus perform living body verification based on the cross information of the face and the identity card, improving the accuracy of living body verification, and, when applied to identity authentication, avoiding the problems caused by using identity card information and face information separately. It should be noted that the present invention can be applied to any scenario requiring living body verification, including but not limited to real-name authentication in the financial field.

First, an example electronic device 100 for implementing a living body authentication method and apparatus according to an embodiment of the present invention is described with reference to fig. 1.

As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.

The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.

The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored that may be executed by processor 102 to implement client-side functionality (implemented by the processor) and/or other desired functionality in embodiments of the invention described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.

The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.

The output device 108 may output various information (e.g., images and/or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, etc.

The image capture device 110 may capture images for living body verification and store the captured images in the storage device 104 for use by other components. The image capture device 110 may be a camera. It should be understood that the image capture device 110 is merely an example, and the electronic device 100 may not include the image capture device 110. In that case, an image for living body verification may be acquired using another image acquisition device, and the acquired image may be transmitted to the electronic device 100.

Illustratively, an exemplary electronic device for implementing the living body verification method and apparatus according to embodiments of the present invention may be implemented on a device such as a personal computer or a remote server.

Next, a living body authentication method according to an embodiment of the present invention will be described with reference to fig. 2. FIG. 2 shows a schematic flow diagram of a liveness verification method 200 according to one embodiment of the invention. As shown in fig. 2, the living body verification method 200 includes the following steps.

In step S210, a living body action instruction is randomly generated, where the living body action instruction is used to instruct the object to be verified to perform a corresponding living body action while holding an identity card.

In step S210, an appropriate living body action instruction may be generated as needed. Generation here means both producing the instruction and notifying the object to be verified of it in some manner (e.g., voice, text, etc.). The living body action instruction instructs the object to be verified to perform the corresponding living body action. By way of example and not limitation, the living body action may include flipping and/or translating the identity card while shielding the face with the identity card.

In the living body verification process, a user (i.e., the object to be verified) may be required to perform certain preset living body actions according to the generated living body action instruction. The living body actions the user is required to perform bring both the identity card and the face into the acquisition range of the camera, so that the camera can acquire an image (a video or a still image) containing the face and the identity card as the image to be verified. One example of a living body action instruction is to require the user (e.g., at the initial stage of living body verification) to raise the identity card flush with the nose, with the user facing the camera directly and the identity card parallel to the lens plane of the camera. Another example is to ask the user to flip the identity card, changing its pitch angle and/or yaw angle, while requiring that the identity card block the nose and mouth of the face during flipping. Yet another example is to require the user to translate the identity card up and down and/or left and right, again with the identity card blocking the nose and mouth during translation.
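The random generation in step S210 can be sketched as a choice from a preset instruction set. The instruction texts and the `generate_instruction` helper below are illustrative only; the embodiment does not fix a concrete instruction vocabulary.

```python
import random

# Hypothetical instruction texts modeled on the examples above.
ACTIONS = [
    "Raise the identity card flush with your nose, facing the camera",
    "Flip the identity card (change its pitch angle) while covering your nose and mouth",
    "Flip the identity card (change its yaw angle) while covering your nose and mouth",
    "Translate the identity card up and down while covering your nose and mouth",
    "Translate the identity card left and right while covering your nose and mouth",
]

def generate_instruction(rng=random):
    """Step S210: randomly pick a living body action instruction to
    display or play to the object to be verified."""
    return rng.choice(ACTIONS)
```

The chosen instruction would then be shown on screen and/or played as audio, as described in the text.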

It should be understood that the above living body action instructions (and the living body actions they instruct the object to be verified to perform) are only examples and not limitations; the present invention may be implemented based on any other suitable living body action instruction. In addition, the part of the face blocked by the identity card in the above examples is only an example: the part blocked while performing an interactive action such as flipping or panning may be any part of the face, which is not limited by the present invention.

Requiring the object to be verified to perform living body actions in which the identity card interacts with the face, with the interaction state changing over time, allows the camera to capture the identity card and the face in different interaction states. Living body verification can then be performed on images to be verified captured under different conditions, which improves the passing rate of living body verification.

According to the embodiment of the invention, the living body action instruction is output by text display and/or audio playback. In one example, the living body action the user is expected to perform may be displayed via an output device such as a display screen. In another example, it may be played via an output device such as a speaker. Of course, both output means may be used together to output the living body action instruction.

In step S220, an image of the object to be verified executing the living body action is acquired in real time to obtain an image to be verified.

The image to be verified may be any suitable image acquired for a human face and an identity card. The image to be verified can be an original image acquired by a camera or an image obtained after the original image is preprocessed. The image to be verified can be a static image or a video.

Illustratively, living body verification is required in real-name authentication scenarios. In that case, a prompt may first be given requiring the user to present his or her identity card and to perform a living body action while holding it, and an image is then acquired of the user performing that action. In one example, the user may use the camera of his or her mobile phone to capture an image containing the identity card, the face, and the living body action performed with the identity card, and upload it to a server for living body verification. In another example, the camera may be one installed in, for example, a banking system; the user performs the living body action in front of the camera as required, and the user's identity card and face are captured by the camera and uploaded to the bank's back-end processing system for living body verification.

Illustratively, in order to exploit the cross information between the identity card and the face, the user may be required to perform living body actions that create interaction between the two, for example, having the identity card partially block the face. The cross information described herein refers to information formed on the image by the interaction between the identity card and the face, such as the illumination and focus of the region where the identity card and the face overlap.

In step S230, it is determined whether the object to be verified passes living body verification based on the image to be verified.

As described above, while the object to be verified performs the living body action, there is interaction between the identity card it presents and the face, so the image to be verified obtained at a given time carries cross information. Illustratively, whether the face and the identity card belong to living bodies can be verified from the face information, the identity card information, and the interaction information between the two in the image to be verified. The interaction state of the identity card and the face changes as the living body action proceeds, so the cross information in images obtained at different times differs; even if the object to be verified is judged not to pass living body verification from the image obtained at one time, it may still be judged to pass from images obtained at other times.

For example, in step S230 it may be determined whether the face information, the identity card information, and the cross information in the image to be verified are such as would be obtained when the object to be verified performs the correct living body action according to the living body action instruction. If so, the object to be verified is determined to pass living body verification; otherwise, it is determined not to pass. It can be understood that, when the object to be verified performs the correct living body action according to the instruction, the interaction state of the identity card and the face is roughly determined, so the resulting identity card information, face information, and cross information follow a certain regularity, which can be determined in advance. This regularity may be learned by training a convolutional neural network with a large number of sample images. Processing the image to be verified with the trained convolutional neural network yields the probability that the image was obtained with the object to be verified performing the correct living body action according to the instruction, from which it can be determined whether the object passes living body verification. The form of the convolutional neural network and its training process are described below.

In one example, in step S230, besides determining whether the face information, the identity card information, and the cross information in the image to be verified correspond to the object to be verified performing a correct living body action according to the living body action instruction, it may also be determined whether the action performed with the handheld identity card matches the living body action instruction. If the judgment results in all four aspects are positive, the object to be verified is determined to pass living body verification; otherwise, it is determined not to pass.

According to the living body verification method provided by the embodiment of the invention, since images of the object to be verified performing a living body action while holding the identity card are acquired and living body verification is performed on those images, whether the object to be verified belongs to a living body can be judged with the additional information the identity card brings to the verification process (including the identity card information and the cross information between the identity card and the face), so the accuracy of living body verification can be improved.

Illustratively, the living body verification method according to embodiments of the present invention may be implemented in a device, apparatus, or system having a memory and a processor.

The living body verification method according to the embodiment of the invention can be deployed at an image acquisition end, for example, at the image acquisition end of a bank's real-name authentication system. Alternatively, it may be deployed in a distributed fashion across the server side (or cloud) and the client side. For example, the image to be verified may be collected at the client, which transmits it to the server (or cloud) for living body verification.

Fig. 3 shows a schematic flowchart of the step of determining whether the object to be verified passes living body verification based on the image to be verified (step S230) according to one embodiment of the present invention. As shown in fig. 3, step S230 may include the following steps.

In step S310, a human face and an identity card in an image to be verified are detected.

Illustratively, when the image to be verified is a still image, face and identity card detection is carried out directly on the still image; when the image to be verified is a video, the face and the identity card are detected in each video frame of the video.

Any existing face detection algorithm, or any that may be implemented in the future, may be used to detect the face in the image to be verified. For example, an AdaBoost algorithm or a CART (classification and regression tree) algorithm may be used. When a face exists in the image to be verified, the detected face can be represented by a conventional face box, i.e., a rectangular box indicating the position of the face.

Likewise, any existing or future identity card detection algorithm may be employed to detect the identity card in the image to be verified. Identity card detection is similar to face detection: the edge contour of the identity card is detected and its position is marked with an identity card box (which may be a rectangular box).

When the image to be verified is a video, the face detection algorithm and the identity card detection algorithm can detect (i.e., locate) and track the face and the identity card in each video frame in real time, and a notification can be issued when the face or the identity card is lost.

In step S320, a plurality of living body judgment operations are performed, including a first living body judgment operation, a second living body judgment operation, and a third living body judgment operation. The first living body judgment operation comprises: judging whether the detected face belongs to a living body based on the image to be verified. The second living body judgment operation comprises: judging whether the detected identity card belongs to a living body based on the image to be verified. The third living body judgment operation comprises: comprehensively judging, based on the image to be verified, whether the detected face and the detected identity card belong to living bodies as a whole.

The first living body judgment operation judges on its own whether the detected face belongs to a living body. A real face is considered to belong to a living body; a false face is not. Illustratively, false faces may include faces captured from a screen, faces generated with CG software, printed faces, and the like.

The second living body judgment operation judges on its own whether the detected identity card belongs to a living body. A real identity card is considered to belong to a living body; a false identity card is not. Exemplary false identity cards may include identity cards reproduced by photographing a screen, hand-drawn identity cards, and the like.

The third living body judgment operation comprehensively judges, based on the image to be verified, whether the detected face and the detected identity card as a whole belong to a living body. If the detected face is a real face and the detected identity card is a real identity card, the two as a whole may be considered to belong to a living body; if either one is not real, the whole is considered not to belong to a living body.

Because the surface of the face is uneven, a real identity card and a real face do not lie in the same plane in three-dimensional space. When the identity card interacts with the face, for example partially shielding it, the detailed information between a real face and a real identity card (i.e., the cross information: illumination, shielding, focus, and so on) differs from that produced when a false face interacts with a real identity card, a real face with a false identity card, or a false face with a false identity card. Whether the detected face and the detected identity card belong to a living body as a whole can therefore be judged from the cross information between them. Because this judgment exploits the differences between living bodies and non-living bodies in image detail such as illumination, shielding, and focus, it improves the accuracy of judging the authenticity of the face and the identity card, and thereby the accuracy of living body verification.
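As a minimal illustration of where the cross information lives, the overlap region of the detected face box and identity card box can be computed from the two rectangles. The box coordinates below are made up for the example.

```python
def intersect(box_a, box_b):
    """Overlap of two axis-aligned boxes given as (x1, y1, x2, y2).

    Returns the overlapping rectangle, or None when the identity card
    box and the face box do not overlap (no cross-information region).
    """
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)

# Made-up boxes: the identity card occludes the lower half of the face.
face_box = (100, 100, 300, 340)
card_box = (80, 220, 320, 380)
overlap = intersect(face_box, card_box)  # region carrying cross information
```

The illumination, shielding, and focus details inside such an overlap region are what the third judgment operation examines.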

In step S330, it is determined whether the object to be verified passes living body verification according to the result of each of the plurality of living body judgment operations: if the result of any one judgment operation is negative, the object to be verified is determined not to pass living body verification; otherwise, it is determined to pass.

In one example, the image to be verified is a still image, and only whether the face information, the identity card information, and the cross information in the image meet the requirements of living body verification need be considered; that is, whether the face belongs to a living body, whether the identity card belongs to a living body, and whether the face and the identity card belong to a living body as a whole are judged, and whether the object to be verified passes living body verification is determined from those results. In this case, the plurality of living body judgment operations may comprise only the first, second, and third living body judgment operations. The first is mainly responsible for judging the authenticity of the face on its own, the second for judging the authenticity of the identity card on its own, and the third for judging the authenticity of the face and the identity card as a whole. Each of the three operations yields a result indicating whether the subject it is responsible for is real, i.e., belongs to a living body; if any one result indicates that its subject does not belong to a living body, the object to be verified (i.e., the user) presenting the identity card and the face is determined not to pass living body verification. Since an overall authenticity judgment is added on top of the individual judgments, the accuracy of living body verification can be improved.
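The decision rule of step S330 amounts to a logical AND over the judgment results, rejecting on any negative judgment. A sketch, with illustrative judgment names:

```python
def passes_liveness(judgments):
    """Step S330: pass only if every living body judgment is positive."""
    return all(judgments.values())

# The third (overall) judgment fails here, so the object to be verified
# is rejected despite the other two judgments passing.
result = passes_liveness({
    "face_is_live": True,       # first living body judgment operation
    "id_card_is_live": True,    # second living body judgment operation
    "joint_is_live": False,     # third living body judgment operation
})
```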

In another example, the image to be verified is a video, and in addition to the face information, identity card information, and cross information described above, whether the action performed with the identity card matches the living body action instruction may also be considered. In this case, the plurality of living body judgment operations may further include a fourth living body judgment operation, which comprises: judging, based on the image to be verified, whether the action performed with the detected identity card matches the living body action instruction.

The fourth living body judgment operation can be implemented by various feasible motion detection and tracking methods, which are not described again here. Adding the fourth living body judgment operation can further improve the accuracy of living body verification.

Illustratively, when the image to be verified is a video, the fourth living body judgment operation is performed on at least two frames of the video, and the first, second, and third living body judgment operations are performed on at least one of those frames. When the first living body judgment operation is performed on two or more video frames or still images, it judges the object to be verified to be a living body only if the judgment for every frame or image is that the object is a living body; the second and third living body judgment operations follow the same rule. In the fourth living body judgment operation, if the motion of the identity card in any frame does not match the living body action instruction, the operation outputs that the object to be verified is not a living body.
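For video input, the per-frame rules above can be sketched as a single aggregation function. The data layout (triples of per-frame booleans) is an assumption for illustration:

```python
def judge_video(frame_judgments, action_matches):
    """Aggregate per-frame results into one living body decision.

    frame_judgments: per-frame (face_live, card_live, joint_live) triples
                     (first, second, and third judgment operations);
    action_matches:  per-frame flags of whether the identity card's
                     motion matches the instruction (fourth operation).
    """
    # Operations 1-3: live only if every examined frame judges live.
    ops_1_to_3 = all(all(triple) for triple in frame_judgments)
    # Operation 4: any frame whose action mismatches rejects the object.
    op_4 = all(action_matches)
    return ops_1_to_3 and op_4

passed = judge_video(
    frame_judgments=[(True, True, True), (True, True, True)],
    action_matches=[True, True],
)
rejected = judge_video(
    frame_judgments=[(True, True, True), (True, False, True)],
    action_matches=[True, True],
)
```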

In a specific example, the image to be verified is a video, and the fourth living body judgment operation, as well as the first, second, and third, are performed on all video frames of the video to improve the accuracy of living body judgment.

In one example, the living body judgment operations described above may be performed directly on the original image to be verified. In another example, a face image to be verified and an identity card image to be verified may first be extracted from the image to be verified, and the living body judgment operations performed on those. The latter example is described below.

According to an embodiment of the present invention, after step S310, the living body verification method 200 may further include: extracting, from the image to be verified, a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card. The first living body judgment operation may then comprise judging whether the detected face belongs to a living body based on the face image to be verified; the second, judging whether the detected identity card belongs to a living body based on the identity card image to be verified; and the third, comprehensively judging whether the detected face and the detected identity card belong to a living body as a whole, based on both the face image to be verified and the identity card image to be verified.

When the face box has been obtained by detection, it can be used to segment the image to be verified: the pixels inside the face box are cut out to obtain the face image to be verified. Similarly, when the identity card box has been obtained by detection, the pixels inside the identity card box are cut out to obtain the identity card image to be verified.
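Cutting the pixels inside a detected box out of the image can be sketched as a simple slice. A nested list stands in for a real image array in this illustration:

```python
def crop(image, box):
    """Cut out the pixels inside a detected box (x1, y1, x2, y2).

    `image` is a row-major grid of pixels; a nested list stands in
    for a real image array here.
    """
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

# Toy 4x4 "image" with pixel values 0..15; crop a 2x2 box at its centre.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
face_patch = crop(img, (1, 1, 3, 3))
```

The same operation, applied with the identity card box, yields the identity card image to be verified.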

Subsequently, when the first living body judgment operation is executed, whether the detected face belongs to a living body can be judged from the extracted face image to be verified, which avoids interference from image regions other than the face and improves the efficiency and accuracy of the face authenticity judgment. When the second living body judgment operation is executed, whether the detected identity card belongs to a living body can be judged from the extracted identity card image to be verified, likewise avoiding interference from image regions other than the identity card and improving the efficiency and accuracy of the identity card authenticity judgment. When the third living body judgment operation is executed, whether the detected face and the detected identity card belong to a living body as a whole can be judged from both extracted images, again avoiding interference from regions other than the face and the identity card and improving the efficiency and accuracy of the overall authenticity judgment.

According to the embodiment of the present invention, comprehensively judging whether the detected face and the detected identity card belong to a living body as a whole (i.e., the third living body judgment operation) based on the face image to be verified and the identity card image to be verified may include: inputting the face image to be verified and the identity card image to be verified into a trained first convolutional neural network to obtain the probability that the detected face and the detected identity card as a whole belong to a living body; and determining from that probability whether they belong to a living body as a whole.

In the third living body judgment operation, the judgment can be made using the trained first convolutional neural network. Fig. 4 shows the network structure of a first convolutional neural network according to one embodiment of the present invention. It should be noted that the network structure shown in fig. 4 is only an example and not a limitation: the first convolutional neural network of the embodiment of the present invention may have any other suitable structure, and the types of layers, the connections between layers, and the number and size of filters within layers may all be set as required.

Referring to fig. 4, the first convolutional neural network has six convolutional layers (denoted conv0 through conv5) and a fully connected layer (denoted fc0), divided into an upper branch and a lower branch that respectively receive an identity card image (the identity card image to be verified, or the positive and negative sample identity card images described below) and a face image (the face image to be verified, or the positive and negative sample face images described below). The output feature maps of the two fc0 branches are merged by a concatenation layer (denoted concat), followed by another fully connected layer (denoted fc1) and finally the output layer (denoted softmax).
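The fusion stage of fig. 4 (concat, then fc1, then softmax) can be sketched as follows. The convolutional branches are omitted, and the feature dimensions and weights are toy values, not the network's actual parameters:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(card_features, face_features, fc1_weights, fc1_bias):
    """concat -> fc1 -> softmax stage of the network in fig. 4.

    The two convolutional branches are omitted; their fc0 outputs are
    taken as given feature vectors. All weights below are toy values.
    """
    merged = card_features + face_features            # concat layer
    logits = [
        sum(w * x for w, x in zip(row, merged)) + b   # fc1 layer
        for row, b in zip(fc1_weights, fc1_bias)
    ]
    return softmax(logits)  # e.g. [P(not live), P(live)]

# Toy 2-dimensional branch outputs and hand-picked fc1 weights:
probs = fuse(
    card_features=[0.2, 0.8],
    face_features=[0.5, 0.1],
    fc1_weights=[[1, -1, 0.5, 0], [-1, 1, 0, 0.5]],
    fc1_bias=[0.0, 0.0],
)
live = probs[1] >= 0.5  # a 0.5 decision threshold on the "live" probability
```

The concatenation is what lets the fc1 layer weigh identity card features, face features, and their joint (cross) statistics together.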

As shown in fig. 4, the identity card image to be verified and the face image to be verified are input to the first convolutional neural network together, and in the course of its processing the network can integrate the information of the two images. As described above, when the identity card interacts with the face (e.g., partially blocks it), the first convolutional neural network can process the information of the identity card and the face jointly, taking their cross information into account, and judge whether they belong to a living body as a whole; if either is not real, the whole is not considered to belong to a living body. The first convolutional neural network may output the probability (or confidence) that the detected face and the detected identity card as a whole belong to a living body. Illustratively, if the output probability is greater than or equal to 0.5, it may be determined that they belong to a living body as a whole; if it is less than 0.5, that they do not. The first convolutional neural network can be trained in advance on a large number of sample images.

A convolutional neural network can autonomously learn complex image features and achieve high-precision, high-performance image classification, so using one for the living body judgment yields accurate results and improves the accuracy of living body verification.

According to an embodiment of the present invention, the living body verification method 200 may further include a training step for the first convolutional neural network. Fig. 5 shows a schematic flow diagram of the training step S500 of the first convolutional neural network according to one embodiment of the present invention.

As shown in fig. 5, the training step S500 of the first convolutional neural network includes the following steps.

In step S510, training data is acquired, where the training data includes a positive sample image and a negative sample image, the positive sample image includes a real face and a real identity card, and the negative sample image includes a false face and a real identity card.

A large number of positive sample images may be acquired in advance. For example, 10000 videos containing real faces and real identity cards may be collected, and some video frames of each video used as positive sample images for training. During acquisition of the positive sample images, the subject (user) providing the identity card and the face may be required to flip or translate the identity card while shielding the face with it, so as to obtain positive sample images with different interaction states between the face and the identity card.

In addition to the positive sample images, a large number of negative sample images may be acquired in advance. For example, 10000 videos containing false faces and real identity cards may be collected, and some video frames of each video used as negative sample images for training. A false face may be, for example, a face reproduced by photographing a face displayed on a screen, a printed face, or the like. During acquisition of the negative sample images, the subject (user) providing the identity card and the face may likewise be required to flip or translate the identity card while shielding the face with it, so as to obtain negative sample images with different interaction states between the face and the identity card.

Because a face is usually more likely to be faked, and faking both the face and the identity card at the same time is more difficult, the training of the first convolutional neural network can focus mainly on the face: the positive sample images are collected with real faces and the negative sample images with false faces, while the identity card is a real one in both cases. The trained first convolutional neural network is then mainly used to judge the authenticity of the face, further strengthening the precision of that judgment. Of course, during training the negative sample images may also contain a real face with a false identity card, or a false face with a false identity card, so that the trained first convolutional neural network also takes the authenticity of the identity card into account.

In step S520, a positive sample face image containing only a face and a positive sample identification card image containing only an identification card are extracted from the positive sample image.

For each positive sample image acquired, a face therein may be detected using the face detection algorithm as described above, and an identification card therein may be detected using the identification card detection algorithm as described above. Subsequently, a positive sample face image containing only a face and a positive sample identification card image containing only an identification card may be extracted from the detection result.

In step S530, a negative sample face image containing only a face and a negative sample identification card image containing only an identification card are extracted from the negative sample image.

Similarly, for each negative sample image acquired, a face therein may be detected using the face detection algorithm as described above, and an identification card therein may be detected using the identification card detection algorithm as described above. Subsequently, a negative sample face image containing only a face and a negative sample identification card image containing only an identification card may be extracted according to the detection result.

In step S540, a neural network training is performed to obtain a first convolution neural network, with the positive sample face image and the positive sample identification card image as positive samples, and with the negative sample face image and the negative sample identification card image as negative samples.

The positive sample face image and the positive sample identity card image are used as positive samples, the negative sample face image and the negative sample identity card image are used as negative samples, and the positive and negative samples are input into a first convolutional neural network with the network structure shown in fig. 4 for training. The neural network may be trained to convergence using a stochastic gradient descent method to obtain the desired first convolutional neural network.
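The patent does not give code for this training step; the following is a minimal, hypothetical sketch of the stochastic gradient descent idea, with a simple logistic classifier standing in for the convolutional network. The feature vectors stand in for the paired face and identity card images, and all function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sgd_train(features, labels, lr=0.1, epochs=50, seed=0):
    """Train a logistic classifier with stochastic gradient descent.

    Stand-in for the first convolutional neural network: in the patent,
    the inputs would be (face image, identity card image) pairs, with
    label 1 for positive samples and 0 for negative samples.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(features)):   # shuffle each epoch
            p = 1.0 / (1.0 + np.exp(-(features[i] @ w + b)))  # sigmoid
            grad = p - labels[i]                   # gradient of the log loss
            w -= lr * grad * features[i]
            b -= lr * grad
    return w, b

# Toy separable data standing in for positive/negative sample features.
X = np.array([[1.0, 1.2], [0.9, 1.1], [-1.0, -0.8], [-1.1, -1.0]])
y = np.array([1, 1, 0, 0])
w, b = sgd_train(X, y)
preds = (X @ w + b > 0).astype(int)
```

After a few epochs the classifier separates the toy positives from the toy negatives; in the patented method the same iterative update, applied to network weights, drives the first convolutional neural network to convergence.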

The execution order of the steps shown in fig. 5 is merely an example and not a limitation, and the training step S500 of the first convolutional neural network may have other reasonable execution orders, for example, the step S520 may be executed after or simultaneously with the step S530.

According to an embodiment of the present invention, before step S520, the living body verification method 200 may further include: calculating a positive sample scaling ratio required for scaling the face in the positive sample image to a preset size, and scaling the positive sample image according to the positive sample scaling ratio. Before step S530, the living body verification method 200 may further include: calculating a negative sample scaling ratio required for scaling the face in the negative sample image to the preset size, and scaling the negative sample image according to the negative sample scaling ratio. Before extracting a face image to be verified including only the detected face and an identity card image to be verified including only the detected identity card from the image to be verified, the living body verification method 200 may further include: calculating a scaling ratio of the image to be verified required for scaling the detected face to the preset size, and scaling the image to be verified according to that scaling ratio.

The preset size may be any suitable size, for example 150 pixels by 150 pixels. As described above, a face in an image may be detected using a face detection algorithm, and the detected face may be represented by a face box. The way in which faces are detected is similar for the image to be verified, the positive sample image and the negative sample image. After the face box is detected, the process of calculating the ratio required to scale the face box to the preset size and scaling the image according to that ratio can be understood as a face normalization process. By carrying out similar face normalization on the image to be verified, the positive sample image and the negative sample image, the sizes of the faces in these images can be adjusted to be basically consistent, which facilitates processing and reduces errors in living body verification.
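As a concrete illustration of the normalization described above, the following hypothetical sketch computes the scaling ratio that maps a detected face box to the 150-pixel example preset size, and the resulting size of the scaled image. The function names, and the convention of scaling by the larger side of the face box, are assumptions rather than details from the patent.

```python
def face_scale_ratio(face_box_w, face_box_h, preset=150):
    """Ratio needed to scale the detected face box to the preset size.

    150 x 150 pixels is the example preset size from the text; using
    the larger side of the face box is an assumed convention.
    """
    return preset / max(face_box_w, face_box_h)

def scaled_image_size(img_w, img_h, ratio):
    """Size of the whole image after applying the scaling ratio."""
    return round(img_w * ratio), round(img_h * ratio)

# A 300x300 face box needs a 0.5 ratio; a 1200x900 image becomes 600x450.
ratio = face_scale_ratio(300, 300)
new_size = scaled_image_size(1200, 900, ratio)
```

The same computation applies to the image to be verified, the positive sample images and the negative sample images, which is what brings the face sizes to a basically consistent scale.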

It should be understood that face normalization is not limited to the above implementation, and other reasonable implementations are possible. For example, for the image to be verified, the face image and the identity card image may first be extracted from the image to be verified and then scaled according to the scaling ratio of the image to be verified; in this case, the whole image to be verified does not need to be scaled. For the positive sample image and the negative sample image, face normalization can be performed in a similar manner, and details are not repeated. It can be understood that, in the approach of scaling the whole image to be verified before extracting the face image and the identity card image, a certain amount of calculation can be saved because the face image and the identity card image do not need to be processed separately.

According to an embodiment of the present invention, step S220 may include: acquiring a video, or a plurality of continuous static images, of the object to be verified executing the living body action within a preset time period after the verification starting moment, as the image to be verified.

As described above, the image to be verified may be a still image or a video frame in a video. In one example, after the in-vivo authentication starts, a video segment may be acquired by the camera within a preset time period, and some video frames in the video segment may be used for in-vivo authentication. The manner of selecting the video frame from the video may be set as required, which is not limited by the present invention. For example, one video frame may be selected at regular intervals from the video, and each of the selected video frames may be used as an image to be verified for live body verification. For another example, several video frames may be randomly selected from the video, and each of the selected video frames may be used as an image to be verified for live body verification. Of course, all video frames in the video may also be selected for liveness verification.
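The frame selection strategies mentioned above (one frame at regular intervals, several random frames, or all frames) could be sketched as follows; the function and its parameters are hypothetical, since the patent explicitly leaves the selection manner open.

```python
import random

def sample_frames(num_frames, interval=None, k=None, seed=None):
    """Select frame indices from a video for living body verification.

    Mirrors the strategies mentioned in the text: every `interval`-th
    frame, `k` random frames, or (by default) all frames.
    """
    indices = list(range(num_frames))
    if interval is not None:
        return indices[::interval]            # regular-interval selection
    if k is not None:
        return sorted(random.Random(seed).sample(indices, k))  # random selection
    return indices                            # all frames

regular = sample_frames(10, interval=3)       # every 3rd frame
```

Each selected index would then yield one image to be verified for the living body verification steps.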

Because the interaction state between the face and the identity card may change across different video frames, and the change is especially obvious when the object providing the face and the identity card performs the required living body action, using the video frames of a section of video for living body verification allows images to be verified that are acquired in different states to be taken into account, which improves the accuracy and pass rate of the living body verification and improves the user experience.

According to an embodiment of the present invention, the living body verification method 200 may further include: determining that the living body verification is successful if the object to be verified passes the living body verification according to any one of the selected video frames, and otherwise determining that the living body verification fails.

In this embodiment, after the living body verification starts, if it is not determined within a preset time period that the object to be verified passes the living body verification according to any of the selected video frames, it may be determined that the living body verification fails; in an identity authentication application, the subsequent identity consistency authentication operation (i.e., the above-mentioned identity comparison operation) then cannot be continued. Conversely, if the object to be verified is determined to pass the living body verification according to any one of the selected video frames within the preset time period, the living body verification can be determined to be successful, and the subsequent identity consistency authentication operation can be continued.
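The decision rule of this embodiment, succeed if any selected frame within the preset time period passes and fail otherwise, can be sketched as follows. The representation of per-frame verdicts and timestamps is an assumption for illustration only.

```python
def verify_over_frames(frame_verdicts, timestamps, deadline):
    """Succeed if any frame within the preset time period passes.

    `frame_verdicts[i]` is the liveness result for the i-th selected
    frame and `timestamps[i]` its capture time; frames captured after
    `deadline` are ignored, mirroring the timeout behaviour.
    """
    for passed, t in zip(frame_verdicts, timestamps):
        if t > deadline:
            break          # preset time period has elapsed
        if passed:
            return True    # one passing frame suffices
    return False
```

For example, `verify_over_frames([False, True], [1, 2], deadline=10)` succeeds on the second frame, whereas a passing frame that arrives only after the deadline does not count.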

According to an embodiment of the present invention, after step S310, the living body verification method 200 may further include: outputting a prompt to re-perform the living body verification if no face or no identity card is detected in the image to be verified.

As described above, during the whole living body verification process, the face and the identity card in the image to be verified can be detected (i.e., located) and tracked in real time. If no face or no identity card is detected, the face or the identity card has been lost; in this case, a prompt can be output to inform the object to be verified, which provides the identity card and the face, to re-perform the living body verification. In this way, errors in the living body verification process can be found in time, and the user is actively helped to pass the living body verification, which can improve the efficiency of living body verification and the user experience. The prompt to re-perform the living body verification may be output in any suitable form, for example in one or more of a text display form, an audio playback form, and a signal light flashing form.

According to an embodiment of the present invention, judging whether the detected face belongs to a living body based on the face image to be verified (i.e., the first living body judgment operation) may include: inputting the face image to be verified into the trained second convolutional neural network, so as to judge whether the detected face belongs to a living body.

The face image to be verified is input into the trained second convolutional neural network, and by analyzing the detail features of the face it is judged whether the detected face is a real face or a false face obtained by means such as CG software generation or screen re-shooting. If the detected face is a real face, the detected face is considered to belong to a living body; otherwise, it is not. The second convolutional neural network can be obtained in advance by offline training on a large number of sample images, and can be regarded as a real face discriminator. In the process of training the neural network to obtain the second convolutional neural network, the training samples used can be a large number of images containing real faces and images containing false faces. The false faces may include the above-described faces re-shot from a screen, faces generated using CG software, printed faces, and the like. Similar to the first convolutional neural network, the neural network may be trained to convergence by a stochastic gradient descent method to obtain the desired second convolutional neural network.

According to an embodiment of the present invention, judging whether the detected identity card belongs to a living body based on the identity card image to be verified (i.e., the second living body judgment operation) may include: inputting the identity card image to be verified into the trained third convolutional neural network, so as to judge whether the detected identity card belongs to a living body.

The identity card image to be verified is input into the trained third convolutional neural network, and by analyzing the detail features of the identity card it is judged whether the detected identity card is a real identity card. If the detected identity card is a real identity card, the identity card is considered to belong to a living body; otherwise, it is not. The third convolutional neural network can be obtained in advance by offline training on a large number of sample images, and can be regarded as a real identity card discriminator. In the process of training the neural network to obtain the third convolutional neural network, the training samples used can be a large number of images containing real identity cards and images containing false identity cards. The false identity cards may include the above-described identity cards re-shot from a screen, hand-drawn identity cards, and the like. Similarly to the first and second convolutional neural networks, the neural network may be trained to convergence by a stochastic gradient descent method, thereby obtaining the desired third convolutional neural network.

According to the embodiment of the present invention, step S320 may include: the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation are performed in the arranged order, and if the result of any one of the living body judgment operations is negative, the execution of the subsequent living body judgment operation is stopped.

In one example, the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation may be performed in their entirety. In another example, the execution order of the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation may be arranged in advance, and the three living body judgment operations may be sequentially executed in the arranged order. For example, assuming that the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation are performed in this order, if the detected face belongs to a living body as a result of the first living body judgment operation, the second living body judgment operation may be continuously performed, and if the detected face does not belong to a living body as a result of the first living body judgment operation, the remaining second living body judgment operation and third living body judgment operation are not continuously performed. When the second living body judgment operation is executed, whether to continue to execute the third living body judgment operation can also be judged according to the result, and details are not repeated.
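The early-exit ordering described above can be sketched as a chain of judgment operations that stops at the first negative result. The helper names below are illustrative only.

```python
def run_liveness_checks(checks):
    """Run ordered living body judgment operations with early exit.

    `checks` is a list of (name, fn) pairs in the arranged order; as
    soon as one operation returns False, the remaining operations are
    skipped and verification fails.
    """
    for name, check in checks:
        if not check():
            return False, name   # report which operation failed
    return True, None

executed = []
def make_check(name, result):
    def check():
        executed.append(name)    # record that this operation ran
        return result
    return name, check

ok, failed = run_liveness_checks([
    make_check("face", True),      # first living body judgment passes
    make_check("id_card", False),  # second fails ...
    make_check("joint", True),     # ... so this is never executed
])
```

Here `executed` ends up containing only the first two operation names, demonstrating that the third judgment operation is skipped once the second returns a negative result.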

Since, once the result of a preceding living body judgment operation is that the subject does not belong to a living body, it has already been determined that the object to be verified has not passed the living body verification, there is no need to perform the subsequent living body judgment operations, which can save data computation and living body verification time to some extent.

According to another aspect of the present invention, an identity authentication method is provided. The identity authentication method includes the living body verification method 200 described above, and further includes: and under the condition that the object to be verified passes the living body verification, judging whether the face on the identity card detected from the image to be verified is consistent with the face detected from the image to be verified.

In an identity authentication application, living body verification can be performed first, and in a case where the object to be verified, which provides the identity card and the face, is determined to pass the living body verification, the object to which the identity card actually belongs and the object to be verified can then be compared to authenticate whether they are the same person. This can be achieved by comparing the face on the identity card detected from the image to be verified (e.g., the detected identity card obtained in step S310) with the face detected from the image to be verified (e.g., the detected face obtained in step S310); if the face on the detected identity card is consistent with the detected face, it indicates that the two belong to the same person and the identity authentication succeeds, otherwise the identity authentication fails. For the detected identity card, the name and the identity card number on the identity card can be automatically extracted using an Optical Character Recognition (OCR) technology, and the name and the identity card number can also be used for identity authentication, further improving the user experience. Specifically, after obtaining the name and the identity card number on the detected identity card, a biometric verification technique such as face recognition or voiceprint recognition may be used to compare the detected face, or other data uploaded by the object to be verified such as a voice signal file, with pre-prepared biometric features corresponding to the recognized name and identity card number (e.g., an authoritative citizen identity card image obtained from the Ministry of Public Security, or a voiceprint signal of each user recorded in advance), so as to determine whether the object to be verified is consistent with the pre-stored user.
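A highly simplified sketch of the overall decision flow described above, liveness first and face consistency second, might look as follows. The similarity score and the 0.8 threshold are hypothetical stand-ins for whatever face comparison technique a deployment actually uses.

```python
def authenticate_identity(liveness_passed, face_match_score,
                          match_threshold=0.8):
    """Overall identity authentication decision.

    Living body verification runs first; only when it passes is the
    face on the identity card compared with the detected face. The
    score and the default threshold are illustrative assumptions.
    """
    if not liveness_passed:
        return False   # consistency check is never reached
    return face_match_score >= match_threshold
```

With `liveness_passed=False`, the similarity score is ignored and authentication fails regardless, mirroring the order of operations in the text.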

By combining the cross information of the identity card and the face in the living body verification process, that is, by taking the identity card information into account during living body verification, the problem of low security caused by using identity card information and face information separately in conventional identity authentication can be overcome, and the security and reliability of identity authentication can be improved.

According to another aspect of the present invention, a living body verification device is provided. Figure 6 shows a schematic block diagram of a living body verification device 600 in accordance with one embodiment of the present invention.

As shown in fig. 6, the living body authentication device 600 according to the embodiment of the present invention includes an instruction generating module 610, an image to be authenticated obtaining module 620, and an authentication pass determining module 630. The various modules may perform the various steps/functions of the in-vivo authentication method described above in connection with figures 2-5, respectively. Only the main functions of the respective modules of the living body authentication device 600 will be described below, and the details that have been described above will be omitted.

The instruction generating module 610 is configured to randomly generate a living body action instruction, where the living body action instruction is used to instruct an object to be verified to perform a corresponding living body action by holding an identity card. The instruction generation module 610 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.

The to-be-verified image obtaining module 620 is configured to collect, in real time, an image of the to-be-verified object executing the living body action, so as to obtain an image to be verified. The to-be-verified image obtaining module 620 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.

The verification passing determination module 630 is configured to determine whether the object to be verified passes living body verification based on the image to be verified. The verification-passing determination module 630 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.

According to an embodiment of the present invention, the verification passing determining module 630 includes: the detection submodule is used for detecting the face and the identity card in the image to be verified; a living body judgment sub-module for performing a plurality of living body judgment operations, wherein the living body judgment sub-module includes: a first living body judgment unit configured to perform a first living body judgment operation, wherein the first living body judgment unit includes a face judgment subunit configured to judge whether the detected face belongs to a living body based on the image to be verified; a second living body judgment unit configured to perform a second living body judgment operation, wherein the second living body judgment unit includes an identification card judgment subunit configured to judge whether the detected identification card belongs to a living body based on the image to be verified; a third living body judgment unit configured to perform a third living body judgment operation, wherein the third living body judgment unit includes a comprehensive judgment subunit configured to comprehensively judge whether the detected face and the detected identity card as a whole belong to a living body based on the image to be verified; and the verification passing determining submodule is used for determining whether the object to be verified passes the in-vivo verification according to the judgment result of each in-vivo judgment unit in the in-vivo judgment submodule, if the judgment result of any in-vivo judgment unit is negative, determining that the object to be verified does not pass the in-vivo verification, and otherwise determining that the object to be verified passes the in-vivo verification.

In one example, the living body judgment sub-module further includes a fourth living body judgment unit configured to perform a fourth living body judgment operation, where the fourth living body judgment unit includes an identification card action judgment sub-unit configured to judge whether an action performed by the detected identification card matches the living body action instruction based on the image to be verified.

Illustratively, when the image to be authenticated is a video, a fourth living body judgment unit performs the fourth living body judgment operation based on at least two frames in the video, and the first living body judgment unit, the second living body judgment unit, and the third living body judgment unit perform the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation, respectively, based on at least one frame of the at least two frames.

According to an embodiment of the present invention, the living body authentication device 600 further includes: a first image extraction module (not shown) configured to extract, from the image to be verified, a face image to be verified including only the detected face and an identification card image to be verified including only the detected identification card; and the face judgment subunit comprises a face judgment component for judging whether the detected face belongs to a living body based on the face image to be verified, the identity card judgment subunit comprises an identity card judgment component for judging whether the detected identity card belongs to the living body based on the identity card image to be verified, and the comprehensive judgment subunit comprises a comprehensive judgment component for comprehensively judging whether the detected face and the detected identity card belong to the living body as a whole based on the face image to be verified and the identity card image to be verified.

According to an embodiment of the present invention, the comprehensive judgment component includes: the first input subassembly is used for inputting the face image to be verified and the identity card image to be verified into a trained first convolutional neural network so as to obtain the probability that the detected face and the detected identity card integrally belong to a living body; and a living body determining sub-component for determining whether the detected human face and the detected identity card as a whole belong to a living body according to the probability.

According to an embodiment of the present invention, the living body authentication device 600 further includes: the training data acquisition module is used for acquiring training data, wherein the training data comprises a positive sample image and a negative sample image, the positive sample image comprises a real face and a real identity card, and the negative sample image comprises a false face and a real identity card; the second image extraction module is used for extracting a positive sample face image only containing a face and a positive sample identity card image only containing an identity card from the positive sample image; the third image extraction module is used for extracting a negative sample face image only containing a face and a negative sample identity card image only containing an identity card from the negative sample image; and the training module is used for carrying out neural network training by taking the positive sample face image and the positive sample identity card image as positive samples and taking the negative sample face image and the negative sample identity card image as negative samples to obtain the first convolutional neural network.

According to an embodiment of the present invention, the living body authentication device 600 further includes: a first scaling module, configured to calculate a positive sample scaling ratio required for scaling the face in the positive sample image to a preset size before the second image extraction module extracts the positive sample face image only containing the face and the positive sample identity card image only containing the identity card from the positive sample image, and scale the positive sample image according to the positive sample scaling ratio; a second scaling module, configured to calculate a negative sample scaling ratio required for scaling the face in the negative sample image to the preset size before the third image extraction module extracts the negative sample face image only containing the face and the negative sample identity card image only containing the identity card from the negative sample image, and scale the negative sample image according to the negative sample scaling ratio; and a third scaling module, configured to calculate a scaling ratio of the image to be verified, which is required for scaling the detected face to the preset size, before the first image extraction module extracts the image to be verified including only the detected face and the image to be verified including only the detected identity card from the image to be verified, and scale the image to be verified according to the scaling ratio of the image to be verified.

According to an embodiment of the present invention, the face judgment component includes: a second input subassembly for inputting the face image to be verified into a trained second convolutional neural network, so as to judge whether the detected face belongs to a living body.

According to the embodiment of the present invention, the identity card judgment component includes: a third input subassembly for inputting the identity card image to be verified into a trained third convolutional neural network, so as to judge whether the detected identity card belongs to a living body.

According to an embodiment of the present invention, the living body verification device 600 further includes: a prompt output module for outputting a prompt to re-perform the living body verification if no face or no identity card is detected in the image to be verified.

According to an embodiment of the invention, the live action comprises flipping and/or translating the identity card while shielding the face with the identity card.

According to another aspect of the present invention, an identity authentication apparatus is provided, which includes the living body verification apparatus 600 described above, wherein the identity authentication apparatus further includes a face consistency determination module, configured to determine whether a face on an identity card detected from the image to be verified is consistent with a face detected from the image to be verified, in a case that the verification passing determination module 630 determines that the object to be verified passes the living body verification. The above description has described the implementation of the identity authentication method according to the embodiment of the present invention, and those skilled in the art can understand the implementation of the identity authentication apparatus and its advantages in combination with the above description of the identity authentication method, and details are not repeated.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

Figure 7 shows a schematic block diagram of a liveness verification system 700 according to one embodiment of the invention. In-vivo authentication system 700 includes an image acquisition device 710, a storage device 720, and a processor 730.

The image acquisition device 710 is used for acquiring an image to be verified. Image capture device 710 is optional and in-vivo verification system 700 may not include image capture device 710.

The storage 720 stores program codes for implementing respective steps in the living body authentication method according to the embodiment of the present invention.

The processor 730 is configured to run the program codes stored in the storage 720 to perform the corresponding steps of the living body authentication method according to the embodiment of the present invention, and is configured to implement the instruction generation module 610, the image to be authenticated obtaining module 620, and the authentication pass determination module 630 in the living body authentication device 600 according to the embodiment of the present invention.

In one embodiment, the program code, when executed by the processor 730, causes the in vivo verification system 700 to perform the following steps: randomly generating a living body action instruction, wherein the living body action instruction is used for indicating an object to be verified to carry out corresponding living body action by holding an identity card; acquiring an image of the object to be verified executing the living body action in real time to obtain an image to be verified; and determining whether the object to be verified passes the in-vivo verification based on the image to be verified.

In one embodiment, the program code, when executed by the processor 730, causes the in vivo authentication system 700 to perform the step of determining whether the object to be authenticated is in vivo authenticated based on the image to be authenticated comprises: performing a plurality of living body judgment operations, wherein the plurality of living body judgment operations include a first living body judgment operation, a second living body judgment operation, and a third living body judgment operation, wherein the first living body judgment operation includes: and judging whether the detected face belongs to a living body or not based on the image to be verified, wherein the second living body judging operation comprises the following steps: and judging whether the detected identity card belongs to a living body or not based on the image to be verified, wherein the third living body judging operation comprises the following steps: comprehensively judging whether the detected face and the detected identity card belong to living bodies on the basis of the image to be verified; and determining whether the object to be verified passes the in-vivo verification according to the judgment result of each in-vivo judgment operation in the plurality of in-vivo judgment operations, if the judgment result of any in-vivo judgment operation is negative, determining that the object to be verified does not pass the in-vivo verification, and otherwise, determining that the object to be verified passes the in-vivo verification.

In one embodiment, after the program code, when executed by the processor 730, causes the living body verification system 700 to perform the step of detecting the face and the identity card in the image to be verified, the program code further causes the living body verification system 700 to perform: extracting a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card from the image to be verified; and the first living body judgment operation includes: judging whether the detected face belongs to a living body based on the face image to be verified; the second living body judgment operation includes: judging whether the detected identity card belongs to a living body based on the identity card image to be verified; and the third living body judgment operation includes: comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the face image to be verified and the identity card image to be verified.
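Extracting the two sub-images amounts to cropping the detector's bounding boxes out of the full frame. The following sketch assumes the image is a nested list of pixel rows and the box is given as pixel coordinates; real systems would use an image library, and the representation here is purely illustrative.

```python
def crop_region(image, box):
    """Extract a sub-image containing only the detected region.

    `image` is a nested list (rows of pixels); `box` is (top, left,
    bottom, right) in pixel coordinates — a simplified stand-in for
    the face or identity card detector output.
    """
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]
```

The same helper would be called once with the face box and once with the identity card box, yielding the face image to be verified and the identity card image to be verified.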

In one embodiment, the program code, when executed by the processor 730, causes the living body verification system 700 to perform the step of comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the face image to be verified and the identity card image to be verified by: inputting the face image to be verified and the identity card image to be verified into a trained first convolutional neural network to obtain the probability that the detected face and the detected identity card as a whole belong to a living body; and determining, according to the probability, whether the detected face and the detected identity card as a whole belong to a living body.
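Deciding from the network's output probability is a thresholding step. In this sketch the trained first convolutional neural network is abstracted as any callable returning a probability, and the 0.5 threshold is an assumed value; the embodiment does not specify one.

```python
def joint_liveness_judgment(face_img, id_img, cnn, threshold=0.5):
    """Feed both crops to the trained first CNN and threshold the result.

    `cnn` is any callable returning P(face and identity card as a whole
    belong to a living body). The threshold is an assumption for this
    sketch, not a value given by the embodiment.
    """
    prob = cnn(face_img, id_img)
    return prob >= threshold
```

In practice the threshold would be tuned on held-out data to trade off false acceptance against false rejection.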

In one embodiment, the program code, when executed by the processor 730, further causes the living body verification system 700 to perform: acquiring training data, wherein the training data comprises a positive sample image and a negative sample image, the positive sample image comprises a real face and a real identity card, and the negative sample image comprises a false face and a real identity card; extracting a positive sample face image containing only a face and a positive sample identity card image containing only an identity card from the positive sample image; extracting a negative sample face image containing only a face and a negative sample identity card image containing only an identity card from the negative sample image; and performing neural network training with the positive sample face image and the positive sample identity card image as positive samples and the negative sample face image and the negative sample identity card image as negative samples, to obtain the first convolutional neural network.
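Assembling the labelled training pairs from the positive and negative sample images can be sketched as below. The `extract` callable (mapping a full sample image to its face and identity card crops) and the 0/1 labelling convention are assumptions made for this illustration.

```python
def build_training_set(positive_images, negative_images, extract):
    """Pair each (face crop, identity card crop) with its liveness label.

    `extract` maps a full sample image to (face_crop, id_crop). Positives
    contain a real face with a real identity card; negatives contain a
    false face with a real identity card, as the embodiment describes.
    """
    samples = []
    for img in positive_images:
        samples.append((extract(img), 1))   # label 1: living body
    for img in negative_images:
        samples.append((extract(img), 0))   # label 0: not a living body
    return samples
```

Note that both classes contain a *real* identity card; the discriminative signal the first convolutional neural network must learn therefore lies mainly in the face region and in how the face and card appear together.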

In one embodiment, before the program code, when executed by the processor 730, causes the living body verification system 700 to perform the step of extracting a positive sample face image containing only a face and a positive sample identity card image containing only an identity card from the positive sample image, the program code further causes the living body verification system 700 to perform: calculating a positive sample scaling ratio required for scaling the face in the positive sample image to a preset size, and scaling the positive sample image according to the positive sample scaling ratio. Before the step of extracting a negative sample face image containing only a face and a negative sample identity card image containing only an identity card from the negative sample image, the program code further causes the living body verification system 700 to perform: calculating a negative sample scaling ratio required for scaling the face in the negative sample image to the preset size, and scaling the negative sample image according to the negative sample scaling ratio. Before the step of extracting a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card from the image to be verified, the program code further causes the living body verification system 700 to perform: calculating a to-be-verified image scaling ratio required for scaling the detected face to the preset size, and scaling the image to be verified according to the to-be-verified image scaling ratio.
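The scaling ratio computation is the same in all three cases: scale the whole image by the factor that brings the detected face to the preset size, so faces are size-normalized before cropping. A minimal sketch, assuming the face size is measured by its height in pixels:

```python
def scaling_ratio(face_height, preset_size):
    """Ratio required to scale the detected face to the preset size."""
    return preset_size / face_height

def scaled_dims(width, height, ratio):
    """Apply the same ratio to the whole image so the face reaches the
    preset size while the rest of the scene stays proportional."""
    return round(width * ratio), round(height * ratio)
```

For example, a 200-pixel face with a 100-pixel preset size yields a ratio of 0.5, so a 640×480 frame is scaled to 320×240 before the face and identity card crops are extracted.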

In one embodiment, after the program code, when executed by the processor 730, causes the living body verification system 700 to perform the step of detecting the face and the identity card in the image to be verified, the program code further causes the living body verification system 700 to perform: outputting a prompt to re-execute the living body verification if no face or no identity card is detected in the image to be verified.

In one embodiment, the living body action includes flipping and/or translating the identity card while blocking the face with the identity card.

In one embodiment, the program code, when executed by the processor 730, causes the living body verification system 700 to perform the step of judging whether the detected face belongs to a living body based on the face image to be verified by: inputting the face image to be verified into a trained second convolutional neural network to judge whether the detected face belongs to a living body.

In one embodiment, the program code, when executed by the processor 730, causes the living body verification system 700 to perform the step of judging whether the detected identity card belongs to a living body based on the identity card image to be verified by: inputting the identity card image to be verified into a trained third convolutional neural network to judge whether the detected identity card belongs to a living body.

In one embodiment, the program code, when executed by the processor 730, causes the living body verification system 700 to perform the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation as follows: the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation are performed in the arranged order, and if the judgment result of any living body judgment operation is negative, execution of the subsequent living body judgment operations is stopped.
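The ordered, early-exit execution described in this embodiment can be sketched as a short loop. The operation list and the returned execution count are illustrative additions; the count simply makes the short-circuit behavior visible.

```python
def run_in_order(ordered_ops, image):
    """Execute judgment operations in the arranged order.

    On the first negative result, stop so that subsequent (possibly more
    expensive) operations are never executed. Returns the overall verdict
    and how many operations were actually run.
    """
    executed = 0
    for op in ordered_ops:
        executed += 1
        if not op(image):
            return False, executed   # stop executing subsequent operations
    return True, executed
```

Ordering cheaper single-image checks before the joint CNN judgment would let most spoof attempts be rejected without paying the full inference cost, which is the practical benefit of stopping early.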

In one embodiment, the plurality of living body judgment operations further includes a fourth living body judgment operation, wherein the fourth living body judgment operation includes: judging, based on the image to be verified, whether the action performed with the detected identity card matches the living body action instruction.
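Matching the performed action against the instruction can be sketched by tracking the identity card's position across frames. Everything concrete here — the (x, y) centre representation, the single "translate" check, and the 30-pixel movement threshold — is an assumption for illustration; the embodiment only states that the action must match the instruction.

```python
def action_matches_instruction(card_positions, instruction):
    """Fourth living body judgment operation (sketch).

    `card_positions` is a list of (x, y) identity card centres over
    successive frames. Only a translation check is sketched; the minimum
    horizontal travel of 30 pixels is an assumed parameter.
    """
    if instruction == "translate":
        xs = [p[0] for p in card_positions]
        return max(xs) - min(xs) > 30
    return False   # other instructions would need their own checks
```

A flip instruction could analogously be checked by detecting the card's front/back side in successive frames rather than by its centre trajectory.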

In one embodiment, the image to be verified is a video, the fourth living body judgment operation is performed based on at least two frames in the video, and the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation are performed based on at least one frame of the at least two frames.
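The frame requirements of this embodiment — at least two frames for the fourth judgment operation, at least one of those frames for the first three — can be made concrete with a simple selection policy. Using the first and last frames for the motion check and the first frame for the single-frame judgments is an assumed policy, not one prescribed by the embodiment.

```python
def select_frames(video_frames):
    """Select frames from the video serving as the image to be verified.

    The fourth judgment operation needs at least two frames; the first,
    second, and third operations run on at least one of those frames.
    """
    if len(video_frames) < 2:
        raise ValueError("the image to be verified must contain >= 2 frames")
    motion_frames = [video_frames[0], video_frames[-1]]   # for the fourth operation
    single_frame = motion_frames[0]                       # for the first three operations
    return motion_frames, single_frame
```

Reusing one of the motion frames for the single-frame judgments, as the embodiment requires, guarantees that all four operations are evaluated on a consistent moment of the captured action.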

Further, according to an embodiment of the present invention, there is also provided a storage medium having stored thereon program instructions that, when executed by a computer or a processor, execute the respective steps of the living body verification method according to an embodiment of the present invention and implement the respective modules of the living body verification device according to an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.

In one embodiment, the computer program instructions, when executed by a computer or processor, may cause the computer or processor to implement the respective functional modules of the living body verification device according to the embodiment of the present invention, and/or may perform the living body verification method according to the embodiment of the present invention.

In one embodiment, the computer program instructions, when executed by a computer, cause the computer to perform the steps of: randomly generating a living body action instruction, wherein the living body action instruction is used for instructing an object to be verified to perform a corresponding living body action while holding an identity card; acquiring an image of the object to be verified performing the living body action in real time to obtain an image to be verified; and determining whether the object to be verified passes the living body verification based on the image to be verified.

In one embodiment, the computer program instructions, when executed by a computer, cause the computer to perform the step of determining whether the object to be verified passes the living body verification based on the image to be verified by: detecting the face and the identity card in the image to be verified; performing a plurality of living body judgment operations, wherein the plurality of living body judgment operations include a first living body judgment operation, a second living body judgment operation, and a third living body judgment operation, wherein the first living body judgment operation includes: judging whether the detected face belongs to a living body based on the image to be verified; the second living body judgment operation includes: judging whether the detected identity card belongs to a living body based on the image to be verified; and the third living body judgment operation includes: comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the image to be verified; and determining whether the object to be verified passes the living body verification according to the judgment result of each of the plurality of living body judgment operations, wherein if the judgment result of any living body judgment operation is negative, it is determined that the object to be verified does not pass the living body verification, and otherwise it is determined that the object to be verified passes the living body verification.

In one embodiment, after the computer program instructions, when executed by the computer, cause the computer to perform the step of detecting the face and the identity card in the image to be verified, the computer program instructions further cause the computer to perform: extracting a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card from the image to be verified; and the first living body judgment operation includes: judging whether the detected face belongs to a living body based on the face image to be verified; the second living body judgment operation includes: judging whether the detected identity card belongs to a living body based on the identity card image to be verified; and the third living body judgment operation includes: comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the face image to be verified and the identity card image to be verified.

In one embodiment, the computer program instructions, when executed by a computer, cause the computer to perform the step of comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the face image to be verified and the identity card image to be verified by: inputting the face image to be verified and the identity card image to be verified into a trained first convolutional neural network to obtain the probability that the detected face and the detected identity card as a whole belong to a living body; and determining, according to the probability, whether the detected face and the detected identity card as a whole belong to a living body.

In one embodiment, the computer program instructions, when executed by a computer, further cause the computer to perform: acquiring training data, wherein the training data comprises a positive sample image and a negative sample image, the positive sample image comprises a real face and a real identity card, and the negative sample image comprises a false face and a real identity card; extracting a positive sample face image containing only a face and a positive sample identity card image containing only an identity card from the positive sample image; extracting a negative sample face image containing only a face and a negative sample identity card image containing only an identity card from the negative sample image; and performing neural network training with the positive sample face image and the positive sample identity card image as positive samples and the negative sample face image and the negative sample identity card image as negative samples, to obtain the first convolutional neural network.

In one embodiment, before the computer program instructions, when executed by a computer, cause the computer to perform the step of extracting a positive sample face image containing only a face and a positive sample identity card image containing only an identity card from the positive sample image, the computer program instructions further cause the computer to perform: calculating a positive sample scaling ratio required for scaling the face in the positive sample image to a preset size, and scaling the positive sample image according to the positive sample scaling ratio. Before the step of extracting a negative sample face image containing only a face and a negative sample identity card image containing only an identity card from the negative sample image, the computer program instructions further cause the computer to perform: calculating a negative sample scaling ratio required for scaling the face in the negative sample image to the preset size, and scaling the negative sample image according to the negative sample scaling ratio. Before the step of extracting a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card from the image to be verified, the computer program instructions further cause the computer to perform: calculating a to-be-verified image scaling ratio required for scaling the detected face to the preset size, and scaling the image to be verified according to the to-be-verified image scaling ratio.

In one embodiment, after the computer program instructions, when executed by the computer, cause the computer to perform the step of detecting the face and the identity card in the image to be verified, the computer program instructions further cause the computer to perform: outputting a prompt to re-execute the living body verification if no face or no identity card is detected in the image to be verified.

In one embodiment, the living body action includes flipping and/or translating the identity card while blocking the face with the identity card.

In one embodiment, the computer program instructions, when executed by a computer, cause the computer to perform the step of judging whether the detected face belongs to a living body based on the face image to be verified by: inputting the face image to be verified into a trained second convolutional neural network to judge whether the detected face belongs to a living body.

In one embodiment, the computer program instructions, when executed by a computer, cause the computer to perform the step of judging whether the detected identity card belongs to a living body based on the identity card image to be verified by: inputting the identity card image to be verified into a trained third convolutional neural network to judge whether the detected identity card belongs to a living body.

In one embodiment, the computer program instructions, when executed by a computer, cause the computer to perform the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation as follows: the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation are performed in the arranged order, and if the judgment result of any living body judgment operation is negative, execution of the subsequent living body judgment operations is stopped.

In one embodiment, the plurality of living body judgment operations further includes a fourth living body judgment operation, wherein the fourth living body judgment operation includes: judging, based on the image to be verified, whether the action performed with the detected identity card matches the living body action instruction.

In one embodiment, the image to be verified is a video, the fourth living body judgment operation is performed based on at least two frames in the video, and the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation are performed based on at least one frame of the at least two frames.

The modules in the living body verification system according to the embodiment of the present invention may be implemented by a processor of an electronic device that performs living body verification according to the embodiment of the present invention executing computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer-readable storage medium of a computer program product according to the embodiment of the present invention are executed by a computer.

According to the living body verification method and device and the identity authentication method and device provided by the embodiments of the present invention, an image of the object to be verified performing a living body action while holding an identity card is acquired, and living body verification is carried out based on the acquired image. Whether the object to be verified belongs to a living body can therefore be judged in combination with the information provided by the identity card during the living body verification process, which improves the accuracy of the living body verification.

Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.

In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.

Furthermore, those skilled in the art will appreciate that while some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.

The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some of the modules in the liveness verification device and the identity authentication device according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering. These words may be interpreted as names.

The above description is merely illustrative of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (27)

1. A living body verification method, comprising:
randomly generating a living body action instruction, wherein the living body action instruction is used for instructing an object to be verified to perform a corresponding living body action while holding an identity card in a hand, and the living body action is that the identity card interacts with a human face and the interaction state changes;
acquiring an image of the object to be verified performing the living body action in real time to obtain an image to be verified, wherein the image to be verified carries interaction information, the interaction information being information formed on the image to be verified by the interaction between the identity card and the human face; and
determining whether the object to be verified passes the living body verification based on the image to be verified.
2. The living body verification method according to claim 1, wherein the determining whether the object to be verified passes the living body verification based on the image to be verified comprises:
detecting the face and the identity card in the image to be verified;
performing a plurality of living body judgment operations, wherein the plurality of living body judgment operations include a first living body judgment operation, a second living body judgment operation, and a third living body judgment operation, wherein the first living body judgment operation includes: judging whether the detected face belongs to a living body based on the image to be verified; the second living body judgment operation includes: judging whether the detected identity card belongs to a living body based on the image to be verified; and the third living body judgment operation includes: comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the image to be verified; and
determining whether the object to be verified passes the living body verification according to the judgment result of each of the plurality of living body judgment operations, wherein if the judgment result of any living body judgment operation is negative, it is determined that the object to be verified does not pass the living body verification, and otherwise it is determined that the object to be verified passes the living body verification.
3. The living body verification method according to claim 2, wherein after the detecting of the face and the identity card in the image to be verified, the living body verification method further comprises: extracting a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card from the image to be verified; and
the first living body judgment operation includes: judging whether the detected face belongs to a living body based on the face image to be verified; the second living body judgment operation includes: judging whether the detected identity card belongs to a living body based on the identity card image to be verified; and the third living body judgment operation includes: comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the face image to be verified and the identity card image to be verified.
4. The living body verification method according to claim 3, wherein the comprehensively judging whether the detected face and the detected identity card as a whole belong to a living body based on the face image to be verified and the identity card image to be verified comprises:
inputting the face image to be verified and the identity card image to be verified into a trained first convolutional neural network to obtain the probability that the detected face and the detected identity card as a whole belong to a living body; and
determining, according to the probability, whether the detected face and the detected identity card as a whole belong to a living body.
5. The living body verification method according to claim 4, further comprising:
acquiring training data, wherein the training data comprises a positive sample image and a negative sample image, the positive sample image comprises a real face and a real identity card, and the negative sample image comprises a false face and a real identity card;
extracting a positive sample face image containing only a face and a positive sample identity card image containing only an identity card from the positive sample image;
extracting a negative sample face image containing only a face and a negative sample identity card image containing only an identity card from the negative sample image; and
performing neural network training with the positive sample face image and the positive sample identity card image as positive samples and the negative sample face image and the negative sample identity card image as negative samples, to obtain the first convolutional neural network.
6. The in-vivo authentication method as set forth in claim 5, wherein, before the extracting, from the positive sample image, a positive sample face image containing only a face and a positive sample identification card image containing only an identification card, the in-vivo authentication method further comprises:
calculating a positive sample scaling ratio required for scaling the face in the positive sample image to a preset size, and scaling the positive sample image according to the positive sample scaling ratio;
before the extracting, from the negative sample image, a negative sample face image containing only a face and a negative sample identification card image containing only an identification card, the in-vivo verification method further includes:
calculating a negative sample scaling ratio required for scaling the face in the negative sample image to the preset size, and scaling the negative sample image according to the negative sample scaling ratio;
before the extracting, from the image to be verified, a face image to be verified including only the detected face and an identity card image to be verified including only the detected identity card, the living body verification method further includes:
calculating the scaling ratio of the image to be verified required for scaling the detected face to the preset size, and scaling the image to be verified according to that scaling ratio.
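The scaling step of claim 6 can be sketched as below; sizes are (width, height) tuples, and the 128-pixel preset face height is an assumption, since the claims only require some common preset size:

```python
def scale_to_preset(image_size, face_height, preset_face_height=128):
    """Compute the scaling ratio that brings the detected face to the
    preset size, then apply the same ratio to the whole image, so that
    faces in all samples end up at a common scale before cropping."""
    ratio = preset_face_height / face_height
    width, height = image_size
    return ratio, (round(width * ratio), round(height * ratio))
```

Applying one ratio per image keeps the face and the identity card in their original spatial relationship, which matters for the whole-image liveness judgment.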
7. The living body verification method according to claim 3, wherein the determining whether the detected face belongs to a living body based on the face image to be verified comprises:
inputting the face image to be verified into a trained second convolutional neural network to judge whether the detected face belongs to a living body.
8. The living body verification method according to claim 3, wherein the determining whether the detected identity card belongs to a living body based on the identity card image to be verified comprises:
inputting the identity card image to be verified into a trained third convolutional neural network to judge whether the detected identity card belongs to a living body.
9. The living body verification method according to claim 2, wherein, after the detecting of the face and the identity card in the image to be verified, the living body verification method further comprises:
if no face is detected or no identity card is detected in the image to be verified, outputting a prompt to re-execute the living body verification.
10. The living body verification method according to claim 2, wherein the performing of the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation includes:
performing the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation in the arranged order, and, if the result of any one of the living body judgment operations is negative, stopping execution of the subsequent living body judgment operations.
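The early-stopping order of claim 10 amounts to a short-circuit loop over the judgment operations; a minimal sketch, with the checks represented as callables:

```python
def run_liveness_checks(checks):
    """Execute the liveness judgment operations in their arranged
    order, stopping at the first one whose result is negative.
    Returns (overall result, number of operations actually run)."""
    executed = 0
    for check in checks:
        executed += 1
        if not check():
            return False, executed
    return True, executed
```

Stopping at the first failure avoids spending inference time on the remaining (typically more expensive) judgments once verification is already doomed.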
11. The living body verification method according to any one of claims 2 to 10, wherein the plurality of living body judgment operations further include a fourth living body judgment operation, wherein the fourth living body judgment operation comprises: judging, based on the image to be verified, whether the action performed by the detected identity card matches the living body action instruction.
12. The living body verification method according to claim 11, wherein the image to be verified is a video, the fourth living body judgment operation is performed based on at least two frames in the video, and the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation are performed based on at least one frame of the at least two frames.
13. The living body verification method according to any one of claims 1 to 10, wherein the living body action comprises flipping and/or translating the identity card while shielding the face with the identity card.
14. An identity authentication method comprising the living body verification method according to any one of claims 1 to 13, wherein the identity authentication method further comprises: judging, in a case where the object to be verified passes the living body verification, whether the face on the identity card detected from the image to be verified is consistent with the face detected from the image to be verified.
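The consistency check of claim 14 is typically realized by comparing feature embeddings of the two faces; the claims do not specify a comparison method, so the cosine-similarity metric, the 0.8 threshold, and the function name below are illustrative assumptions:

```python
import math

def faces_match(embedding_a, embedding_b, threshold=0.8):
    """Judge whether the face cropped from the identity card is
    consistent with the face detected in the image to be verified,
    by cosine similarity of (hypothetical) feature embeddings."""
    dot = sum(a * b for a, b in zip(embedding_a, embedding_b))
    norm_a = math.sqrt(sum(a * a for a in embedding_a))
    norm_b = math.sqrt(sum(b * b for b in embedding_b))
    return dot / (norm_a * norm_b) >= threshold
```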
15. A living body verification device, comprising:
an instruction generation module configured to randomly generate a living body action instruction, wherein the living body action instruction instructs an object to be verified to perform a corresponding living body action while holding an identity card in hand, the living body action being that the identity card interacts with the face and the interaction state changes;
a to-be-verified image acquisition module configured to acquire, in real time, an image of the object to be verified performing the living body action so as to obtain the image to be verified, wherein the image to be verified carries interaction information, namely information formed on the image to be verified due to the interaction between the identity card and the face; and
a verification passing determination module configured to determine, based on the image to be verified, whether the object to be verified passes the living body verification.
16. The living body verification device according to claim 15, wherein the verification passing determination module comprises:
a detection sub-module configured to detect the face and the identity card in the image to be verified;
a living body judgment sub-module for performing a plurality of living body judgment operations, wherein the living body judgment sub-module includes:
a first living body judgment unit configured to perform a first living body judgment operation, wherein the first living body judgment unit includes a face judgment subunit configured to judge whether the detected face belongs to a living body based on the image to be verified;
a second living body judgment unit configured to perform a second living body judgment operation, wherein the second living body judgment unit includes an identity card judgment subunit configured to judge whether the detected identity card belongs to a living body based on the image to be verified;
a third living body judgment unit configured to perform a third living body judgment operation, wherein the third living body judgment unit includes a comprehensive judgment subunit configured to comprehensively judge whether the detected face and the detected identity card as a whole belong to a living body based on the image to be verified; and
a verification passing determination sub-module configured to determine, according to the judgment result of each living body judgment unit in the living body judgment sub-module, whether the object to be verified passes the living body verification, wherein, if the judgment result of any living body judgment unit is negative, the object to be verified is determined not to pass the living body verification, and otherwise the object to be verified is determined to pass the living body verification.
17. The living body verification device according to claim 16, wherein the living body verification device further comprises a first image extraction module configured to extract, from the image to be verified, a face image to be verified containing only the detected face and an identity card image to be verified containing only the detected identity card; and
the face judgment subunit comprises a face judgment component configured to judge whether the detected face belongs to a living body based on the face image to be verified; the identity card judgment subunit comprises an identity card judgment component configured to judge whether the detected identity card belongs to a living body based on the identity card image to be verified; and the comprehensive judgment subunit comprises a comprehensive judgment component configured to comprehensively judge, based on the face image to be verified and the identity card image to be verified, whether the detected face and the detected identity card as a whole belong to a living body.
18. The living body verification device according to claim 17, wherein the comprehensive judgment component comprises:
a first input sub-component configured to input the face image to be verified and the identity card image to be verified into a trained first convolutional neural network to obtain the probability that the detected face and the detected identity card as a whole belong to a living body; and
a living body determining sub-component configured to determine, according to the probability, whether the detected face and the detected identity card as a whole belong to a living body.
19. The living body verification device according to claim 18, wherein the living body verification device further comprises:
a training data acquisition module configured to acquire training data, wherein the training data comprises a positive sample image and a negative sample image, the positive sample image comprises a real face and a real identity card, and the negative sample image comprises a false face and a real identity card;
a second image extraction module configured to extract, from the positive sample image, a positive sample face image containing only a face and a positive sample identity card image containing only an identity card;
a third image extraction module configured to extract, from the negative sample image, a negative sample face image containing only a face and a negative sample identity card image containing only an identity card; and
a training module configured to perform neural network training with the positive sample face image and the positive sample identity card image as positive samples and with the negative sample face image and the negative sample identity card image as negative samples, to obtain the first convolutional neural network.
20. The living body verification device according to claim 19, wherein the living body verification device further comprises:
a first scaling module configured to calculate, before the second image extraction module extracts the positive sample face image containing only a face and the positive sample identity card image containing only an identity card from the positive sample image, a positive sample scaling ratio required for scaling the face in the positive sample image to a preset size, and to scale the positive sample image according to the positive sample scaling ratio;
a second scaling module configured to calculate, before the third image extraction module extracts the negative sample face image containing only a face and the negative sample identity card image containing only an identity card from the negative sample image, a negative sample scaling ratio required for scaling the face in the negative sample image to the preset size, and to scale the negative sample image according to the negative sample scaling ratio; and
a third scaling module configured to calculate, before the first image extraction module extracts the face image to be verified containing only the detected face and the identity card image to be verified containing only the detected identity card from the image to be verified, the scaling ratio of the image to be verified required for scaling the detected face to the preset size, and to scale the image to be verified according to that scaling ratio.
21. The living body verification device according to claim 17, wherein the face judgment component comprises:
a second input sub-component configured to input the face image to be verified into a trained second convolutional neural network to judge whether the detected face belongs to a living body.
22. The living body verification device according to claim 17, wherein the identity card judgment component comprises:
a third input sub-component configured to input the identity card image to be verified into a trained third convolutional neural network to judge whether the detected identity card belongs to a living body.
23. The living body verification device according to claim 16, wherein the living body verification device further comprises:
a prompt output module configured to output a prompt to re-execute the living body verification if no face is detected or no identity card is detected in the image to be verified.
24. The living body verification device according to any one of claims 16 to 23, wherein the living body judgment sub-module further includes a fourth living body judgment unit configured to perform a fourth living body judgment operation, wherein the fourth living body judgment unit includes an identity card action judgment subunit configured to judge, based on the image to be verified, whether the action performed by the detected identity card matches the living body action instruction.
25. The living body verification device according to claim 24, wherein the image to be verified is a video, the fourth living body judgment unit performs the fourth living body judgment operation based on at least two frames in the video, and the first living body judgment unit, the second living body judgment unit, and the third living body judgment unit perform the first living body judgment operation, the second living body judgment operation, and the third living body judgment operation, respectively, based on at least one frame of the at least two frames.
26. The living body verification device according to claim 15, wherein the living body action comprises flipping and/or translating the identity card while shielding the face with the identity card.
27. An identity authentication apparatus comprising the living body verification device according to any one of claims 15 to 26, wherein the identity authentication apparatus further comprises a face consistency judgment module configured to judge, in a case where the verification passing determination module determines that the object to be verified passes the living body verification, whether the face on the identity card detected from the image to be verified is consistent with the face detected from the image to be verified.
CN201610927708.0A 2016-10-31 2016-10-31 Living body verification method and device and identity authentication method and device CN106599772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610927708.0A CN106599772B (en) 2016-10-31 2016-10-31 Living body verification method and device and identity authentication method and device

Publications (2)

Publication Number Publication Date
CN106599772A CN106599772A (en) 2017-04-26
CN106599772B true CN106599772B (en) 2020-04-28

Family

ID=58556164

Country Status (1)

Country Link
CN (1) CN106599772B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273794A (en) * 2017-04-28 2017-10-20 北京建筑大学 Live body discrimination method and device in a kind of face recognition process
CN108804884B (en) * 2017-05-02 2020-08-07 北京旷视科技有限公司 Identity authentication method, identity authentication device and computer storage medium
CN107358157B (en) 2017-06-07 2020-10-02 创新先进技术有限公司 Face living body detection method and device and electronic equipment
CN108875468A (en) * 2017-06-12 2018-11-23 北京旷视科技有限公司 Biopsy method, In vivo detection system and storage medium
CN107316029B (en) * 2017-07-03 2018-11-23 腾讯科技(深圳)有限公司 A kind of living body verification method and equipment
CN107545241A (en) * 2017-07-19 2018-01-05 百度在线网络技术(北京)有限公司 Neural network model is trained and biopsy method, device and storage medium
CN107609494A (en) * 2017-08-31 2018-01-19 北京飞搜科技有限公司 A kind of human face in-vivo detection method and system based on silent formula
CN107679457A (en) * 2017-09-06 2018-02-09 阿里巴巴集团控股有限公司 User identity method of calibration and device
CN107844748B (en) * 2017-10-17 2019-02-05 平安科技(深圳)有限公司 Auth method, device, storage medium and computer equipment
CN109840406A (en) * 2017-11-29 2019-06-04 百度在线网络技术(北京)有限公司 Living body verification method, device and computer equipment
CN107995207A (en) * 2017-12-14 2018-05-04 四川智美高科科技有限公司 A kind of identity Authentication System based on audio and video
CN108182409B (en) * 2017-12-29 2020-11-10 智慧眼科技股份有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN108494778A (en) * 2018-03-27 2018-09-04 百度在线网络技术(北京)有限公司 Identity identifying method and device
CN109145768A (en) * 2018-07-31 2019-01-04 北京旷视科技有限公司 Obtain the method and device of the human face data with face character
CN109543507A (en) * 2018-09-29 2019-03-29 深圳壹账通智能科技有限公司 Identity identifying method, device, terminal device and storage medium
CN109639664A (en) * 2018-12-06 2019-04-16 上海中信信息发展股份有限公司 Login validation method, apparatus and system
US20200210956A1 (en) * 2018-12-28 2020-07-02 Guillaume De Malzac De Sengla Electronic registered mail methods, apparatus, and system
CN109977839A (en) * 2019-03-20 2019-07-05 北京字节跳动网络技术有限公司 Information processing method and device
CN109934191A (en) * 2019-03-20 2019-06-25 北京字节跳动网络技术有限公司 Information processing method and device
CN110110597A (en) * 2019-04-02 2019-08-09 北京旷视科技有限公司 Biopsy method, device and In vivo detection terminal
CN110598710B (en) * 2019-08-21 2020-07-07 阿里巴巴集团控股有限公司 Certificate identification method and device
CN110705350B (en) * 2019-08-27 2020-08-25 阿里巴巴集团控股有限公司 Certificate identification method and device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622588A (en) * 2012-03-08 2012-08-01 无锡数字奥森科技有限公司 Dual-certification face anti-counterfeit method and device
CN103634120A (en) * 2013-12-18 2014-03-12 上海市数字证书认证中心有限公司 Method and system for real-name authentication based on face recognition
CN104361274A (en) * 2014-10-30 2015-02-18 深圳市富途网络科技有限公司 Identity authentication method and system on basis of video identification
CN105447532A (en) * 2015-03-24 2016-03-30 北京天诚盛业科技有限公司 Identity authentication method and device
CN105512632A (en) * 2015-12-09 2016-04-20 北京旷视科技有限公司 In vivo detection method and device
CN105518708A (en) * 2015-04-29 2016-04-20 北京旷视科技有限公司 Method and equipment for verifying living human face, and computer program product
CN105518711A (en) * 2015-06-29 2016-04-20 北京旷视科技有限公司 In-vivo detection method, in-vivo detection system, and computer program product
CN105518713A (en) * 2015-02-15 2016-04-20 北京旷视科技有限公司 Living human face verification method and system, computer program product
CN105612533A (en) * 2015-06-08 2016-05-25 北京旷视科技有限公司 In-vivo detection method, in-vivo detection system and computer program products
CN105930710A (en) * 2016-04-22 2016-09-07 北京旷视科技有限公司 Living body detection method and device
CN105989263A (en) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 Method for authenticating identities, method for opening accounts, devices and systems

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100396924B1 (en) * 2001-02-27 2003-09-03 한국전자통신연구원 Apparatus and Method for Controlling Electrical Apparatus by using Bio-signal
WO2009110323A1 (en) * 2008-03-03 2009-09-11 日本電気株式会社 Living body judgment system, method for judging living body and program for judging living body
US10079827B2 (en) * 2015-03-16 2018-09-18 Ricoh Company, Ltd. Information processing apparatus, information processing method, and information processing system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant after: MEGVII INC.
Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant before: MEGVII INC.
Applicant before: Beijing aperture Science and Technology Ltd.

GR01 Patent grant