CN112200120A - Identity recognition method, living body recognition device and electronic equipment - Google Patents

Identity recognition method, living body recognition device and electronic equipment

Info

Publication number
CN112200120A
CN112200120A CN202011149613.3A
Authority
CN
China
Prior art keywords
target user
living body
area
characteristic data
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011149613.3A
Other languages
Chinese (zh)
Other versions
CN112200120B (en)
Inventor
曹佳炯
丁菁汀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202011149613.3A priority Critical patent/CN112200120B/en
Publication of CN112200120A publication Critical patent/CN112200120A/en
Application granted granted Critical
Publication of CN112200120B publication Critical patent/CN112200120B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The embodiments of the specification provide an identity recognition method, a living body recognition method and device, and an electronic device. The identity recognition method includes: performing image acquisition for a preset living body action on a target user to obtain image characteristic data corresponding to the target user; identifying, based on the obtained image characteristic data, whether the target user performs the preset living body action; performing area authenticity identification on the target user based on the obtained image characteristic data, where the area to be identified for the area authenticity identification is associated with the preset living body action; judging whether the target user is a living object based on the preset living body action recognition result and the area authenticity recognition result; and executing, for the target user, an identity recognition process matching the living body judgment result.

Description

Identity recognition method, living body recognition device and electronic equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an identity recognition method, a living body recognition device, and an electronic device.
Background
Face recognition has been widely used in recent years and is an important technical means for identity authentication. While face recognition brings convenience to users, it also carries hidden risks. For example, an attacker can impersonate an identity using a high-precision head model in order to steal information. Because such a head model is very close to a real person, there is currently no effective interception measure.
Considering that a face recognition attack, once successful, is likely to cause significant loss to a user, it is necessary to develop an identity recognition scheme with a living body attack prevention capability.
Disclosure of Invention
An embodiment of the present specification aims to provide an identity recognition method, a living body recognition device, and an electronic device, which can realize identity recognition with a certain degree of anti-attack capability based on living body judgment.
In order to achieve the above object, the embodiments of the present specification are implemented as follows:
in a first aspect, an identity recognition method is provided, including:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action or not based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
judging whether the target user is a living object or not based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user;
and executing an identity recognition process matched with the living body judgment result of the target user for the target user.
In a second aspect, a living body identification method is provided, including:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action or not based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
and judging whether the target user is a living object or not based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user.
In a third aspect, an identification apparatus is provided, including:
the image acquisition module is used for acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
the living body action recognition module is used for recognizing whether the target user carries out the preset living body action or not based on the obtained image characteristic data corresponding to the target user; and
the area authenticity identification module is used for carrying out area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, and the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
the living body judgment module is used for judging whether the target user is a living body object or not based on a preset living body action identification result and an area authenticity identification result corresponding to the target user;
and the identity recognition module executes an identity recognition process matched with the living body judgment result of the target user for the target user.
In a fourth aspect, an electronic device is provided comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action or not based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
judging whether the target user is a living object or not based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user;
and executing an identity recognition process matched with the living body judgment result of the target user for the target user.
In a fifth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action or not based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
judging whether the target user is a living object or not based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user;
and executing an identity recognition process matched with the living body judgment result of the target user for the target user.
In a sixth aspect, there is provided a living body identification device comprising:
the image acquisition module is used for acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
the living body action recognition module is used for recognizing whether the target user carries out the preset living body action or not based on the obtained image characteristic data corresponding to the target user; and
the area authenticity identification module is used for carrying out area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, and the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
and the living body judgment module is used for judging whether the target user is a living body object or not based on a preset living body action identification result and an area authenticity identification result corresponding to the target user.
In a seventh aspect, an electronic device is provided that includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action or not based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
and judging whether the target user is a living object or not based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user.
In an eighth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action or not based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
and judging whether the target user is a living object or not based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user.
According to the solutions of the embodiments of the specification, when identifying the identity of the target user, image acquisition for the preset living body action is first performed on the target user. The preset living body action recognition and the area authenticity identification are then carried out on the target user according to the obtained image characteristic data, and whether the target user is a living object is judged from the two recognition results, so that an identity recognition process matching the living body judgment result is executed for the target user. For example, when the target user is not judged to be a living body, the identity recognition is judged to have failed, thereby providing the capability to intercept identity recognition attacks based on head models.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in this specification, and those skilled in the art can obtain other drawings from them without any creative effort.
Fig. 1 is a first flowchart of an identity recognition method provided in an embodiment of the present disclosure.
Fig. 2 is a schematic flow chart of a second identification method provided in an embodiment of the present disclosure.
Fig. 3 is a schematic flowchart of a living body identification method provided in an embodiment of the present specification.
Fig. 4 is a schematic structural diagram of an identification apparatus provided in an embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of a living body identification device provided in an embodiment of the present specification.
Fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of this specification.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present specification, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
As mentioned above, for face recognition technology, the main potential safety hazard is that an attacker uses a high-precision head model to impersonate an identity in order to steal information. Such a head model is very close to a real person, making interception extremely difficult. For this reason, the present disclosure aims to provide an identity recognition scheme that realizes a certain degree of attack resistance based on living body judgment.
Fig. 1 is a flowchart of an identity recognition method according to an embodiment of the present disclosure. The method shown in fig. 1 may be performed by a corresponding apparatus described below, and includes the following steps:
s102, image acquisition aiming at preset living body actions is carried out on the target user, and image characteristic data corresponding to the target user are obtained.
The living body action is used for verifying whether a target user to be identified is a living body object. In the present specification embodiment, the preset living body action may include, but is not limited to: tongue-spitting action, mouth-pinching action, nose-pinching action, and the like. Taking the tongue-spitting motion as an example, the step may perform multi-frame image acquisition on the tongue-spitting motion of the target user, and extract image feature data related to the tongue-spitting motion from partial images in the image.
Furthermore, the modality of image acquisition is not specifically limited herein. By way of exemplary introduction, this step may perform image acquisition of at least one of an infrared light image, a visible color light image, and a 3D image of the target user. That is, the method of the embodiments of the present specification can acquire multi-modal image characteristic data.
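As an illustrative sketch only (the record fields and modality labels below are assumptions, not part of the specification), a multi-modal capture can be represented as a simple container holding whichever of the three image types were acquired:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical container for one multi-modal capture of the target user.
# The three fields mirror the modalities named above: infrared light,
# visible color light, and 3D (depth) images.
@dataclass
class CapturedFrame:
    ir: Optional[bytes] = None     # infrared light image
    rgb: Optional[bytes] = None    # visible color light image
    depth: Optional[bytes] = None  # 3D image

def available_modalities(frame: CapturedFrame) -> List[str]:
    """Report which modalities were actually acquired for this frame."""
    present = []
    for name in ("ir", "rgb", "depth"):
        if getattr(frame, name) is not None:
            present.append(name)
    return present
```

Downstream recognition can then operate on any subset of modalities that the capture device supports.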
S104, whether the target user performs the preset living body action is identified based on the acquired image characteristic data corresponding to the target user.
The embodiments of the specification may use a deep learning model to automatically identify whether the target user performs the preset living body action.
Specifically, the obtained image characteristic data corresponding to the target user is input into a preset living body action recognition model to obtain the preset living body action recognition result corresponding to the target user. The preset living body action recognition model is trained based on image characteristic data of sample users with respect to the preset living body action, together with the sample users' preset living body action recognition classification labels.
In particular, in order to give the preset living body action recognition model better recognition capability, the embodiments of the specification can train it on sample users of different classifications.
Taking the simplest two-class case of white samples and black samples as an example, the embodiments of the present specification may mark whether a sample user is a white sample user or a black sample user through the preset living body action recognition classification label.
Positive-example training of the preset living body action recognition model is then performed with image characteristic data of white sample users performing the preset living body action, and negative-example training is performed with image characteristic data of black sample users not performing the preset living body action.
In the training process, after the image characteristic data of a sample user is input into the preset living body action recognition model, the training result given by the model can be obtained. This training result is the model's prediction of whether the sample user performed the preset living body action (equivalently, whether it is a black sample user or a white sample user), and may differ from the true result indicated by the preset living body action recognition classification label. In the embodiments of the present disclosure, the error between the training result and the true result may be calculated based on a loss function derived from maximum likelihood estimation, and the parameters of the preset living body action recognition model may be adjusted (for example, the weights of the model's lower-layer vectors) to reduce this error, thereby achieving the training effect.
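The error calculation described above can be sketched with binary cross-entropy (the loss that maximum likelihood estimation yields for a two-class label) and a single gradient step. The one-weight logistic model below is a deliberately minimal stand-in for the deep model, and all names are illustrative assumptions:

```python
import math

def bce_loss(p_pred: float, label: int) -> float:
    """Binary cross-entropy between the model's predicted probability that
    the sample user performed the preset living body action and the
    white-sample (1) / black-sample (0) label."""
    eps = 1e-12  # clamp to avoid log(0)
    p = min(max(p_pred, eps), 1.0 - eps)
    return -(label * math.log(p) + (1 - label) * math.log(1.0 - p))

def sgd_step(w: float, x: float, label: int, lr: float = 0.1) -> float:
    """One parameter update reducing the error, for p = sigmoid(w * x)."""
    p = 1.0 / (1.0 + math.exp(-w * x))
    grad = (p - label) * x  # dBCE/dw for the logistic model
    return w - lr * grad
```

A real training loop would repeat such steps over the sample users' image characteristic data until the loss stops improving.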
In addition, as another possible approach, a more refined classification of sample users may be used to train the model. For example, sample users may be classified according to the probability with which they perform the preset living body action: high-probability sample users (probability of 80% to 100%), ordinary-probability sample users (probability of 40% to 79%), and low-probability sample users (probability of 0% to 39%). The three classes are distinguished through the preset living body action recognition classification labels. In the training process, the training result given by the model is its prediction of whether a sample user belongs to the high-probability, ordinary-probability, or low-probability class. Similarly, with the aim of reducing the error between the training result and the true result corresponding to the classification label, the parameters of the preset living body action recognition model are adjusted (for example, the weights of the model's lower-layer vectors) to achieve the training effect. Since the training principle is the same, it is not repeated here.
It should be noted that the model type of the preset living body action recognition model is not unique, and is not specifically limited herein. It should be understood that any deep learning model with a classification function can be trained by the method illustrated above to serve as the preset living body action recognition model in the embodiments of the present specification.
S106, area authenticity identification is performed on the target user based on the obtained image characteristic data corresponding to the target user, where the area to be identified for the area authenticity identification is associated with the preset living body action.
The area authenticity identification is used to verify whether the area to be identified of the target user is consistent with that of a real living object. In this embodiment, the area to be identified for the tongue-out action may include the tongue area, the area to be identified for the mouth-pinching action may include the mouth area, and the area to be identified for the nose-pinching action may include the nose area. Taking the tongue-out action as an example, this step recognizes the authenticity of the target user's tongue based on the obtained image characteristic data of the target user.
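The pairing of each preset living body action with its area to be identified, as enumerated above, can be sketched as a plain lookup table (the key strings are illustrative assumptions):

```python
# Action -> associated area to be identified, per the correspondence above.
ACTION_TO_AREA = {
    "tongue_out": "tongue",   # tongue-out action     -> tongue area
    "pinch_mouth": "mouth",   # mouth-pinching action -> mouth area
    "pinch_nose": "nose",     # nose-pinching action  -> nose area
}

def area_for_action(action: str) -> str:
    """Return the area whose authenticity must be verified for an action."""
    return ACTION_TO_AREA[action]
```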
Similarly, the embodiments of the present specification may use a deep learning model to automatically perform area authenticity identification on the target user.
Specifically, the obtained image characteristic data corresponding to the target user is input into the area authenticity identification model associated with the preset living body action to obtain the area authenticity identification result corresponding to the target user. The area authenticity identification model is trained based on sample users' area image characteristic data with respect to the preset living body action and the sample users' area authenticity identification classification labels, where the area corresponding to the area image characteristic data is the area to be identified for the area authenticity identification.
The area authenticity identification model can be trained by referring to the training method of the preset living body action recognition model. Since the principle is the same, it is not described here again.
S108, whether the target user is a living object is judged based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user.
Specifically, in this step, if it is determined that the target user makes a preset living body action and the authenticity of the area corresponding to the target user is identified as true, the target user can be determined as a living body object; otherwise, it is determined as a non-living object.
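The decision rule of S108 thus reduces to a conjunction of the two recognition results; a minimal sketch:

```python
def is_living_object(action_recognized: bool, area_is_real: bool) -> bool:
    """Living object only if the preset living body action was performed
    AND the associated area is identified as real; otherwise non-living."""
    return action_recognized and area_is_real
```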
And S110, performing an identity recognition process matched with the living body judgment result of the target user on the target user.
Specifically, the embodiments of the present specification may perform identity recognition through one of two policies.
One is to initiate face-based identity recognition for the target user if the target user is determined to be a living object. If the target user is judged to be a non-living object (there is a possibility of head-model or image impersonation), the identity recognition of the user is directly judged to have failed.
The other is to use the area authenticity identification result directly as the identity recognition result. That is, a legitimate user is taken as the sample user when training the area authenticity identification model, so that the model gains the capability of identifying whether the area to be identified of the target user is consistent with that of the legitimate user. If the living body judgment result of the target user indicates a living object, and the area authenticity identification result given by the model specifically indicates that the target user's area to be identified is the real area of the legitimate user, the identity recognition of the target user is judged successful; otherwise, it is judged failed.
When the method of the embodiments of the specification recognizes the identity of the target user, image acquisition for the preset living body action is first performed on the target user, and the preset living body action recognition and the area authenticity identification are carried out according to the obtained image characteristic data. Whether the target user is a living object is then judged from the two recognition results, so that an identity recognition process matching the living body judgment result is executed for the target user. For example, when the target user is not judged to be a living body, the identity recognition is judged to have failed, thereby providing the capability to intercept identity recognition attacks based on head models.
The method of the embodiments of the present disclosure is described below with reference to practical application scenarios.
The method of the embodiments herein may identify a living object using the tongue-out action. Specifically, living object judgment can be realized through tongue-out action recognition and tongue area authenticity recognition for the target user.
It should be understood that, when images of the tongue-out action are collected from the target user, if the target user cooperates and performs the tongue-out action, the image characteristic data obtained from the acquisition should include both facial image characteristic data and tongue image characteristic data of the target user.
Here, deep learning models that accept image-format data as input may be trained in advance to obtain the tongue-out action recognition model and the tongue area authenticity identification model. The subsequent recognition process, shown in fig. 2, includes: tongue-out guidance, image acquisition, tongue-out action recognition, tongue segmentation, tongue authenticity judgment, living body judgment, and identity recognition.
Stage one: tongue-in-place guidance & image acquisition
The target user can be guided to perform the tongue opening action on the target user interaction interface, and meanwhile, the target user is subjected to multi-frame facial image acquisition.
Stage two: tongue-out action detection
When the collected face images reach 30 frames, 15 frames are selected and input into the pre-trained tongue-out action recognition model to obtain the tongue-out action probability P1 of the target user; P1 is the tongue-out action recognition result of the target user.
And a third stage: tongue segmentation and authenticity judgment stage:
and performing area cropping on the selected face image of 15 frames, and segmenting to obtain a tongue area image. And then, inputting the 15 frames of tongue region images into a pre-trained tongue region authenticity identification model to obtain real probabilities O1 and O2 … … O5 corresponding to the 15 frames of tongue region images, and averaging O1 to O5 to obtain a probability P2 that the target user corresponds to a real tongue, wherein P2 is a tongue region authenticity identification result of the target user.
And a fourth stage: living body judgment
And performing weighted calculation on P1 and P2 to obtain a probability value P of whether the target user is the living object, if P reaches a preset threshold value T, determining that the target user is the living object, and otherwise, determining that the target user is a non-living object.
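The fusion of stage four can be sketched as a weighted sum compared against a threshold. The patent gives neither the weights nor T, so the equal weights and the 0.7 threshold below are assumptions for illustration:

```python
def liveness_probability(p1: float, p2: float,
                         w1: float = 0.5, w2: float = 0.5) -> float:
    """P = w1*P1 + w2*P2, combining the tongue-out action score P1 and the
    tongue area authenticity score P2."""
    return w1 * p1 + w2 * p2

def is_live(p1: float, p2: float, threshold: float = 0.7) -> bool:
    """Living object iff the fused probability P reaches the threshold T."""
    return liveness_probability(p1, p2) >= threshold
```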
Stage five: identity recognition
The facial images already acquired from the target user are reused for face identity recognition.
As described above, when the living body judgment result of the target user indicates a living body object, face identity recognition may be initiated for the target user; otherwise, identity recognition of the target user may be directly determined to have failed.
Alternatively, if the living body judgment result of the target user indicates a living body object and the tongue region authenticity recognition result specifically indicates that the target user has the real tongue of a legal user, identity recognition of the target user is determined to be successful; otherwise, it is determined to have failed.
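The stricter policy in the alternative above can be sketched as a three-condition check; the boolean flag names are my own, not terms from this description.

```python
# Sketch of the stage-five decision: identity recognition succeeds only when
# the user is judged live AND the authenticity model attributes the real
# tongue to a legal (enrolled) user.

def identity_decision(is_live_object, region_is_real, region_is_legal_user):
    if is_live_object and region_is_real and region_is_legal_user:
        return "success"
    return "failure"
```

Under this policy a high-precision head model fails at the liveness check, while a live impostor fails at the legal-user check.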
Face identity recognition itself belongs to the prior art and is not described here again.
As described above, this embodiment trains a living body recognition model in advance using image feature data of sample users performing the tongue-opening action, giving the model the ability to recognize a living body from such data. When the identity recognition method is performed on a target user, image feature data of the target user performing the tongue-opening action is acquired and input into the living body recognition model to judge whether the target user is a living body, and a matching identity recognition process is then executed according to the living body recognition result. For example, when the target user is not recognized as a living body, identity recognition is determined to have failed, which provides the ability to intercept identity recognition attacks based on head models. Because the tongue-opening action is difficult to reproduce on a high-precision head model and the tongue itself is hard to imitate, a high interception capability can be achieved, which is of great practical value for identity recognition.
In addition, in correspondence with the method shown in fig. 1, an example of the present specification also provides a living body identification method. Fig. 3 is a schematic diagram of a living body identification method according to an embodiment of the present disclosure, including the following steps:
S302, image acquisition for a preset living body action is performed on a target user to obtain image feature data corresponding to the target user.
S304, whether the target user performs the preset living body action is recognized based on the obtained image feature data corresponding to the target user.
S306, area authenticity recognition is performed on the target user based on the obtained image feature data corresponding to the target user, where the area to be recognized corresponding to the area authenticity recognition is associated with the preset living body action.
S308, whether the target user is a living body object is judged based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user.
The living body recognition method in this embodiment of the specification performs image acquisition on a target user for a preset living body action, performs preset living body action recognition and area authenticity recognition on the target user based on the obtained image feature data, and then judges whether the target user is a living body object according to the two recognition results.
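The four steps S302 to S308 can be strung together as one pipeline; the three callables below stand in for the camera and the two trained models, which this sketch does not implement, and the fusion weights and threshold are illustrative assumptions.

```python
# Sketch of the living body recognition method S302-S308 as a pipeline.

def living_body_recognition(capture, action_model, region_model,
                            w1=0.5, w2=0.5, threshold=0.7):
    frames = capture()                     # S302: image acquisition
    p1 = action_model(frames)              # S304: living body action recognition
    p2 = region_model(frames)              # S306: area authenticity recognition
    return w1 * p1 + w2 * p2 >= threshold  # S308: living body judgment

result = living_body_recognition(
    capture=lambda: ["frame"] * 15,
    action_model=lambda fs: 0.9,
    region_model=lambda fs: 0.8,
)
```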
In addition, corresponding to the method shown in fig. 1, an embodiment of the present specification further provides an identity recognition apparatus. Fig. 4 is a schematic structural diagram of an identification apparatus 400 according to an embodiment of the present disclosure, including:
the image acquisition module 410 is used for acquiring an image of a target user aiming at a preset living body action and acquiring image characteristic data corresponding to the target user;
the living body action recognition module 420 is used for recognizing whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
the area authenticity identification module 430 is used for carrying out area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
the living body judgment module 440 is configured to judge whether the target user is a living body object based on a preset living body action recognition result and a region authenticity recognition result corresponding to the target user;
the identity recognition module 450 is used for executing, for the target user, an identity recognition process matched with the living body judgment result of the target user.
When the identity recognition apparatus in this embodiment of the specification performs identity recognition on a target user, it first performs image acquisition on the target user for a preset living body action, and performs preset living body action recognition and area authenticity recognition on the target user based on the obtained image feature data. It then judges whether the target user is a living body object according to the two recognition results, so as to execute an identity recognition process matched with the living body judgment result. For example, when the target user is not judged to be a living body, identity recognition is determined to have failed, which provides the ability to intercept identity recognition attacks based on head models.
Optionally, the living body action recognition module 420 specifically inputs the obtained image feature data corresponding to the target user into a preset living body action recognition model to obtain a preset living body action recognition result corresponding to the target user, where the preset living body action recognition model is obtained by training based on image feature data of the sample user performing the preset living body action and the preset living body action recognition classification label of the sample user.
Optionally, the area authenticity identification module 430 specifically inputs the obtained image feature data corresponding to the target user into an area authenticity recognition model associated with the preset living body action to obtain an area authenticity recognition result corresponding to the target user, where the area authenticity recognition model is obtained by training based on area image feature data of the sample user performing the preset living body action and the area authenticity recognition classification label of the sample user, and the area corresponding to the area image feature data is the area to be recognized for the area authenticity recognition.
Wherein the preset living body action comprises at least one of: a tongue-spitting action, a mouth-pinching action, and a nose-pinching action. The area to be identified for the area authenticity identification corresponding to the tongue spitting action comprises a tongue area, the area to be identified for the area authenticity identification corresponding to the mouth pinching action comprises a mouth area, and the area to be identified for the area authenticity identification corresponding to the nose pinching action comprises a nose area.
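The action-to-region association listed above can be written as a simple lookup table; the English key names are my own, as the description lists the pairs only in prose.

```python
# Each preset living body action maps to the region whose authenticity
# is checked by the area authenticity recognition step.
ACTION_TO_REGION = {
    "stick_out_tongue": "tongue",
    "pinch_mouth": "mouth",
    "pinch_nose": "nose",
}

def region_for(action):
    """Return the area to be recognized for a given preset living body action."""
    return ACTION_TO_REGION[action]
```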
Optionally, when the identity recognition module 450 is executed, if the living body judgment result of the target user indicates a living body object, it initiates face identity recognition for the target user; otherwise, it determines that identity recognition of the target user has failed. The identity recognition module 450 performs face identity recognition on the target user based on the facial image feature data of the target user acquired by the image acquisition module 410.
Optionally, the sample users used to train the area authenticity recognition model include legal users. When the identity recognition module 450 is executed, if the living body judgment result of the target user indicates a living body object, and the area authenticity recognition result provided by the area authenticity recognition model specifically indicates that the area to be recognized for the target user is the real area of a legal user, it determines that identity recognition of the target user is successful; otherwise, it determines that identity recognition of the target user has failed.
Obviously, the identification apparatus in the embodiment of the present specification may be used as the execution subject of the identification method shown in fig. 1, and thus the functions of the identification method implemented in fig. 1 and fig. 2 can be implemented. Since the principle is the same, the detailed description is omitted here.
In addition, corresponding to the method shown in fig. 3, an embodiment of the present specification further provides a living body identification apparatus. Fig. 5 is a schematic structural diagram of a living body identification apparatus 500 according to an embodiment of the present specification, including:
the image acquisition module 510 performs image acquisition on a target user according to a preset living body action, and obtains image feature data corresponding to the target user.
a living body action recognition module 520, configured to recognize whether the target user performs the preset living body action based on the obtained image feature data corresponding to the target user; and
and the area authenticity identification module 530 is configured to perform area authenticity identification on the target user based on the obtained image feature data corresponding to the target user, where an area to be identified corresponding to the area authenticity identification is associated with the preset living body action.
The living body judgment module 540 is configured to judge whether the target user is a living body object based on a preset living body action recognition result and a region authenticity recognition result corresponding to the target user.
The living body recognition apparatus in this embodiment of the specification can perform image acquisition on a target user for a preset living body action, perform preset living body action recognition and area authenticity recognition on the target user based on the obtained image feature data, and then judge whether the target user is a living body object according to the two recognition results.
Obviously, the living body recognition apparatus according to this embodiment of the specification can be used as the execution subject of the living body recognition method shown in fig. 3, and can thus implement the functions of that method. Since the principle is the same, the detailed description is omitted here.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present specification. Referring to fig. 6, at the hardware level, the electronic device includes a processor and optionally further includes an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a Random-Access Memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
Optionally, the processor reads a corresponding computer program from the non-volatile memory into the memory and then runs the computer program, so as to form the above-mentioned identity recognition apparatus on a logic level. The processor is used for executing the program stored in the memory and is specifically used for executing the following operations:
Image acquisition for a preset living body action is performed on a target user to obtain image feature data corresponding to the target user.
Whether the target user performs the preset living body action is recognized based on the obtained image feature data corresponding to the target user.
Area authenticity recognition is performed on the target user based on the obtained image feature data corresponding to the target user, where the area to be recognized corresponding to the area authenticity recognition is associated with the preset living body action.
Whether the target user is a living body object is judged based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user.
An identity recognition process matched with the living body judgment result of the target user is executed for the target user.
It should be understood that the electronic device of the embodiments of the present specification can implement the functions of the above-described identification apparatus in the embodiments shown in fig. 1 and fig. 2. Since the principle is the same, no further description is provided herein.
Alternatively, the processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, forming the above-mentioned living body identification apparatus on a logic level. The processor is used for executing the program stored in the memory, and is specifically used for executing the following operations:
Image acquisition for a preset living body action is performed on a target user to obtain image feature data corresponding to the target user.
Whether the target user performs the preset living body action is recognized based on the obtained image feature data corresponding to the target user.
Area authenticity recognition is performed on the target user based on the obtained image feature data corresponding to the target user, where the area to be recognized corresponding to the area authenticity recognition is associated with the preset living body action.
Whether the target user is a living body object is judged based on the preset living body action recognition result and the area authenticity recognition result corresponding to the target user.
It should be understood that the electronic device of the present specification can implement the functions of the living body identification apparatus described above in the embodiment shown in fig. 3. Since the principle is the same, no further description is provided herein.
The identity recognition method disclosed in the embodiment shown in fig. 1 or the living body recognition method disclosed in the embodiment shown in fig. 3 may be applied to a processor and implemented by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above methods may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or perform the methods, steps, and logic blocks disclosed in the embodiments of this specification. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of this specification may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
Of course, besides the software implementation, the electronic device in this specification does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Furthermore, the present specification embodiments also propose a computer-readable storage medium storing one or more programs, the one or more programs including instructions.
Optionally, the instructions, when executed by a portable electronic device including a plurality of application programs, can cause the portable electronic device to perform the identity recognition method of the embodiment shown in fig. 1, and is specifically configured to perform the following method:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
judging whether the target user is a living object or not based on a preset living action recognition result and an area authenticity recognition result corresponding to the target user;
and executing an identity recognition process matched with the living body judgment result of the target user for the target user.
It should be understood that the above-mentioned instructions, when executed by a portable electronic device including a plurality of application programs, can enable the above-mentioned identification apparatus to implement the functions of the embodiments shown in fig. 1 and fig. 2, and will not be described in detail herein.
Alternatively,
the instructions, when executed by a portable electronic device comprising a plurality of application programs, are capable of causing the portable electronic device to perform the living body identification method of the embodiment shown in fig. 3, and in particular for performing the steps of:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
and judging whether the target user is a living object or not based on a preset living action recognition result and an area authenticity recognition result corresponding to the target user.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification. Moreover, all other embodiments obtained by a person skilled in the art without making any inventive step shall fall within the scope of protection of this document.

Claims (14)

1. An identity recognition method, comprising:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein an area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
judging whether the target user is a living object or not based on a preset living action recognition result and an area authenticity recognition result corresponding to the target user;
and executing an identity recognition process matched with the living body judgment result of the target user for the target user.
2. The method of claim 1, wherein
identifying whether the target user performs the preset living body action or not based on the obtained image feature data corresponding to the target user, wherein the identification comprises the following steps:
inputting the obtained image characteristic data corresponding to the target user into a preset living body action recognition model to obtain a preset living body action recognition result corresponding to the target user, wherein the preset living body action recognition model is obtained by training image characteristic data corresponding to preset living body actions of the sample user and preset living body action recognition classification labels of the sample user.
3. The method of claim 1, wherein
based on the obtained image characteristic data corresponding to the target user, performing area authenticity identification on the target user, including:
inputting the obtained image characteristic data corresponding to the target user into a region authenticity identification model associated with the preset living body action to obtain a region authenticity identification result corresponding to the target user, wherein the region authenticity identification model is obtained by training region image characteristic data corresponding to the preset living body action and a region authenticity identification classification label of the sample user based on the sample user, and the region corresponding to the region image characteristic data is a region to be identified corresponding to the region authenticity identification.
4. The method of claim 1, wherein
the preset living body action includes at least one of: a tongue spitting action, a mouth pinching action and a nose pinching action;
the area to be identified for the area authenticity identification corresponding to the tongue spitting action comprises a tongue area, the area to be identified for the area authenticity identification corresponding to the mouth pinching action comprises a mouth area, and the area to be identified for the area authenticity identification corresponding to the nose pinching action comprises a nose area.
5. The method of claim 1, wherein
executing an identity recognition process matched with the living body judgment result of the target user on the target user, wherein the identity recognition process comprises the following steps:
and if the living body judgment result of the target user indicates that the target user is a living body object, initiating face identity recognition to the target user, otherwise, judging that the identity recognition of the target user fails.
6. The method of claim 5, wherein
the image feature data obtained by image acquisition comprises the facial image feature data of the target user, and the face identity recognition of the target user comprises the following steps:
and carrying out face identity recognition on the target user based on the facial image feature data of the target user obtained by image acquisition.
7. The method of claim 3, wherein
the sample users training the area authenticity identification model comprise legal users, and the identity identification process matched with the living body judgment result of the target user is executed to the target user, and the identity identification process comprises the following steps:
if the living body judgment result of the target user indicates that the living body object is a living body object, and the area authenticity identification result corresponding to the target user provided by the area authenticity identification model specifically indicates that the area to be identified corresponding to the area authenticity identification of the target user is the real area of the legal user, the target user identity identification is judged to be successful, otherwise, the target user identity identification is judged to be failed.
8. A living body identification method, comprising:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
and judging whether the target user is a living object or not based on a preset living action recognition result and an area authenticity recognition result corresponding to the target user.
9. An identification device comprising:
the image acquisition module is used for acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
the living body action recognition module is used for recognizing whether the target user carries out the preset living body action based on the obtained image characteristic data corresponding to the target user; and
the area authenticity identification module is used for carrying out area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, and the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
the living body judgment module is used for judging whether the target user is a living body object or not based on a preset living body action identification result and an area authenticity identification result corresponding to the target user;
and the identity recognition module executes an identity recognition process matched with the living body judgment result of the target user for the target user.
10. An electronic device includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being executed by the processor to:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
judging whether the target user is a living object or not based on a preset living action recognition result and an area authenticity recognition result corresponding to the target user;
and executing an identity recognition process matched with the living body judgment result of the target user for the target user.
11. A computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image of a target user aiming at a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified corresponding to the area authenticity identification is associated with the preset living body action;
judging whether the target user is a living object or not based on a preset living action recognition result and an area authenticity recognition result corresponding to the target user;
and executing an identity recognition process matched with the living body judgment result of the target user for the target user.
12. A living body identification device, comprising:
an image acquisition module, which is used for acquiring an image of a target user with respect to a preset living body action to obtain image characteristic data corresponding to the target user;
a living body action recognition module, which is used for identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
an area authenticity identification module, which is used for performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified in the area authenticity identification is associated with the preset living body action; and
a living body judgment module, which is used for judging whether the target user is a living object based on a preset living body action recognition result and an area authenticity recognition result corresponding to the target user.
13. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, performs the steps of:
acquiring an image of a target user with respect to a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified in the area authenticity identification is associated with the preset living body action; and
judging whether the target user is a living object based on a preset living body action recognition result and an area authenticity recognition result corresponding to the target user.
14. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of:
acquiring an image of a target user with respect to a preset living body action to obtain image characteristic data corresponding to the target user;
identifying whether the target user performs the preset living body action based on the obtained image characteristic data corresponding to the target user; and
performing area authenticity identification on the target user based on the obtained image characteristic data corresponding to the target user, wherein the area to be identified in the area authenticity identification is associated with the preset living body action; and
judging whether the target user is a living object based on a preset living body action recognition result and an area authenticity recognition result corresponding to the target user.
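Taken together, the claims above describe a two-branch liveness check: one branch verifies that the user performed the prompted action, the other verifies the authenticity of the image area associated with that action, and the user is judged a living object only when both branches pass. The sketch below illustrates that decision structure only; the feature fields, scoring functions, and thresholds are hypothetical stand-ins, not the patent's actual models.

```python
# Illustrative sketch of the claimed two-branch liveness check.
# The feature extractors, scores, and the 0.5 thresholds are
# hypothetical stand-ins, not the patent's actual implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class FrameFeatures:
    """Image characteristic data extracted from one captured frame."""
    action_features: List[float]  # features describing the prompted action
    region_patch: List[float]     # features of the area tied to that action


def action_performed(frames: List[FrameFeatures]) -> bool:
    """Branch 1: did the target user perform the preset living body action?
    Here: a hypothetical threshold on an averaged motion score."""
    score = sum(f.action_features[0] for f in frames) / max(len(frames), 1)
    return score > 0.5


def region_is_real(frames: List[FrameFeatures]) -> bool:
    """Branch 2: authenticity of the area associated with the action,
    e.g. texture of the mouth area for a mouth-opening prompt."""
    score = sum(f.region_patch[0] for f in frames) / max(len(frames), 1)
    return score > 0.5


def is_living_object(frames: List[FrameFeatures]) -> bool:
    """Living body judgment: both recognition results must pass."""
    return action_performed(frames) and region_is_real(frames)


def identify(frames: List[FrameFeatures]) -> str:
    """Run the identity recognition process matched to the liveness
    verdict (reduced here to a simple proceed/deny decision)."""
    if is_living_object(frames):
        return "proceed to identity recognition"
    return "deny"
```

A replayed video or a mask could reproduce the prompted motion but fail the area-authenticity branch (or vice versa), which is why the conjunction of the two results, rather than either alone, drives the living-body judgment.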
CN202011149613.3A 2020-10-23 2020-10-23 Identity recognition method, living body recognition device and electronic equipment Active CN112200120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011149613.3A CN112200120B (en) 2020-10-23 2020-10-23 Identity recognition method, living body recognition device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112200120A true CN112200120A (en) 2021-01-08
CN112200120B CN112200120B (en) 2023-06-30

Family

ID=74011044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011149613.3A Active CN112200120B (en) 2020-10-23 2020-10-23 Identity recognition method, living body recognition device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112200120B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335719A (en) * 2015-10-29 2016-02-17 北京汉王智远科技有限公司 Living body detection method and device
CN105426815A (en) * 2015-10-29 2016-03-23 北京汉王智远科技有限公司 Living body detection method and device
CN107133608A (en) * 2017-05-31 2017-09-05 天津中科智能识别产业技术研究院有限公司 Identity authorization system based on In vivo detection and face verification
CN107330370A (en) * 2017-06-02 2017-11-07 广州视源电子科技股份有限公司 A kind of brow furrows motion detection method and device and vivo identification method and system
EP3319009A1 (en) * 2015-07-02 2018-05-09 Boe Technology Group Co. Ltd. Living body recognition apparatus, living body recognition method and living body authentication system
CN108875676A (en) * 2018-06-28 2018-11-23 北京旷视科技有限公司 Biopsy method, apparatus and system
CN109886084A (en) * 2019-01-03 2019-06-14 广东数相智能科技有限公司 Face authentication method, electronic equipment and storage medium based on gyroscope
CN111353404A (en) * 2020-02-24 2020-06-30 支付宝实验室(新加坡)有限公司 Face recognition method, device and equipment
CN111767880A (en) * 2020-07-03 2020-10-13 腾讯科技(深圳)有限公司 Living body identity recognition method and device based on facial features and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3319009A1 (en) * 2015-07-02 2018-05-09 Boe Technology Group Co. Ltd. Living body recognition apparatus, living body recognition method and living body authentication system
CN105335719A (en) * 2015-10-29 2016-02-17 北京汉王智远科技有限公司 Living body detection method and device
CN105426815A (en) * 2015-10-29 2016-03-23 北京汉王智远科技有限公司 Living body detection method and device
CN107133608A (en) * 2017-05-31 2017-09-05 天津中科智能识别产业技术研究院有限公司 Identity authorization system based on In vivo detection and face verification
CN107330370A (en) * 2017-06-02 2017-11-07 广州视源电子科技股份有限公司 A kind of brow furrows motion detection method and device and vivo identification method and system
CN108875676A (en) * 2018-06-28 2018-11-23 北京旷视科技有限公司 Biopsy method, apparatus and system
US20200005061A1 (en) * 2018-06-28 2020-01-02 Beijing Kuangshi Technology Co., Ltd. Living body detection method and system, computer-readable storage medium
CN109886084A (en) * 2019-01-03 2019-06-14 广东数相智能科技有限公司 Face authentication method, electronic equipment and storage medium based on gyroscope
CN111353404A (en) * 2020-02-24 2020-06-30 支付宝实验室(新加坡)有限公司 Face recognition method, device and equipment
CN111767880A (en) * 2020-07-03 2020-10-13 腾讯科技(深圳)有限公司 Living body identity recognition method and device based on facial features and storage medium

Also Published As

Publication number Publication date
CN112200120B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US10650259B2 (en) Human face recognition method and recognition system based on lip movement information and voice information
KR102324468B1 (en) Method and apparatus for face verification
CN105335731B (en) Fingerprint identification method and device and terminal equipment
CN108280332B (en) Biological characteristic authentication, identification and detection method, device and equipment of mobile terminal
WO2020244071A1 (en) Neural network-based gesture recognition method and apparatus, storage medium, and device
CN110163096B (en) Person identification method, person identification device, electronic equipment and computer readable medium
US20180046848A1 (en) Method of recognizing fingerprints, apparatus and terminal devices
CN111881726A (en) Living body detection method and device and storage medium
CN111626371A (en) Image classification method, device and equipment and readable storage medium
CN109635625B (en) Intelligent identity verification method, equipment, storage medium and device
KR20170045813A (en) Detecting method and apparatus of biometrics region for user authentication
CN107025425B (en) Authentication method and device and method and device for training recognizer
CN110688878B (en) Living body identification detection method, living body identification detection device, living body identification detection medium, and electronic device
CN112699811B (en) Living body detection method, living body detection device, living body detection apparatus, living body detection storage medium, and program product
CN113591921A (en) Image recognition method and device, electronic equipment and storage medium
CN111046804A (en) Living body detection method, living body detection device, electronic equipment and readable storage medium
CN112200120A (en) Identity recognition method, living body recognition device and electronic equipment
CN106650363A (en) Identity recognition method and system
CN116129484A (en) Method, device, electronic equipment and storage medium for model training and living body detection
CN111339517B (en) Voiceprint feature sampling method, user identification method, device and electronic equipment
CN113222809B (en) Picture processing method and device for realizing privacy protection
CN111723651B (en) Face recognition method, face recognition device and terminal equipment
CN114596638A (en) Face living body detection method, device and storage medium
CN109376585B (en) Face recognition auxiliary method, face recognition method and terminal equipment
CN112487885A (en) Payment method, payment device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant