CN111860394A - Action living body recognition method based on pose estimation and motion detection - Google Patents

Action living body recognition method based on pose estimation and motion detection

Info

Publication number: CN111860394A
Application number: CN202010736220.6A
Authority: CN (China)
Prior art keywords: face, motion, recognition, identification, user
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 吕文勇, 王小东, 赵小诣, 程序
Current assignee: Chengdu New Hope Finance Information Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: Chengdu New Hope Finance Information Co Ltd
Application filed 2020-07-28 by Chengdu New Hope Finance Information Co Ltd; published 2020-10-30
Priority to CN202010736220.6A (the priority date is an assumption and is not a legal conclusion)


Classifications

    • G06V40/161 — Human faces: detection; localisation; normalisation
    • G06V40/172 — Human faces: classification, e.g. identification
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V40/45 — Spoof detection: detection of the body part being alive
    (all under G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data)

Abstract

The invention discloses an action living body recognition method based on pose estimation and motion detection, relating to the technical field of biometric recognition. The user is asked to cooperate by performing actions so that a real person can be distinguished from an attack, and because the set of action instructions is random, whether the user is a living body can be judged more accurately. Before the liveness actions are performed, to improve the one-time pass rate for real persons, light recognition, face-in-circle recognition, face-facing-screen recognition and face occlusion recognition are performed for the user. A pose estimation PSECNN model estimates the user's pitch, yaw and roll angles and recognizes head shaking, nodding and head raising, so the user's head actions are recognized accurately. For blink and mouth-opening recognition, MCNN and ECNN models replace the mainstream feature-point-based detection, securing the recognition accuracy for blinking and mouth opening. During liveness detection the action instructions are random, both the actions and their order must be correct, and anti-reproduction recognition is performed, so security is higher.

Description

Action living body recognition method based on pose estimation and motion detection
Technical Field
The invention relates to the technical field of biometric recognition, in particular to an action living body recognition method based on pose estimation and motion detection.
Background
Living body detection studies how to judge whether the face captured by the current camera comes from a real living person or from an attack (a face photo, a video, a mask, and the like). Face spoofing takes many forms, and in today's commercial environment, where face recognition is an important means of identity verification, such forgeries pose a potential threat to users of face-recognition identity systems. Both research and industry have therefore taken many measures to detect face spoofing attacks. Liveness detection is fundamentally a binary classification problem: finding the differences between living and non-living images, mainly along the following aspects:
(1) Judgment based on differences in texture features.
(2) Non-rigid motion deformation.
(3) Differences in material.
(4) Image or video quality.
(5) 3D information.
(6) Interactive information.
With image and video quality constantly improving, the texture difference between living and non-living subjects in a single 2D frame has become small. Distinguishing a living user by RGB-based silent detection alone is essentially infeasible: both the false-acceptance and false-rejection rates are high. Motion-based interactive liveness recognition has therefore become the mainstream. A human can act or speak on request, but a forged face can hardly do so. Based on this consideration, interactive face liveness detection methods have been proposed: the system interacts with the user through action instructions and distinguishes a living face from a prosthetic face in front of the camera by judging whether the user accurately completes the specified actions. Common action instructions include nodding, raising the head, blinking, closing the eyes, covering the eyes, raising the eyebrows, frowning, smiling, sticking out the tongue, opening the mouth, reading a passage of text, and so on.
Because existing interactive face liveness detection uses a fixed set of action instructions, the algorithm can be defeated by pre-recording a video in which the instructions are completed. With well-designed interactions, interactive face liveness detection effectively weakens the influence of intra-class variation among prosthetic faces on algorithm performance, so it achieves a high recognition rate and good generality and is widely used in real business scenarios such as finance and medical care.
The action living body is currently the liveness detection technique with high accuracy and good user experience. By asking the user to cooperate in making certain facial actions, it verifies whether an identity-forgery attack is underway. The actions currently designed include blinking, opening the mouth, nodding, raising the head and shaking the head left and right; once the user is detected to have completed the actions, the user is considered a living body and passes authentication, otherwise authentication is refused. It should be emphasized that under this strategy the front end only captures the actions and does not distinguish a video from a real person.
However, the currently mainstream action liveness techniques have the following problems:
(1) No occlusion detection is performed. If the user wears a mask or sunglasses, or covers the mouth with a hand, the recognition accuracy for mouth opening and blinking deteriorates, and the user is told of the failure only after completing the liveness actions; occlusion is not discovered in advance and the user is not reminded in time.
(2) There is no real-time recognition that the face is aligned with the on-screen circle, so the face may drift out of the camera while the actions are performed; action recognition then fails because no face is detected, or because the action does not meet the specification.
(3) Blink and mouth-opening recognition is based on feature points and distance calculation, and a fixed threshold adapts poorly to large or small eyes, squinting, and mouths of different sizes. A real person's blink or mouth opening may go unrecognized, while an attack picture or printed paper that never blinks or opens its mouth is recognized as doing so; nor can a dynamic per-person threshold be set.
(4) No face anti-reproduction recognition is performed after action recognition. In particular, for a video attack, if an attacker records a video of another person that happens to match the liveness action sequence, the liveness recognition is defeated.
(5) No light recognition is performed, so action recognition easily fails when the user's lighting is too dark or too strong.
Disclosure of Invention
The present invention aims to provide an action living body recognition method based on pose estimation and motion detection, which can alleviate the above problems.
In order to alleviate the above problems, the technical scheme adopted by the invention is as follows:
An action living body recognition method based on pose estimation and motion detection comprises the following steps:
real-time screen-alignment recognition, comprising: shooting a face photo with a camera and performing light recognition, face-in-circle recognition, face-facing-screen recognition and face occlusion recognition on the photo; when the light in the photo is suitable, the face is within the circle, the face is facing the screen and the face is unoccluded, real-time screen-alignment recognition passes, otherwise it does not pass;
random motion recognition, performed after real-time screen-alignment recognition passes, comprising: randomly generating two or more action instructions from blinking, mouth opening, head shaking, head raising and nodding; the user completes the actions in front of the camera according to the instructions, producing an action video; if every action pose and the order between actions in the video are correct, random motion recognition passes, otherwise it does not pass; the head-shake, head-raise and nod actions are recognized with a pose estimation PSECNN model, and the blink and mouth-opening actions are recognized with an MCNN model and an ECNN model respectively;
face reproduction recognition, performed after random motion recognition passes, used to judge whether the action video is a re-shot copy; if not, face reproduction recognition passes, otherwise it does not pass;
and face comparison, performed after face reproduction recognition passes, used to compare the face in the action video with the user's face stored in the database; if the similarity threshold is reached, face comparison passes, otherwise it does not pass.
Further, during light recognition, the light is divided into strong light, normal light and weak light in order of decreasing intensity; if the light in the face photo is recognized as normal light, the light is considered suitable.
Further, the light in the face photo is recognized through a three-class classification model with a strong-light decision threshold and a weak-light decision threshold.
Further, the three-class classification model is an LCNN model; if the light in the face photo is recognized as strong or weak light, the user is prompted to perform liveness recognition under normal light.
Further, the face-in-circle recognition comprises:
if (face width / photo width) > (circle width / camera width), or (face height / photo height) > (circle height / camera height), it is judged that the face size exceeds the circle size;
if (distance from the face to the photo top / photo height) > (distance from the circle to the camera top / camera height), and ((distance from the face to the photo top + face height) / photo height) < ((distance from the circle to the camera top + circle height) / camera height), the vertical position of the face is suitable;
if (distance from the face to the photo's left edge / photo width) > (distance from the circle to the camera's left edge / camera width), and ((distance from the face to the photo's left edge + face width) / photo width) < ((distance from the circle to the camera's left edge + circle width) / camera width), the horizontal position of the face is suitable;
if both the vertical and horizontal positions of the face are suitable, the face in the photo is judged to be within the circle; otherwise the user is prompted to place the face in the circle.
Further, during face-facing-screen recognition, the face pose in the photo is judged by the pose estimation PSECNN model, which is provided with a pitch-angle threshold range, a roll-angle threshold range and a yaw-angle threshold range; if the face's pitch angle does not exceed the pitch range, its roll angle does not exceed the roll range, and its yaw angle does not exceed the yaw range, the face in the photo is judged to be facing the screen, otherwise the user is prompted to face the screen.
Further, face occlusion recognition extracts the face from the photo and then recognizes occlusions with an object detection FOD model.
Further, the occluders that the object detection FOD model can identify include glasses, masks, hats and scarves; if an occluder is identified, the user is prompted to remove it.
Further, in random motion recognition, 3 action instructions are generated randomly each time.
Further, face reproduction recognition comprises: obtaining face video frames from the action video; if the border of a playing device is visible in a frame, face reproduction recognition does not pass; otherwise the face is cropped from the frame, the cropped face is recognized based on texture, and whether the action video is a re-shot copy is judged.
Compared with the prior art, the invention has the following beneficial effects:
1) the method has the user cooperate with actions to distinguish a real person from an attack, and the set of action instructions is random, so an attacker can hardly defeat the liveness detection algorithm by pre-recording a video, and whether the user is a living body can be judged more accurately;
2) before the liveness actions, to improve the one-time pass rate for real persons, real-time screen-alignment recognition is performed for the user, comprising light recognition, face-in-circle recognition, face-facing-screen recognition and face occlusion recognition; liveness recognition proceeds only after real-time screen-alignment recognition passes, which improves user experience and the one-time pass rate;
3) the method uses the pose estimation PSECNN model to estimate the user's pitch, yaw and roll angles and recognizes head shaking, nodding and head raising, so the user's head actions are recognized accurately;
4) for blink and mouth-opening recognition, the MCNN and ECNN models replace mainstream feature-point-based detection, which secures the recognition accuracy for blinking and mouth opening, prevents misrecognition caused by inaccurate feature-point regression on printed paper or under flickering illumination, and sidesteps the difficulty of setting a dynamic recognition threshold;
5) besides random action instructions, the invention requires both correct actions and a correct action order, and also performs anti-reproduction recognition on the action video, preventing forged-video action attacks.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flowchart of an action living body recognition method based on pose estimation and motion detection according to an embodiment;
FIG. 2 is a pose estimation diagram in the embodiment;
FIG. 3 is a flowchart of action living body attack prevention in the embodiment;
FIG. 4 is an effect diagram of light recognition, face-facing-screen recognition, face occlusion recognition and face-in-circle recognition in the embodiment;
FIG. 5 is an effect diagram of random motion recognition of the action living body in the embodiment;
FIG. 6 is an effect diagram of face occlusion recognition in the embodiment;
FIG. 7 is an effect diagram of the face actions in the embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
Referring to figs. 1 to 7, this embodiment discloses an action living body recognition method based on pose estimation and motion detection, which comprises real-time screen-alignment recognition, random motion recognition, face reproduction recognition and face comparison.
1. Real-time screen-alignment recognition
The recognition process comprises: shooting a face photo with a camera and performing light recognition, face-in-circle recognition, face-facing-screen recognition and face occlusion recognition on the photo; when the light in the photo is suitable, the face is within the circle, the face is facing the screen and the face is unoccluded, real-time screen-alignment recognition passes, otherwise it does not pass.
1) Light ray identification
In this process, the light is divided into strong light, normal light and weak light in order of decreasing intensity; if the light in the face photo is recognized as normal light, the light is considered suitable.
The light in the face photo is recognized through a three-class classification model with a strong-light decision threshold and a weak-light decision threshold; the model is a trained LCNN model. If the light in the photo is recognized as strong or weak light, the user is prompted to perform liveness recognition under normal light: as shown in fig. 4-1, the prompt says the light is too dark and asks the user to retry in a brightly lit environment.
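To make the dual-threshold, three-class decision concrete, here is a minimal sketch in Python. The patent's classifier is a trained LCNN; the mean-brightness heuristic and both threshold values below are stand-in assumptions used only to illustrate the strong/normal/weak decision and the prompt flow.

```python
import cv2
import numpy as np

def classify_light(face_photo_bgr, dark_thresh=60.0, bright_thresh=190.0):
    """Three-way light decision: 'weak', 'normal', or 'strong'.

    The patent uses a trained LCNN three-class model; the mean-gray
    heuristic and the two thresholds here are stand-in assumptions that
    only illustrate the strong/weak dual-threshold decision.
    """
    gray = cv2.cvtColor(face_photo_bgr, cv2.COLOR_BGR2GRAY)
    mean_brightness = float(np.mean(gray))
    if mean_brightness < dark_thresh:
        return "weak"
    if mean_brightness > bright_thresh:
        return "strong"
    return "normal"

def light_recognition(face_photo_bgr):
    label = classify_light(face_photo_bgr)
    if label != "normal":
        # Mirrors the prompt in fig. 4-1: retry under normal light.
        return False, "Light is too dark or too strong; please retry under normal light"
    return True, ""
```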
2) Face-in-circle recognition
This recognition uses the camera's width and height, the radius of the circle on the camera screen together with its distances from the top and the left edge, the width and height of the captured photo, and the face's size together with its distances from the photo's top and left edge to judge whether the face is within the circle. The algorithm is as follows:
if (face width / photo width) > (circle width / camera width), or (face height / photo height) > (circle height / camera height), it is judged that the face size exceeds the circle size;
if (distance from the face to the photo top / photo height) > (distance from the circle to the camera top / camera height), and ((distance from the face to the photo top + face height) / photo height) < ((distance from the circle to the camera top + circle height) / camera height), the vertical position of the face is suitable;
if (distance from the face to the photo's left edge / photo width) > (distance from the circle to the camera's left edge / camera width), and ((distance from the face to the photo's left edge + face width) / photo width) < ((distance from the circle to the camera's left edge + circle width) / camera width), the horizontal position of the face is suitable;
if both the vertical and horizontal positions of the face are suitable, the face in the photo is judged to be within the circle; otherwise the user is prompted, as shown in fig. 4-2, "please place the face in a circle".
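The normalized-coordinate inequalities above map directly to code. This sketch implements exactly those three checks; the dict field names (`w`, `h`, `top`, `left`) and the use of `screen` for the camera-view dimensions are illustrative assumptions.

```python
def face_in_circle(face, photo, circle, screen):
    """Face-in-circle check following the embodiment's inequalities.

    `face`, `photo`, `circle`, `screen` are dicts; the field names are
    illustrative assumptions. All comparisons are made on normalized
    coordinates (fractions of the photo / screen size), as in the text.
    """
    # Size check: the face must not be relatively larger than the circle.
    if (face["w"] / photo["w"] > circle["w"] / screen["w"] or
            face["h"] / photo["h"] > circle["h"] / screen["h"]):
        return False, "face larger than circle"

    # Vertical position: the face's top and bottom must fall inside
    # the circle's normalized vertical extent.
    top_ok = (face["top"] / photo["h"] > circle["top"] / screen["h"] and
              (face["top"] + face["h"]) / photo["h"] <
              (circle["top"] + circle["h"]) / screen["h"])

    # Horizontal position: the same test on the left/right extents.
    left_ok = (face["left"] / photo["w"] > circle["left"] / screen["w"] and
               (face["left"] + face["w"]) / photo["w"] <
               (circle["left"] + circle["w"]) / screen["w"])

    if top_ok and left_ok:
        return True, ""
    return False, "please place the face in a circle"
```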
3) Face-facing-screen recognition
During face-facing-screen recognition, the face pose in the photo is judged by the pose estimation PSECNN model, which is provided with a pitch-angle threshold range, a roll-angle threshold range and a yaw-angle threshold range. If the face's pitch angle does not exceed the pitch range, its roll angle does not exceed the roll range, and its yaw angle does not exceed the yaw range, the face in the photo is judged to be facing the screen; otherwise the user is prompted to face the screen. As shown in figs. 4-3, the face is not facing the screen, so "please face the screen" is prompted.
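A minimal sketch of the threshold-range test, assuming pitch/yaw/roll angles already estimated by a pose model such as the PSECNN described above; the numeric ranges are illustrative assumptions, not values from the patent.

```python
def facing_screen(pitch, yaw, roll,
                  pitch_range=(-15.0, 15.0),
                  yaw_range=(-20.0, 20.0),
                  roll_range=(-20.0, 20.0)):
    """True if the estimated head pose lies within all three threshold
    ranges. The angles would come from the pose estimation model; the
    numeric ranges here are illustrative assumptions.
    """
    return (pitch_range[0] <= pitch <= pitch_range[1] and
            yaw_range[0] <= yaw <= yaw_range[1] and
            roll_range[0] <= roll <= roll_range[1])
```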
4) Face occlusion recognition
The trained object detection FOD model treats occlusion as an object detection problem: for example, if glasses are detected on the user, the user is considered occluded. Occlusion recognition runs on the face after it has been cropped out, so objects outside the face region are not treated as occlusions.
The trained FOD model can identify occluders such as glasses, masks, hats and scarves for the user. If an occluder is identified, the user is prompted to remove it; if it is not removed, the random motion recognition error rate is high, and even if random motion recognition passes, face comparison will fail.
As shown in fig. 4-4, a user wearing a mask is recognized during occlusion recognition and is therefore prompted "please remove the mask". Fig. 6 shows the effect of face occlusion recognition in the embodiment.
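The prompt logic can be sketched as a mapping over the detector's output. The FOD model itself is the patent's trained detector and is assumed here to return a list of occluder labels for the cropped face; the label strings and prompt texts are illustrative.

```python
# Hypothetical prompt mapping over the FOD model's detections.
PROMPTS = {
    "mask": "please remove the mask",
    "glasses": "please remove the glasses",
    "hat": "please remove the hat",
    "scarf": "please remove the scarf",
}

def occlusion_check(detected_labels):
    """Return (passed, prompt). Only objects detected on the cropped
    face count as occlusions, matching the embodiment."""
    for label in detected_labels:
        if label in PROMPTS:
            return False, PROMPTS[label]
    return True, ""
```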
2. Random motion recognition
This recognition is performed after real-time screen-alignment recognition passes and comprises: randomly generating two or more action instructions from blinking, mouth opening, head shaking, head raising and nodding; the user completes the actions in front of the camera according to the instructions, producing an action video; if every action pose and the order between actions in the video are correct, random motion recognition passes, otherwise it does not pass. The head-shake, head-raise and nod actions are recognized with the pose estimation PSECNN model, and the blink and mouth-opening actions are recognized with the MCNN and ECNN models respectively.
In this embodiment, 3 action instructions are generated randomly each time; random motion recognition defends against attacks such as masks, videos, photos, printed photos, photos with the face cut out, and bent photos.
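A minimal sketch of the random instruction generation and the identity-plus-order check, assuming a per-action recognizer that yields a time-ordered list of accepted actions; the function and label names are hypothetical.

```python
import random

ACTIONS = ["blink", "open_mouth", "shake_head", "raise_head", "nod"]

def generate_instructions(k=3):
    """Randomly draw k distinct action instructions (the embodiment
    uses 3 per session)."""
    return random.sample(ACTIONS, k)

def verify_sequence(instructions, recognized_actions):
    """Pass only if the recognized actions match the instructions in
    both identity and order. `recognized_actions` is assumed to be the
    time-ordered list of actions the per-action recognizers accepted."""
    return recognized_actions == instructions
```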
In this embodiment, after the user faces the screen and starts the action liveness process, a trained deep-learning pose estimation model recognizes the user's actions. The head-shake action uses the yaw and roll angles, which range from -90° to 90°: regardless of whether the user turns left or right, the head shake is considered passed if the head deflects to one side by a sufficient degree, a face is present throughout the video, and there is no up-and-down shaking; nodding and head raising are judged in the same way. Figs. 2(a), 2(b) and 2(c) show the pose estimation diagrams for raising the head, nodding and shaking the head respectively. The model adopted in this embodiment is optimized, simple and efficient, with only 3M parameters.
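The yaw-excursion test can be sketched as below, assuming per-frame yaw angles from the pose model and per-frame face-detection flags; the 30° threshold is an illustrative assumption. An analogous check on the pitch trace would implement the "no up-and-down shaking" condition, and nodding/head raising would swap in the pitch angle.

```python
def detect_shake(yaw_trace, face_present, threshold_deg=30.0):
    """Head-shake check over per-frame yaw angles (range -90..90 deg).

    Passes if the head deflects past `threshold_deg` to either side
    while a face is present in every frame, matching the embodiment's
    description; the threshold value is an illustrative assumption.
    `face_present` is a parallel list of per-frame face-detected flags.
    """
    if not all(face_present):
        return False  # the face must stay in view for the whole clip
    # Either side counts: the user may turn left or right.
    return any(abs(yaw) >= threshold_deg for yaw in yaw_trace)
```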
During blink and mouth-opening recognition, the model performs frame-extraction analysis on the collected user video: if the eyes go from open to closed and there is no left-right or up-down shaking, the blink action is considered passed, otherwise it is considered an attack; mouth opening is judged on the same principle. The MCNN and ECNN models are likewise optimized, simple and efficient, with only 3M parameters. Fig. 5 shows the effect of random motion recognition of the action living body: figs. 5-1 and 5-2 prompt raising the head, fig. 5-3 prompts blinking, fig. 5-4 prompts waiting a moment for authentication, and so on. Fig. 7 shows the effect of the face actions in this embodiment.
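A sketch of the blink decision as an open-closed-open transition over per-frame eye states, assuming an ECNN-style classifier that labels each sampled frame "open" or "closed"; mouth opening would reuse the same pattern with MCNN-style mouth states. This is what makes the approach threshold-free compared with feature-point distances.

```python
def detect_blink(eye_states):
    """Blink check over per-frame eye states from an open/closed
    classifier (labels assumed to be 'open'/'closed').

    A blink is an open -> closed -> open transition across the sampled
    frames, which avoids feature-point distance thresholds entirely.
    """
    saw_open_before = False
    saw_closed = False
    for state in eye_states:
        if state == "open" and not saw_closed:
            saw_open_before = True
        elif state == "closed" and saw_open_before:
            saw_closed = True
        elif state == "open" and saw_closed:
            return True  # full open-closed-open cycle observed
    return False
```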
3. Face reproduction recognition
This recognition is performed after random motion recognition passes and judges whether the action video is a re-shot copy; if not, face reproduction recognition passes, otherwise it does not pass.
Face reproduction recognition comprises: obtaining face video frames from the action video; if the border of a playing device is visible in a frame, it is judged a video attack and face reproduction recognition fails.
If no border is visible in the frames, it is difficult to tell by eye whether it is a video attack. The face is then cropped from the frame for reproduction recognition: a depth estimation model and a texture model are trained and fused, a multi-model decision is made, the cropped face is recognized in this way, and whether the action video is a re-shot copy is judged.
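A sketch of the two-stage decision, assuming the border check, a depth-estimation score and a texture score are already computed per video. The patent says only that the two models are fused for a multi-model decision; the weighted average and every numeric value here are assumptions.

```python
def reproduction_check(frame_has_border, depth_score, texture_score,
                       w_depth=0.5, w_texture=0.5, live_thresh=0.5):
    """Two-stage reproduction (re-shooting) check.

    Stage 1: any visible playing-device border in a frame means a
    video attack. Stage 2: fuse a depth-estimation score and a texture
    score on the cropped face. The weighted-average fusion and all
    numeric values are illustrative assumptions; scores are assumed in
    [0, 1], higher meaning more likely live.
    """
    if frame_has_border:
        return False
    fused = w_depth * depth_score + w_texture * texture_score
    return fused >= live_thresh
```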
4. Face comparison
This comparison is performed after face reproduction recognition passes and compares the face in the action video with the user's face stored in the database; if the similarity threshold is reached, face comparison passes, otherwise it does not pass.
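A sketch of the comparison step, assuming face embeddings from some recognition model and cosine similarity; the embedding approach and the 0.6 threshold are assumptions, since the patent specifies only that a similarity threshold must be reached.

```python
import numpy as np

def face_match(video_embedding, db_embedding, sim_thresh=0.6):
    """Compare the face from the action video against the user's
    stored face via cosine similarity of embeddings. The embedding
    extractor and the threshold value are illustrative assumptions.
    """
    a = np.asarray(video_embedding, dtype=np.float64)
    b = np.asarray(db_embedding, dtype=np.float64)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim >= sim_thresh
```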
As shown in fig. 3, the action living body attack-prevention scheme of this embodiment sets up 4 layers of barriers: real-time screen-alignment recognition, random motion recognition, face reproduction recognition and face comparison. These 4 barriers block malicious attacks, so a genuine user passes with a better experience while an attacking user is rejected.
In the random motion recognition of this embodiment, besides using the action poses and the order between actions as the pass condition, a time limit for completing the actions and a limit on the number of authentication attempts can be added: an action-completion time threshold and an authentication-count threshold are set. If the user takes longer than the time threshold to complete an action instruction, the current authentication's random motion recognition does not pass; and if the number of authentication attempts exceeds the count threshold, the authentication interface is no longer provided.
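These two limits can be sketched as simple session state; both default values are illustrative assumptions.

```python
import time

class LivenessSession:
    """Per-user limits sketched from the paragraph above; both
    threshold values are illustrative assumptions."""

    def __init__(self, action_time_limit_s=10.0, max_attempts=5):
        self.action_time_limit_s = action_time_limit_s
        self.max_attempts = max_attempts
        self.attempts = 0

    def begin_attempt(self):
        """Return False once the attempt threshold is exceeded, i.e.
        the authentication interface is no longer provided."""
        if self.attempts >= self.max_attempts:
            return False
        self.attempts += 1
        return True

    def within_time(self, action_started_at):
        """True if the action finished inside the time threshold."""
        return (time.time() - action_started_at) <= self.action_time_limit_s
```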
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An action living body recognition method based on pose estimation and motion detection, characterized by comprising the following steps:
real-time screen-alignment recognition, comprising: shooting a face photo with a camera and performing light recognition, face-in-circle recognition, face-facing-screen recognition and face occlusion recognition on the photo; when the light in the photo is suitable, the face is within the circle, the face is facing the screen and the face is unoccluded, real-time screen-alignment recognition passes, otherwise it does not pass;
random motion recognition, performed after real-time screen-alignment recognition passes, comprising: randomly generating two or more action instructions from blinking, mouth opening, head shaking, head raising and nodding; the user completes the actions in front of the camera according to the instructions, producing an action video; if every action pose and the order between actions in the video are correct, random motion recognition passes, otherwise it does not pass; the head-shake, head-raise and nod actions are recognized with a pose estimation PSECNN model, and the blink and mouth-opening actions are recognized with an MCNN model and an ECNN model respectively;
face reproduction recognition, performed after random motion recognition passes, used to judge whether the action video is a re-shot copy; if not, face reproduction recognition passes, otherwise it does not pass;
and face comparison, performed after face reproduction recognition passes, used to compare the face in the action video with the user's face stored in the database; if the similarity threshold is reached, face comparison passes, otherwise it does not pass.
2. The action living body recognition method based on pose estimation and motion detection as claimed in claim 1, wherein during light recognition the light is divided into strong light, normal light and weak light in order of decreasing intensity, and the light is considered suitable if the light in the face photo is recognized as normal light.
3. The action living body recognition method based on pose estimation and motion detection as claimed in claim 2, wherein the light in the face photo is recognized through a three-class classification model with a strong-light decision threshold and a weak-light decision threshold.
4. The action living body recognition method based on pose estimation and motion detection as claimed in claim 3, wherein the three-class classification model is an LCNN model, and if the light in the face photo is recognized as strong or weak light, the user is prompted to perform liveness recognition under normal light.
5. The action living body recognition method based on pose estimation and motion detection as claimed in claim 1, wherein the face-in-circle recognition comprises:
if (face width / photo width) > (circle width / camera width), or (face height / photo height) > (circle height / camera height), it is judged that the face size exceeds the circle size;
if (distance from the face to the photo top / photo height) > (distance from the circle to the camera top / camera height), and ((distance from the face to the photo top + face height) / photo height) < ((distance from the circle to the camera top + circle height) / camera height), the vertical position of the face is suitable;
if (distance from the face to the photo's left edge / photo width) > (distance from the circle to the camera's left edge / camera width), and ((distance from the face to the photo's left edge + face width) / photo width) < ((distance from the circle to the camera's left edge + circle width) / camera width), the horizontal position of the face is suitable;
if both the vertical and horizontal positions of the face are suitable, the face in the photo is judged to be within the circle; otherwise the user is prompted to place the face in the circle.
6. The action living body recognition method based on pose estimation and motion detection as claimed in claim 1, wherein during face-facing-screen recognition the face pose in the photo is judged by the pose estimation PSECNN model, which is provided with a pitch-angle threshold range, a roll-angle threshold range and a yaw-angle threshold range; if the face's pitch angle does not exceed the pitch range, its roll angle does not exceed the roll range, and its yaw angle does not exceed the yaw range, the face in the photo is judged to be facing the screen, otherwise the user is prompted to face the screen.
7. The action living body recognition method based on pose estimation and motion detection as claimed in claim 1, wherein face occlusion recognition extracts the face from the face photo and then performs recognition with an object detection FOD model.
8. The action living body recognition method based on pose estimation and motion detection as claimed in claim 7, wherein the occluders the object detection FOD model can identify include glasses, masks, hats and scarves, and if an occluder is identified, the user is prompted to remove it.
9. The action living body recognition method based on pose estimation and motion detection as claimed in claim 1, wherein in the random motion recognition 3 action instructions are generated randomly each time.
10. The action living body recognition method based on pose estimation and motion detection as claimed in claim 1, wherein face reproduction recognition comprises: obtaining face video frames from the action video; if the border of a playing device is visible in a frame, face reproduction recognition does not pass; otherwise the face is cropped from the frame, the cropped face is recognized based on texture, and whether the action video is a re-shot copy is judged.
CN202010736220.6A — 2020-07-28 — Action living body recognition method based on pose estimation and motion detection — Pending — CN111860394A (en)

Priority Applications (1)

Application Number: CN202010736220.6A — Priority date: 2020-07-28 — Filing date: 2020-07-28 — Title: Action living body recognition method based on pose estimation and motion detection

Applications Claiming Priority (1)

Application Number: CN202010736220.6A — Priority date: 2020-07-28 — Filing date: 2020-07-28 — Title: Action living body recognition method based on pose estimation and motion detection

Publications (1)

Publication Number: CN111860394A — Publication Date: 2020-10-30

Family

ID=72948374

Family Applications (1)

Application Number: CN202010736220.6A — Status: Pending — Publication: CN111860394A (en)

Country Status (1)

Country Link
CN (1) CN111860394A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440479A (en) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Method and system for detecting living body human face
CN106203369A (en) * 2016-07-18 2016-12-07 三峡大学 Active stochastic and dynamic for anti-counterfeiting recognition of face instructs generation system
CN107316029A (en) * 2017-07-03 2017-11-03 腾讯科技(深圳)有限公司 A kind of live body verification method and equipment
CN108875485A (en) * 2017-09-22 2018-11-23 北京旷视科技有限公司 A kind of base map input method, apparatus and system
CN108447159A (en) * 2018-03-28 2018-08-24 百度在线网络技术(北京)有限公司 Man face image acquiring method, apparatus and access management system
CN109359537A (en) * 2018-09-14 2019-02-19 杭州宇泛智能科技有限公司 Human face posture angle detecting method neural network based and system
KR20200059112A (en) * 2018-11-19 2020-05-28 한성대학교 산학협력단 System for Providing User-Robot Interaction and Computer Program Therefore
CN109840515A (en) * 2019-03-06 2019-06-04 百度在线网络技术(北京)有限公司 Facial pose method of adjustment, device and terminal
CN110647811A (en) * 2019-08-15 2020-01-03 中国平安人寿保险股份有限公司 Human face posture detection method and device and computer readable storage medium
CN111144277A (en) * 2019-12-25 2020-05-12 东南大学 Face verification method and system with living body detection function

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112511739A (en) * 2020-11-20 2021-03-16 上海盛付通电子支付服务有限公司 Interactive information generation method and equipment
CN112511739B (en) * 2020-11-20 2022-05-06 上海盛付通电子支付服务有限公司 Interactive information generation method and equipment
CN112257685A (en) * 2020-12-08 2021-01-22 成都新希望金融信息有限公司 Face copying recognition method and device, electronic equipment and storage medium
CN112507985A (en) * 2021-02-03 2021-03-16 成都新希望金融信息有限公司 Face image screening method and device, electronic equipment and storage medium
CN116112630A (en) * 2023-04-04 2023-05-12 成都新希望金融信息有限公司 Intelligent video face tag switching method
CN116112630B (en) * 2023-04-04 2023-06-23 成都新希望金融信息有限公司 Intelligent video face tag switching method

Similar Documents

Publication Publication Date Title
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
CN107609383B (en) 3D face identity authentication method and device
CN107346422B (en) Living body face recognition method based on blink detection
CN107633165B (en) 3D face identity authentication method and device
CN111860394A (en) Action living body recognition method based on pose estimation and motion detection
US9922238B2 (en) Apparatuses, systems, and methods for confirming identity
CN103440479B (en) A kind of method and system for detecting living body human face
JP5010905B2 (en) Face recognition device
Wilber et al. Can we still avoid automatic face detection?
Kähm et al. 2d face liveness detection: An overview
WO2016172872A1 (en) Method and device for verifying real human face, and computer program product
Wu et al. Two-stream CNNs for gesture-based verification and identification: Learning user style
CN103678984A (en) Method for achieving user authentication by utilizing camera
CN106997452B (en) Living body verification method and device
CN110326001A (en) The system and method for executing the user authentication based on fingerprint using the image captured using mobile device
CN104361326A (en) Method for distinguishing living human face
CN108537131B (en) Face recognition living body detection method based on face characteristic points and optical flow field
CN111582238B (en) Living body detection method and device applied to face shielding scene
CN107480586B (en) Face characteristic point displacement-based biometric photo counterfeit attack detection method
CN111353404A (en) Face recognition method, device and equipment
CN107862298B (en) Winking living body detection method based on infrared camera device
CN110705454A (en) Face recognition method with living body detection function
CN111259757B (en) Living body identification method, device and equipment based on image
JP4082203B2 (en) Open / close eye determination device
JP3970573B2 (en) Facial image recognition apparatus and method

Legal Events

Date Code Title Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
RJ01 — Rejection of invention patent application after publication

Application publication date: 20201030