CN112699857A - Living body verification method and device based on human face posture and electronic equipment - Google Patents


Info

Publication number
CN112699857A
CN112699857A (application number CN202110310443.0A)
Authority
CN
China
Prior art keywords
face
image
key point
face key
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110310443.0A
Other languages
Chinese (zh)
Inventor
李建伟
王秋明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yuanjian Information Technology Co Ltd
Original Assignee
Beijing Yuanjian Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuanjian Information Technology Co Ltd filed Critical Beijing Yuanjian Information Technology Co Ltd
Priority to CN202110310443.0A
Publication of CN112699857A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Abstract

The application provides a living body verification method and device based on face pose, and an electronic device. A target face image of an object to be detected is acquired, and target face key points are extracted from the target face image; face feature images are determined based on the extracted target face key points; at least two face feature images are input into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected; and when the face pose action angle is greater than a preset angle threshold, the target face image is determined to come from a real person. In this way, the face pose angle information of the detected object is obtained through the face feature images and the face pose recognition model, whether the target face image comes from a real person is judged according to the obtained face pose angle information, and the accuracy of liveness detection is effectively improved.

Description

Living body verification method and device based on human face posture and electronic equipment
Technical Field
The present application relates to the field of liveness verification based on face pose, and in particular, to a living body verification method and apparatus based on face pose, and an electronic device.
Background
With the rapid development of science and technology, face pose recognition is increasingly applied in daily life, and as the technology develops, higher requirements are placed on its recognition performance. Face pose estimation is an important step in multi-pose face recognition, human-computer interaction, and other application scenarios.
At present, there are various methods for face pose estimation, including appearance-based methods and classification-based methods. However, because the pose of a face varies continuously, acquiring the pose takes time and the acquired pose has low accuracy. When face pose recognition is applied to liveness verification under these conditions, the verification can be inaccurate.
Disclosure of Invention
In view of the above, an object of the present application is to provide a living body verification method and apparatus based on face pose, and an electronic device, in which the face pose angle information of a detected object is acquired through face feature images and a face pose recognition model, and whether the target face image comes from a real person is judged according to the acquired face pose angle information, thereby effectively improving the accuracy of liveness detection.
The application mainly comprises the following aspects:
in a first aspect, an embodiment of the present application provides a living body verification method based on a human face pose, where the living body verification method includes:
acquiring a target face image of an object to be detected, and extracting a target face key point from the target face image;
determining a face feature image based on the extracted target face key points;
inputting at least two face feature images into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected;
and when the face posture action angle is larger than a preset angle threshold value, determining that the target face image is from a real person.
In some embodiments, the face pose recognition model is trained by:
acquiring a plurality of face key points from a pre-established face key point database;
generating a face key point feature map based on the obtained plurality of face key points;
combining the face key point feature map with a preset reference image to obtain a sample feature image;
and inputting the sample feature images and the corresponding face pose labels into a pre-constructed deep learning model, and training to obtain the face pose recognition model.
In some embodiments, the face keypoint feature map is generated by:
for each face key point, calculating two-dimensional Gaussian distribution points of the face key point within a preset pixel range;
and generating the face key point feature map according to the determined multiple two-dimensional Gaussian distribution points.
In some embodiments, combining the face key point feature map with a preset reference image to obtain a sample feature image includes: combining the face key point feature map with the reference image according to each face key point to obtain a sample feature image corresponding to the face key point feature map.
In some embodiments, the in-vivo authentication method further comprises:
and when the face posture action angle is smaller than or equal to a preset angle threshold value, determining that the target face image is from a photo.
In a second aspect, an embodiment of the present application further provides a living body verification device based on a human face pose, where the living body verification device includes:
the system comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring a target face image of an object to be detected and extracting a target face key point from the target face image;
the face feature image determining module is used for determining a face feature image based on the extracted target face key points;
the face pose action angle output module is used for inputting at least two face feature images into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected;
and the first verification module is used for determining that the target face image is from a real person when the face posture action angle is larger than a preset angle threshold value.
In some embodiments, the in-vivo verification apparatus further comprises a model training module for training the face pose recognition model by:
acquiring a plurality of face key points from a pre-established face key point database;
generating a face key point feature map based on the obtained plurality of face key points;
combining the face key point feature map with a preset reference image to obtain a sample feature image;
and inputting the sample feature images and the corresponding face pose labels into a pre-constructed deep learning model, and training to obtain the face pose recognition model.
In some embodiments, the living body verification apparatus further comprises a face key point feature map generation module configured to generate a face key point feature map by:
for each face key point, calculating two-dimensional Gaussian distribution points of the face key point within a preset pixel range;
and generating the face key point feature map according to the determined multiple two-dimensional Gaussian distribution points.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the living body verification method based on face pose according to the first aspect or any one of the possible embodiments of the first aspect.
In a fourth aspect, this application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the living body verification method based on human face pose described in the first aspect or any one of the possible implementation manners of the first aspect.
The application provides a living body verification method and device based on face pose, and an electronic device: a target face image of an object to be detected is acquired, and target face key points are extracted from the target face image; face feature images are determined based on the extracted target face key points; at least two face feature images are input into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected; and when the face pose action angle is greater than a preset angle threshold, the target face image is determined to come from a real person.
In this way, the face pose angle information of the detected object is obtained through the face feature images and the face pose recognition model, whether the target face image comes from a real person is judged according to the obtained face pose angle information, and the accuracy of liveness detection is effectively improved.
In order to make the aforementioned and other objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a flowchart of a living body verification method based on a human face pose according to an embodiment of the present application;
fig. 2 is a face key point diagram in a living body verification method based on a face pose provided in an embodiment of the present application;
FIG. 3 is a face keypoint feature diagram in a living body verification method based on face pose according to an embodiment of the present application;
FIG. 4 is a flowchart of another in-vivo authentication method based on human face pose according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a living body verification device based on a human face pose according to an embodiment of the present application;
fig. 6 is a second schematic structural diagram of a living body verification apparatus based on human face pose according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and that steps without logical context may be performed in reverse order or concurrently. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
To enable one of ordinary skill in the art to use the present disclosure, the following embodiments are presented in conjunction with a specific application scenario, "live verification of human face pose," and it will be apparent to those of ordinary skill in the art that the general principles defined herein may be applied to other embodiments and application scenarios without departing from the spirit and scope of the present disclosure.
The method, the apparatus, the electronic device or the computer-readable storage medium described in the embodiments of the present application may be applied to any scenario in which a living body verification of a face pose needs to be performed, and the embodiments of the present application do not limit a specific application scenario.
It is worth noting that, at the present stage, there are various methods for face pose estimation, including appearance-based methods and classification-based methods. However, because the pose of a face varies continuously, acquiring the pose takes time and the acquired pose has low accuracy, so applying such face pose recognition to liveness verification can make the verification inaccurate.
In view of the above, one aspect of the present application provides a living body verification method based on a face pose, which obtains face pose angle information of a detection object through a face feature image and a face pose recognition model, and judges whether a target face image is a real person according to the obtained face pose angle information, thereby effectively improving accuracy of living body detection.
For the convenience of understanding of the present application, the technical solutions provided in the present application will be described in detail below with reference to specific embodiments.
Referring to fig. 1, fig. 1 is a flowchart of a living body verification method based on face pose according to an embodiment of the present application. As shown in fig. 1, the living body verification method includes:
s101: the method comprises the steps of obtaining a target face image of an object to be detected, and extracting target face key points from the target face image.
In this step, the target face image to be verified of the object to be detected is obtained; the target face image is the image of the user awaiting verification, and may include, for example, a screenshot from a video of the user to be detected or a photo of the object to be detected. Target face key points are then extracted from the target face image.
The target face key points may be key points at positions such as the eyes, the mouth, and the nose in the target face image.
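By way of illustration, the following sketch shows how step S101 might be implemented. The patent does not name a specific key point extraction algorithm, so the use of dlib's 68-point landmark predictor (and its publicly distributed model file, a local path here) is an assumption made for this example.

```python
# A minimal sketch of step S101, assuming dlib's 68-point landmark
# predictor as the keypoint extractor; the patent does not name a
# specific extraction algorithm, so this choice is illustrative only.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# "shape_predictor_68_face_landmarks.dat" is dlib's publicly available
# pre-trained landmark model (an assumed local file path).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_target_face_keypoints(image: np.ndarray) -> np.ndarray:
    """Return an (N, 2) array of (x, y) face key points for the first
    detected face, or an empty array if no face is found."""
    faces = detector(image, 1)  # upsample once to help with small faces
    if len(faces) == 0:
        return np.empty((0, 2), dtype=np.int64)
    shape = predictor(image, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.int64)
```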
S102: and determining a face feature image based on the extracted key points of the target face.
In this step, face feature images are determined according to the target face key points extracted in step S101. A face feature image may be a feature image formed by face parts such as the eyebrows, the nose, and the eyes, and there may be multiple face feature images.
S103: and inputting at least two face characteristic images into a pre-trained face gesture recognition model to obtain a face gesture action angle of the object to be detected.
In this step, at least two face feature images obtained in step S102 are input into a pre-trained face pose recognition model, which generates the face pose action angle corresponding to the object to be detected. The face pose action angle is Euler angle information; the Euler angles comprise the pitch angle (Pitch), the yaw angle (Yaw) and the roll angle (Roll), and the typical limit ranges of face rotation are approximately -60.4° to 69.6° in pitch, -79.8° to 75.3° in yaw, and -40.9° to 63.3° in roll.
The face pose recognition model is obtained by learning and training on face key point feature maps.
The method further comprises the following steps of training the face pose recognition model:
(1) Acquiring a plurality of face key points from a pre-established face key point database.
Face key points labeled in advance through large-scale data fitting are stored in the face key point database, and a plurality of face key points are extracted from it. Referring to fig. 2, fig. 2 is a face key point diagram in a living body verification method based on face pose according to an embodiment of the present application. As shown in fig. 2, the diagram is composed of a plurality of face key points, each with its own position information, and the face key point diagram is formed by the positions of the face key points. The position of each face key point in the diagram is denoted by a sequence number, and each sequence number corresponds to a different part of the face.
(2) Generating a face key point feature map based on the obtained plurality of face key points.
Here, a face key point feature map is generated from the obtained plurality of face key points by means of a Gaussian function. The face key point feature map contains a component corresponding to each key point among the acquired plurality of face key points.
Based on the obtained plurality of face key points, the face key point feature map is generated by the following steps:
(a) For each face key point, calculating the two-dimensional Gaussian distribution points of that key point within a preset pixel range.

Here, a two-dimensional Gaussian function

$$G(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\left(-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right)$$

is used to calculate the two-dimensional Gaussian distribution points generated by each acquired face key point within the preset pixel range.

For example, the parameter σ may be taken as 0.5 and the preset pixel range as 9 × 9, i.e. x and y in the two-dimensional Gaussian function take integer values in [-4, 4]. Substituting x and y into the Gaussian function yields a single-channel probability score feature map for each acquired face key point.
(b) Generating the face key point feature map according to the determined multiple two-dimensional Gaussian distribution points.

After the two-dimensional Gaussian distribution map of each acquired face key point, i.e. its single-channel probability score feature map, has been determined, the single-channel probability score feature maps are combined according to the position information of each face key point to generate the face key point feature map. Referring to fig. 3, fig. 3 is a face key point feature map in a living body verification method based on face pose according to an embodiment of the present application. As shown in fig. 3, each face key point in the face key point feature map is the two-dimensional Gaussian distribution map of that key point calculated by the Gaussian function, i.e. a single-channel probability score feature map, and the multiple single-channel probability score feature maps are combined according to the position information of each key point to generate the face key point feature map.
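As a concrete illustration of steps (a) and (b), the following sketch builds the per-key-point single-channel probability score maps with the parameters given above (σ = 0.5, a 9 × 9 patch, integer offsets in [-4, 4]) and merges them by key point position. Combining overlapping responses by element-wise maximum is an assumption made here; the text only states that the single-channel maps are combined according to the key point positions.

```python
# A sketch of steps (a)-(b): one Gaussian patch per face key point,
# pasted into a shared single-channel heatmap at the keypoint position.
import numpy as np

def gaussian_patch(sigma: float = 0.5, radius: int = 4) -> np.ndarray:
    """2D Gaussian G(x, y) sampled at integer offsets in [-radius, radius]."""
    xs = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(xs, xs)
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def keypoint_feature_map(keypoints: np.ndarray, h: int, w: int) -> np.ndarray:
    """Build the face key point feature map of shape (h, w) from (N, 2)
    keypoints given as (x, y) pixel coordinates."""
    patch = gaussian_patch()
    r = patch.shape[0] // 2
    heatmap = np.zeros((h, w), dtype=np.float32)
    for cx, cy in keypoints:
        # Clip the 9x9 window to the image borders.
        x0, x1 = max(cx - r, 0), min(cx + r + 1, w)
        y0, y1 = max(cy - r, 0), min(cy + r + 1, h)
        # Corresponding slice of the patch when the window is clipped.
        px0, py0 = x0 - (cx - r), y0 - (cy - r)
        heatmap[y0:y1, x0:x1] = np.maximum(
            heatmap[y0:y1, x0:x1],
            patch[py0:py0 + (y1 - y0), px0:px0 + (x1 - x0)])
    return heatmap
```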
(3) Combining the face key point feature map with a preset reference image to obtain a sample feature image.
Here, the face key point feature map is superimposed on a preset reference image to generate a sample feature image, where the reference image may be an RGB (red, green, blue) image and the sample feature image may be a 4-channel image.
Combining the face key point feature map with a preset reference image to obtain a sample feature image comprises: combining the face key point feature map with the reference image according to each face key point to obtain a sample feature image corresponding to the face key point feature map.
Here, when the position information of each face key point in the face key point feature map matches the position information of the corresponding key point in the reference image, i.e. when the two sets of positions are consistent, the face key point feature map is successfully combined with the reference image to obtain the sample feature image corresponding to the face key point feature map. The sample feature image is a 4-channel image generated by combining the face key point feature map with the reference image (an RGB image).
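A minimal sketch of this combining step, assuming the face key point feature map and the RGB reference image have already been brought to the same spatial size (160 × 160 in the network description below):

```python
# Stack the keypoint feature map onto the RGB reference image as a
# fourth channel, yielding the 4-channel sample feature image.
import numpy as np

def make_sample_feature_image(rgb: np.ndarray, heatmap: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) uint8; heatmap: (H, W) float -> (H, W, 4) float."""
    assert rgb.shape[:2] == heatmap.shape, "feature map and image must align"
    rgb_f = rgb.astype(np.float32) / 255.0  # normalise RGB to [0, 1]
    return np.concatenate([rgb_f, heatmap[..., None]], axis=-1)
```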
(4) Inputting the sample feature images and the corresponding face pose labels into a pre-constructed deep learning model, and training to obtain the face pose recognition model.
The generated sample feature images (4-channel images) and the corresponding face pose labels are input into the pre-constructed deep learning model, and the input information is trained in the deep learning model to obtain a face pose recognition model relating sample feature images to face pose labels. The face pose label is a three-dimensional pose obtained by an algorithm, either from merged and reconstructed three-dimensional (3D) face data or acquired directly by 3D means.
In order to make the live face pose recognition model more efficient, the specific functions of each network layer in the model training process are as follows.
Data input layer: the input data is the superposition of the generated Gaussian face key point feature map and the original face image region, i.e. a 4-channel face-region feature map of size 160 × 160 × 4. Convolutional layer: input 160 × 160 × 4, output 156 × 156 × 8. Max pooling layer: input 156 × 156 × 8, output 78 × 78 × 8. Convolutional layer: input 78 × 78 × 8, output 76 × 76 × 16. Max pooling layer: input 76 × 76 × 16, output 38 × 38 × 16. Convolutional layer: input 38 × 38 × 16, output 36 × 36 × 32. Max pooling layer: input 36 × 36 × 32, output 18 × 18 × 32. Convolutional layer: input 18 × 18 × 32, output 16 × 16 × 32. Max pooling layer: input 16 × 16 × 32, output 8 × 8 × 32. Max pooling layer: input 8 × 8 × 32, output 4 × 4 × 32. Data shape changing layer (reshape): the data shape is changed from 4 × 4 × 32 to 512. Fully connected layer: input 512, output 512. After the network structure training is completed, the face pose angles and the angle categories are output; the face pose angle information comprises the pitch angle, yaw angle and roll angle, corresponding to head raising, head shaking and head nodding respectively. Meanwhile, in order to make the loss converge more stably and quickly, the loss is calculated using the following two formulas:
$$L_{2} = \frac{1}{N} \sum_{i=1}^{N} \left( \hat{\theta}_{i} - \theta_{i} \right)^{2}$$

$$L_{cross} = -\sum_{c=1}^{180} y_{c} \log \hat{p}_{c}$$
In the cross-entropy part, the angles in the range [-90°, 90°] are divided into 180 categories at one-degree intervals, and joint training is then carried out, i.e. loss = l2 + cross (the L2 regression loss plus the cross-entropy classification loss). The training uses the SGD algorithm and iterates for 60 epochs.
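The following PyTorch sketch reproduces the layer sizes listed above. The kernel sizes (one 5 × 5 convolution followed by 3 × 3 convolutions, without padding) are inferred from the stated input and output sizes; the ReLU activations and the two output heads (three regressed Euler angles plus 180 one-degree bins per angle) are assumptions consistent with loss = l2 + cross, since the text does not specify them.

```python
# A sketch of the described network under the stated assumptions.
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 8, 5), nn.ReLU(),    # 160x160x4 -> 156x156x8
            nn.MaxPool2d(2),                  # -> 78x78x8
            nn.Conv2d(8, 16, 3), nn.ReLU(),   # -> 76x76x16
            nn.MaxPool2d(2),                  # -> 38x38x16
            nn.Conv2d(16, 32, 3), nn.ReLU(),  # -> 36x36x32
            nn.MaxPool2d(2),                  # -> 18x18x32
            nn.Conv2d(32, 32, 3), nn.ReLU(),  # -> 16x16x32
            nn.MaxPool2d(2),                  # -> 8x8x32
            nn.MaxPool2d(2),                  # -> 4x4x32
        )
        self.fc = nn.Linear(4 * 4 * 32, 512)      # reshape 4x4x32 -> 512
        self.angle_head = nn.Linear(512, 3)       # pitch, yaw, roll (degrees)
        self.bin_head = nn.Linear(512, 3 * 180)   # 180 one-degree bins per angle

    def forward(self, x):
        h = self.features(x).flatten(1)
        h = torch.relu(self.fc(h))
        return self.angle_head(h), self.bin_head(h).view(-1, 3, 180)

def joint_loss(pred_angles, pred_bins, gt_angles):
    """loss = l2 + cross, with [-90, 90] split into 180 one-degree bins."""
    l2 = nn.functional.mse_loss(pred_angles, gt_angles)
    bins = (gt_angles + 90.0).long().clamp(0, 179)  # degrees -> bin indices
    cross = nn.functional.cross_entropy(
        pred_bins.reshape(-1, 180), bins.reshape(-1))
    return l2 + cross

# Per the text, training uses SGD for 60 epochs (learning rate illustrative):
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```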
In specific implementation, a face key point feature map is generated from the acquired face key points by the two-dimensional Gaussian calculation method, and is combined with the reference image to generate a sample feature image, which is a 4-channel image. The sample feature images and the face pose labels are input into the pre-constructed deep learning model for training, and after training a face pose recognition model is obtained which can predict the pose of a face, such as nodding, shaking, and raising the head. The face pose can then be verified with this model: the obtained face feature images are input into the face pose recognition model to obtain the face pose action angle of the object to be detected.
S104: and when the face posture action angle is larger than a preset angle threshold value, determining that the target face image is from a real person.
In this step, based on step S103, the face feature images are input into the pre-trained face pose recognition model to obtain the face pose action angle of the object to be detected. When the face pose action angle of the object to be detected is greater than the preset angle threshold, it can be determined that the target face image comes from a real person.
The preset angle threshold is used for judging whether the target face image comes from a real person. If the target face image is of a real person, the face pose action angle of the object to be detected is greater than the preset angle threshold; that is, when a real person shakes the head left and right, the change of the yaw angle is large.
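For example, assuming the face pose action angle is measured as the yaw swing observed across the captured frames, the decision in this step reduces to a simple threshold test (the threshold value below is illustrative, not taken from the text):

```python
# A minimal sketch of the decision in S104, assuming the action angle
# is the yaw-angle change across the input face feature images.
def is_real_person(yaw_angles, angle_threshold_deg: float = 15.0) -> bool:
    """True if the observed yaw swing exceeds the preset angle threshold."""
    action_angle = max(yaw_angles) - min(yaw_angles)
    return action_angle > angle_threshold_deg
```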
The application provides a living body verification method based on face pose: a target face image of an object to be detected is acquired, and target face key points are extracted from the target face image; face feature images are determined based on the extracted target face key points; at least two face feature images are input into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected; and when the face pose action angle is greater than a preset angle threshold, it is determined that the target face image comes from a real person.
In this way, the face pose angle information of the detected object is obtained through the face feature images and the face pose recognition model, whether the target face image comes from a real person is judged according to the obtained face pose angle information, and the accuracy of liveness detection is effectively improved.
Referring to fig. 4, fig. 4 is a flowchart of another living body verification method based on face pose according to an embodiment of the present application. As shown in fig. 4, the method includes:
S401: the method comprises the steps of obtaining a target face image of an object to be detected, and extracting target face key points from the target face image.
S402: and determining a face feature image based on the extracted key points of the target face.
S403: and inputting at least two face characteristic images into a pre-trained face gesture recognition model to obtain a face gesture action angle of the object to be detected.
S404: and when the face posture action angle is larger than a preset angle threshold value, determining that the target face image is from a photo.
In this step, when the face pose action angle of the object to be verified generated in step S403 is smaller than or equal to the preset angle threshold, it is determined that the face image to be verified was not produced by a live action of the user to be verified; it may, for example, come from a photo of the user to be verified.
The descriptions of S401 to S403 may refer to the descriptions of S101 to S103, and the same technical effects can be achieved, which is not described in detail herein.
The application provides a living body verification method based on face pose: a target face image of an object to be detected is acquired, and target face key points are extracted from the target face image; face feature images are determined based on the extracted target face key points; at least two face feature images are input into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected; and when the face pose action angle is smaller than or equal to the preset angle threshold, it is determined that the target face image comes from a photo.
Therefore, the face pose angle information of the detection object is obtained through the face feature image and the face pose recognition model, whether the target face image is a real person or not is judged according to the obtained face pose angle information, and the accuracy of the living body detection is effectively improved.
Based on the same inventive concept, an embodiment of the present application further provides a living body verification device based on face pose corresponding to the living body verification method based on face pose provided by the above embodiments. As the principle by which the device solves the problem is similar to that of the living body verification method based on face pose in the above embodiments of the present application, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.
Referring to fig. 5 and 6, fig. 5 is a first schematic structural diagram of a living body verification device based on a human face pose according to an embodiment of the present application, and fig. 6 is a second schematic structural diagram of a living body verification device based on a human face pose according to an embodiment of the present application. As shown in fig. 5, the living body authentication device 500 includes:
an obtaining module 501, configured to obtain a target face image of an object to be detected, and extract a target face key point from the target face image;
a face feature image determining module 502, configured to determine a face feature image based on the extracted target face key point;
a face pose action angle output module 503, configured to input at least two face feature images into a pre-trained face pose recognition model, so as to obtain a face pose action angle of the object to be detected;
a first verification module 504, configured to determine that the target face image is from a real person when the face pose action angle is greater than a preset angle threshold.
Optionally, the living body verification apparatus 500 further includes a model training module 505, and the model training module 505 is configured to:
acquiring a plurality of face key points from a pre-established face key point database;
generating a face key point feature map based on the obtained plurality of face key points;
combining the face key point feature map with a preset reference image to obtain a sample feature image;
and inputting the sample feature images and the corresponding face pose labels into a pre-constructed deep learning model, and training to obtain the face pose recognition model.
Optionally, as shown in fig. 5, the living body verification apparatus 500 further includes a face key point feature map generation module 506, where the face key point feature map generation module 506 generates a face key point feature map by the following steps:
for each face key point, calculating two-dimensional Gaussian distribution points of the face key point within a preset pixel range;
and generating the face key point feature map according to the determined multiple two-dimensional Gaussian distribution points.
Optionally, as shown in fig. 5, the living body verification apparatus 500 further includes a sample feature image generation module 507. When configured to combine the face key point feature map with a preset reference image to obtain a sample feature image, the sample feature image generation module 507 is configured to: combine the face key point feature map with the reference image according to each face key point to obtain a sample feature image corresponding to the face key point feature map.
Optionally, as shown in fig. 6, the living body verification device 500 further comprises a second verification module 508, wherein the second verification module 508 is configured to:
determine that the target face image is from a photo when the face pose action angle is smaller than or equal to the preset angle threshold.
The application provides a living body verification device based on face pose, comprising: an acquisition module for acquiring a target face image of an object to be detected and extracting target face key points from the target face image; a face feature image determining module for determining face feature images based on the extracted target face key points; a face pose action angle output module for inputting at least two face feature images into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected; and a first verification module for determining that the target face image comes from a real person when the face pose action angle is greater than a preset angle threshold.
Therefore, the face pose angle information of the detection object is obtained through the face feature image and the face pose recognition model, whether the target face image is a real person or not is judged according to the obtained face pose angle information, and the accuracy of the living body detection is effectively improved.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic device 700 includes a processor 710, a memory 720, and a bus 730.
The memory 720 stores machine-readable instructions executable by the processor 710, when the electronic device 700 runs, the processor 710 and the memory 720 communicate through the bus 730, and when the machine-readable instructions are executed by the processor 710, the steps of the living body verification method based on the human face pose in the embodiment of the method shown in fig. 1 and fig. 4 can be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit, and if the functions are implemented in the form of software functional units and sold or used as independent products, the functions may be stored in a nonvolatile computer readable storage medium that is executable by one processor. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A living body verification method based on a human face pose, characterized by comprising the following steps:
acquiring a target face image of an object to be detected, and extracting a target face key point from the target face image;
determining a face feature image based on the extracted target face key points;
inputting at least two face feature images into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected;
and when the face posture action angle is larger than a preset angle threshold value, determining that the target face image is from a real person.
2. The living body verification method according to claim 1, wherein the face pose recognition model is trained by:
acquiring a plurality of face key points from a pre-established face key point database;
generating a face key point feature map based on the obtained plurality of face key points;
combining the face key point feature map with a preset reference image to obtain a sample feature image;
and inputting the sample feature images and the corresponding face pose labels into a pre-constructed deep learning model, and training to obtain the face pose recognition model.
3. The living body verification method according to claim 2, wherein the face key point feature map is generated by the following steps:
for each face key point, calculating two-dimensional Gaussian distribution points of the face key point within a preset pixel range;
and generating the face key point feature map according to the determined multiple two-dimensional Gaussian distribution points.
4. The living body verification method according to claim 2, wherein combining the face key point feature map with a preset reference image to obtain a sample feature image comprises: combining the face key point feature map with the reference image according to each face key point to obtain a sample feature image corresponding to the face key point feature map.
5. The living body verification method according to claim 1, further comprising:
and when the face posture action angle is smaller than or equal to the preset angle threshold value, determining that the target face image is from a photo.
6. A living body verification apparatus based on a human face pose, the living body verification apparatus comprising:
the system comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring a target face image of an object to be detected and extracting a target face key point from the target face image;
the face feature image determining module is used for determining a face feature image based on the extracted target face key points;
the face pose action angle output module is used for inputting at least two face feature images into a pre-trained face pose recognition model to obtain a face pose action angle of the object to be detected;
and the first verification module is used for determining that the target face image is from a real person when the face posture action angle is larger than a preset angle threshold value.
7. The living body verification device according to claim 6, further comprising a model training module for training the face pose recognition model by:
acquiring a plurality of face key points from a pre-established face key point database;
generating a face key point feature map based on the obtained plurality of face key points;
combining the face key point feature map with a preset reference image to obtain a sample feature image;
and inputting the sample feature images and the corresponding face pose labels into a pre-constructed deep learning model, and training to obtain the face pose recognition model.
8. The living body verification device according to claim 6, further comprising a face key point feature map generation module configured to generate a face key point feature map by:
for each face key point, calculating two-dimensional Gaussian distribution points of the face key point within a preset pixel range;
and generating the face key point feature map according to the determined multiple two-dimensional Gaussian distribution points.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the face pose based liveness verification method of any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the face pose based living body verification method according to any one of claims 1 to 5.
CN202110310443.0A (filed 2021-03-24, priority date 2021-03-24): Living body verification method and device based on human face posture and electronic equipment. Status: Pending. Published as CN112699857A.

Priority Applications (1)

Application Number: CN202110310443.0A; Priority Date: 2021-03-24; Filing Date: 2021-03-24; Title: Living body verification method and device based on human face posture and electronic equipment

Publications (1)

Publication Number: CN112699857A; Publication Date: 2021-04-23

Family

ID=75515493

Family Applications (1)

Application Number: CN202110310443.0A; Title: Living body verification method and device based on human face posture and electronic equipment; Priority Date: 2021-03-24; Filing Date: 2021-03-24

Country Status (1)

Country: CN; Link: CN112699857A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897658A (en) * 2015-12-18 2017-06-27 腾讯科技(深圳)有限公司 The discrimination method and device of face live body
CN107316029A (en) * 2017-07-03 2017-11-03 腾讯科技(深圳)有限公司 A kind of live body verification method and equipment
EP3654643A1 (en) * 2017-07-10 2020-05-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. White balance processing method and apparatus
CN111967319A (en) * 2020-07-14 2020-11-20 高新兴科技集团股份有限公司 Infrared and visible light based in-vivo detection method, device, equipment and storage medium
CN112101123A (en) * 2020-08-20 2020-12-18 深圳数联天下智能科技有限公司 Attention detection method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553928A (en) * 2021-07-13 2021-10-26 厦门瑞为信息技术有限公司 Human face living body detection method and system and computer equipment
CN113553928B (en) * 2021-07-13 2024-03-22 厦门瑞为信息技术有限公司 Human face living body detection method, system and computer equipment
CN114422832A (en) * 2022-01-17 2022-04-29 上海哔哩哔哩科技有限公司 Anchor virtual image generation method and device
CN115294320A (en) * 2022-10-08 2022-11-04 平安银行股份有限公司 Method and device for determining image rotation angle, electronic equipment and storage medium
CN115294320B (en) * 2022-10-08 2022-12-20 平安银行股份有限公司 Method and device for determining image rotation angle, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Wang et al. A deep coarse-to-fine network for head pose estimation from synthetic data
CN111126272B (en) Posture acquisition method, and training method and device of key point coordinate positioning model
Ge et al. 3d convolutional neural networks for efficient and robust hand pose estimation from single depth images
WO2021169637A1 (en) Image recognition method and apparatus, computer device and storage medium
JP6754619B2 (en) Face recognition method and device
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
EP3644277A1 (en) Image processing system, image processing method, and program
CN112699857A (en) Living body verification method and device based on human face posture and electronic equipment
CN111240476B (en) Interaction method and device based on augmented reality, storage medium and computer equipment
Shu et al. Kinship-guided age progression
CN112800903A (en) Dynamic expression recognition method and system based on space-time diagram convolutional neural network
CN103425964A (en) Image processing apparatus, image processing method, and computer program
CN113570684A (en) Image processing method, image processing device, computer equipment and storage medium
Alabbasi et al. Real time facial emotion recognition using kinect V2 sensor
CN115050064A (en) Face living body detection method, device, equipment and medium
CN115578393B (en) Key point detection method, key point training method, key point detection device, key point training device, key point detection equipment, key point detection medium and key point detection medium
CN111598051B (en) Face verification method, device, equipment and readable storage medium
Neverova Deep learning for human motion analysis
CN114333046A (en) Dance action scoring method, device, equipment and storage medium
CN113591763A (en) Method and device for classifying and identifying face shape, storage medium and computer equipment
Yu et al. A video-based facial motion tracking and expression recognition system
CN115205933A (en) Facial expression recognition method, device, equipment and readable storage medium
CN114283265A (en) Unsupervised face correcting method based on 3D rotation modeling
CN114170623A (en) Human interaction detection equipment and method and device thereof, and readable storage medium
Vezzetti et al. Application of geometry to rgb images for facial landmark localisation-a preliminary approach

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210423)