CN111652019B - Face living body detection method and device - Google Patents
Face living body detection method and device
- Publication number
- CN111652019B (grant publication of application CN201910302473.XA; also published as CN111652019A)
- Authority
- CN
- China
- Prior art keywords
- face
- expression
- picture
- facial
- living body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to the field of face recognition and provides a face living body detection method performed by an expression criterion, wherein the expression criterion comprises the following steps: recognizing a first face picture and issuing an expression prompt instruction; recognizing a second face picture and judging the facial expression in the second face picture; and if the facial expression in the second face picture is consistent with the expression in the expression prompt instruction, judging that the current face is a live face. The invention also provides a face living body detection device. By adding the facial expression criterion to detect whether the target is a live face, the accuracy of the judgment result is improved.
Description
Technical Field
The invention relates to the field of face recognition, in particular to a face living body detection method and device.
Background
Face recognition is a form of biometric recognition and plays a significant role in identity verification. Its feature information is easier to collect than that of fingerprint, iris, or voice recognition. In face detection, static detection of two-dimensional face images is common; as product security requirements rise, face living body detection technology has emerged.
Conventional face recognition systems cannot distinguish whether the face in a currently acquired image is a live face or a fake one. With the wide application of face recognition technology, prosthesis-based attacks such as face photos, face videos, and three-dimensional masks emerge one after another, and face living body detection draws increasing attention from industry and academia. It is becoming an indispensable link in face recognition systems.
Chinese patent application CN201810054425 discloses a face living body detection method comprising: generating face image acquisition instructions for different viewing angles in response to receiving a face living body detection request; in response to the viewing angles of at least two face images acquired by an image acquisition device conforming to the different angles indicated by the acquisition instructions, performing three-dimensional reconstruction of the face based on the at least two face images to obtain a three-dimensional reconstruction result; and determining, based on the reconstruction result, whether the faces in the at least two images are live faces.
That method judges whether the target is a live face based solely on three-dimensional reconstruction of the face. It cannot distinguish certain fake three-dimensional face molds, so misjudgments arise easily and the method can be exploited by malicious actors.
Disclosure of Invention
Therefore, the invention aims to provide a face living body detection method and device that detect whether a target is a live face by adding a facial expression criterion and a face rotation criterion, thereby improving the accuracy of the judgment result.
The face living body detection method provided by the invention is performed through an expression criterion, and the expression criterion comprises:
recognizing a first face picture and sending out an expression prompt instruction;
identifying a second face picture, and judging the facial expression in the second face picture;
and if the facial expression in the second face picture is consistent with the expression in the expression prompting instruction, judging that the current face is a live face.
Preferably, the judgment of the facial expression in the second face picture is based on the change in position of facial feature points between the first face picture and the second face picture.
Preferably, the facial feature points at least include a left eye, a right eye, a nose tip, a left mouth corner, and a right mouth corner, the first position coordinates of the facial feature points are recorded when the first facial image is identified, the second position coordinates of the facial feature points are recorded when the second facial image is identified, and the facial expression is judged according to the changes of the first position coordinates and the second position coordinates of the corresponding facial feature points.
Preferably, the detection method is further performed by face rotation criteria, including:
sending out a face rotation instruction;
recognizing a third face picture, and judging the face rotation direction in the third face picture;
and if the face rotation direction in the third face picture is consistent with the rotation direction in the face rotation instruction, judging that the current face is a living face.
Preferably, the judgment of the face rotation direction in the third face picture is based on the change in angle of facial feature points between the first face picture and the third face picture.
Preferably, the judgment of the angle change is calculated according to the formula for rotating a facial feature point around a three-dimensional vector, wherein the three-dimensional vector is the vector (0, 0, 1) along the Z axis of the coordinate system to which the facial feature points belong.
Preferably, the facial feature points are identified through a multi-task convolutional neural network, and the multi-task convolutional neural network identification comprises the following steps:
scaling the target picture to form an image pyramid at multiple scales;
generating candidate windows and bounding-box regression vectors for the face area in each picture with P-Net, correcting the candidate windows by bounding-box regression, and merging overlapping candidates by non-maximum suppression;
refining the candidate windows with R-Net: the candidate windows passing P-Net are input to R-Net, erroneous windows are rejected, and the candidates continue to be processed by bounding-box regression and non-maximum suppression;
and outputting the final face frame and the facial feature point positions with O-Net.
Preferably, the face rotation criterion is performed before or after the expression criterion.
According to another object of the present invention, a face living body detection device comprises:
the face acquisition device is used for acquiring face pictures;
the face recognition device is used for recognizing the face of the acquired face picture and recording the position of the facial feature point on the face;
the criterion generating device generates an expression prompting instruction;
and the central processing device, which controls the face acquisition device to acquire the face at least twice, controls the face recognition device to identify the change in facial feature points on the face at the second acquisition to obtain facial expression information, compares the facial expression information with the expression in the expression prompt instruction, and obtains a face living body detection result from the comparison.
Preferably, the criterion generating device further generates a face rotation instruction; the central processing device controls the face acquisition device to acquire the face a third time, controls the face recognition device to identify the change in angle of the facial feature points on the face to obtain face rotation information, compares the face rotation information with the rotation direction in the face rotation instruction, and obtains a face living body detection result from the comparison.
Compared with the prior art, the invention has the advantages that:
1. The invention establishes criteria based on a smiling action and a uniform rotation of the face around the z-axis, making the detection method direct and effective;
2. The invention can judge the smiling action and the uniform rotation around the z-axis by locating only the three-dimensional coordinates of five feature points: the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner, thereby achieving face living body detection with high practicability;
3. The invention selects only a small number of feature points, so the amount of computation is small, the program running time is short, and the detection efficiency is high.
Drawings
Fig. 1 is a flow chart of the face living body detection method of the present invention.
Fig. 2 is a network configuration diagram of MTCNN.
FIG. 3 is a flow chart of the face rotation criterion detection method.
Fig. 4 is a block diagram of the face living body detection device of the present invention.
Fig. 5 is a schematic diagram showing the change of the value of (new_x2-new_x1)/L along with the rotation angle θ when the face rotates around the z axis at a constant speed.
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings, but the present invention is not limited to these embodiments; structural, methodological, or functional modifications made by those skilled in the art based on these embodiments all fall within the scope of the present invention.
The face living body detection method of the invention adds an expression criterion and a rotation criterion for the face, and judges whether the face being identified is a live face rather than a three-dimensional mold or a photo by prompting the person under detection to make different expressions. Because the expression criterion is difficult for a non-living mold to imitate, the accuracy of face living body detection is greatly improved and malicious imitation by illegal personnel is avoided.
Referring to fig. 1, fig. 1 is a flow chart of a face living body detection method according to the present invention, and as shown in the drawing, the detection method includes:
s11, recognizing a first face picture and sending out an expression prompt instruction.
When face living body detection is required, the face to be detected approaches the image acquisition device, which acquires a face image; the first face picture acquired at this moment is usually an ordinary picture without any special expression. Then, using image recognition, the feature points on the face, such as the left and right eyes, the nose tip, and the left and right mouth corners, are identified and projected into a plane coordinate system, and the position coordinates of each feature point are recorded.
In a preferred embodiment, the facial feature extraction network is a multi-task convolutional neural network (MTCNN). The algorithm performs face detection and alignment in a single network, realizing a multi-task learning model for face detection and facial feature point calibration through a cascade of CNN models. In particular, the algorithm calibrates the 5 feature points of the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner on the face; compared with other algorithms, the number of facial feature points to be identified is greatly reduced, improving both the speed and the accuracy of face recognition.
The whole model is divided into three stages, wherein a series of candidate windows are rapidly generated through a shallow CNN network in the first stage; the second stage filters out most of non-face candidate windows through a CNN network with stronger capability; the third stage finds 5 marker points, i.e. 5 facial feature points, on the face through a more powerful network.
Referring to fig. 2, fig. 2 is the network structure diagram of the MTCNN. As shown in the drawing, the MTCNN network 1 is composed of 3 network structures: P-Net 11, R-Net 12, and O-Net 13.
P-Net 11 (Proposal Network): this network mainly obtains candidate windows and bounding-box regression vectors for the face area. It calibrates the candidate windows by bounding-box regression and then merges highly overlapping candidates by Non-Maximum Suppression (NMS).
R-Net 12 (Refine Network): this network likewise removes false-positive regions by bounding-box regression and NMS. Compared with the P-Net structure, it adds one more fully connected layer, so it suppresses false positives more effectively.
O-Net 13 (Output Network): this network has one more convolutional layer than R-Net, so its processing result is finer. Its function is similar to that of R-Net 12, but it supervises the face area more closely and outputs the 5 landmarks (facial feature points).
The complete MTCNN processing steps are as follows (a usage sketch is given after the list):
scaling the target picture to form an image pyramid at multiple scales;
generating candidate windows and bounding-box regression vectors for the face area in each picture with P-Net, correcting the candidate windows by bounding-box regression, and merging overlapping candidates by non-maximum suppression;
refining the candidate windows with R-Net: the candidate windows passing P-Net are input to R-Net, erroneous windows are rejected, and the candidates continue to be processed by bounding-box regression and non-maximum suppression;
and outputting the final face frame and the facial feature point positions with O-Net.
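For illustration, the cascade described above is exactly what common open-source MTCNN ports expose. The following is a minimal sketch using the open-source `mtcnn` Python package; the package choice and the file name are assumptions for illustration, since the patent does not name any particular implementation:

```python
# Sketch: obtaining the face frame and the 5 facial feature points with
# the open-source `mtcnn` package (illustrative; any MTCNN port will do).
import cv2
from mtcnn import MTCNN

detector = MTCNN()   # builds the P-Net / R-Net / O-Net cascade internally

img = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2RGB)  # RGB input
for face in detector.detect_faces(img):
    box = face["box"]         # [x, y, width, height] of the final face frame
    pts = face["keypoints"]   # the 5 landmarks output by O-Net
    print(box, pts["left_eye"], pts["right_eye"], pts["nose"],
          pts["mouth_left"], pts["mouth_right"])
```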
After the facial feature point positions are obtained, expression judgment can be performed. Typically, when a face produces an expression change, these feature points undergo characteristic position changes. For example, when a person smiles, the mouth corners rise: compared with the face in its ordinary state, the position coordinates of the left and right mouth corners are displaced relatively upward in the smiling state. Similarly, when a person frowns, the eyes narrow slightly and the spacing between them shortens slightly, and so on. From the position-change relations of the facial feature points under different expressions, the current expression of the face can be judged, as the sketch below illustrates.
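As a concrete instance of this position-change judgment, the sketch below flags a smile when both mouth corners rise relative to the nose tip between the neutral first picture and the prompted second picture. The keypoint dictionary layout follows the MTCNN sketch above, and the inter-ocular normalization and the 0.05 threshold are illustrative assumptions, not values specified by the patent:

```python
def is_smiling(neutral, current, thresh=0.05):
    """Judge a smile from the displacement of the mouth corners.

    `neutral` and `current` are MTCNN-style keypoint dicts with entries
    'left_eye', 'right_eye', 'nose', 'mouth_left', 'mouth_right', each
    an (x, y) pixel coordinate (image y grows downward).
    """
    # Normalize by inter-ocular distance so the test is scale-invariant.
    iod = abs(neutral["right_eye"][0] - neutral["left_eye"][0])

    def corner_rise(corner):
        # Height of a mouth corner relative to the nose tip; a smile
        # raises the corner, shrinking this downward-positive gap.
        before = neutral[corner][1] - neutral["nose"][1]
        after = current[corner][1] - current["nose"][1]
        return (before - after) / iod

    return (corner_rise("mouth_left") > thresh
            and corner_rise("mouth_right") > thresh)
```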
At this point, an expression prompting instruction can be generated by the criterion generating device to prompt the user under detection to make an expression; for example, a smile prompt instruction is issued, requiring the current face to perform a smiling action.
S12, identifying a second face picture, and judging the facial expression in the second face picture.
After the user makes the expression according to the prompt instruction, the face acquisition device acquires a second face picture, i.e., a face picture bearing the expression. The second face picture is then identified, the second position coordinates of the relevant facial feature points are acquired and compared with the first position coordinates of each facial feature point in the first face picture, and the current expression of the second face picture is obtained from the change between the first and second position coordinates of the corresponding feature points.
S13, if the facial expression in the second face picture is consistent with the expression in the expression prompt instruction, judging that the current face is a live face.
Finally, in step S13, the expression detected in the second face picture is compared with the facial expression required by the expression prompt instruction; if they are consistent, the current face is judged to be a live face, otherwise it is not.
In a preferred embodiment, after the expression criteria are performed, further detection verification may be performed by face rotation criteria. Referring to fig. 3, fig. 3 is a detection method of the face rotation criterion, which includes:
s21, sending out a face rotation instruction. The rotation instruction is, for example, a left turn or a right turn. The simple left-right rotation can be regarded as rotation around a vertical axis (which is defined as a Z axis by the invention), so that the face rotation criterion can be converted into a data problem that the three-dimensional feature vector rotates around a three-dimensional vector Z axis (0, 1).
S22, recognizing a third face picture, and judging the face rotation direction in the third face picture.
After the person receives the face rotation instruction, the face acquisition device further acquires a third face picture, and then identifies and records facial feature points in the third face picture.
As analyzed above, the judgment of the face rotation direction in the third face picture is based on the change in angle of the facial feature points between the first face picture and the third face picture. If the face rotates as prompted by the rotation instruction, the acquired face pictures show the face at different angles. The judgment of the angle change is calculated according to the formula for rotating a facial feature point around a three-dimensional vector, wherein the three-dimensional vector is the vector (0, 0, 1) along the Z axis of the coordinate system to which the facial feature points belong.
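For reference, the rotation formula implied here is the standard rotation of a point about the z-axis unit vector (0, 0, 1). This is ordinary linear algebra rather than anything patent-specific, and it is what yields the eye-coordinate equations used in the embodiment below:

```latex
R_z(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta & 0 \\
\sin\theta & \phantom{-}\cos\theta & 0 \\
0 & 0 & 1
\end{pmatrix},
\qquad
(x', y', z')^{\mathsf T} = R_z(\theta)\,(x, y, z)^{\mathsf T}
\;\Rightarrow\;
x' = x\cos\theta - y\sin\theta .
```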
S23, if the face rotation direction in the third face picture is consistent with the rotation direction in the face rotation instruction, judging that the current face is a living face.
It should be noted that the rotation criterion may occur after or before the expression criterion.
Referring to fig. 4, fig. 4 is a schematic block diagram of the face living body detection apparatus of the present invention. As shown in the figure, the human face living body detection device comprises: the face acquisition device 1 is used for acquiring face pictures; the face recognition device 2 is used for recognizing the acquired face picture and recording the facial feature point positions on the face; a criterion generating device 3 for generating an expression prompting instruction; and the central processing device 4 controls the face acquisition device to acquire the face at least twice, and controls the face recognition device to recognize the change condition of facial feature points on the face during the second acquisition to obtain facial expression information, compares the facial expression information with the expression in the expression prompt instruction, and obtains a face living body detection result according to the comparison result.
In a preferred embodiment, the criterion generating device 3 further includes a face rotation instruction generation unit; the central processing device 4 controls the face acquisition device 1 to perform face acquisition a third time, controls the face recognition device 2 to recognize the change in angle of the facial feature points on the face to obtain face rotation information, compares the face rotation information with the rotation direction in the face rotation instruction, and obtains a face living body detection result from the comparison.
The following is a detailed description of one embodiment:
The face acquisition device is, for example, a camera aimed at a target area, which acquires images when a face to be detected approaches. Coordinate information is attached to the target area image, so the coordinates of the facial feature points can be conveniently extracted later. After the image is acquired, MTCNN face detection is run on it; the criterion generating device prompts the target to face the camera and make a smiling expression, the facial feature point positions are used to judge the smile, and if the smile satisfies the criterion the next step is performed: the target is prompted to rotate the face to the right at a uniform speed, then to the left at a uniform speed. The facial feature point positions are used to judge the rotation around the z-axis, and if the rotation satisfies the criterion, the detected target is judged to be a live face.
A series of pictures of the face during the rotation around the z-axis is collected; MTCNN face detection crops each frame detected in real time, and the subsequent pictures are normalized against the first frame. Assuming the frontal view is projected onto the xoz plane, the two-dimensional coordinates (x1, z1), (x2, z2), (x3, z3) of three feature points in that plane are extracted, and the scale factor n is the ratio of the current nose-tip-to-left-eye height z3-z1 to the first frame's height first_z3-first_z1:
n=(z3-z1)/(first_z3-first_z1)
The length and width of each frame are then rescaled in the same proportion according to n, so that the distances between the feature points in the normalized series of pictures match.
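A minimal sketch of this normalization step follows, assuming OpenCV for the resizing. The patent specifies only the scale factor n; whether the frame is scaled by n or 1/n is ambiguous in translation, and dividing by n is the reading under which the feature-point distances match the first frame:

```python
import cv2

def normalize_frame(frame, z1, z3, first_z1, first_z3):
    """Rescale a frame so feature-point distances match the first frame.

    z1, z3: left-eye and nose-tip heights of the current frame in the
    xoz plane; first_z1, first_z3: the same heights in the first frame.
    """
    n = (z3 - z1) / (first_z3 - first_z1)   # scale factor from the patent
    h, w = frame.shape[:2]
    # Dividing by n maps the current inter-point distances back to the
    # first frame's scale (assumed reading of "scaled by n times").
    return cv2.resize(frame, (round(w / n), round(h / n)))
```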
The picture is mapped onto the xoz plane; the first coordinates (old_x1, old_z1) of the left-eye feature point and (old_x2, old_z2) of the right-eye feature point are taken from the frontal view, the third coordinates (new_x1, new_z1) of the left eye and (new_x2, new_z2) of the right eye are extracted from the normalized picture, and the rotation angle θ is calculated from the coordinates of the two eyes:
cos(θ)*old_x1-sin(θ)*old_y1=new_x1
cos(θ)*old_x2-sin(θ)*old_y2=new_x2
Since the y-axis coordinates of a person's left and right eyes are approximately equal (old_y2 = old_y1), subtracting the first equation from the second eliminates the sin(θ) term, giving
cos(θ)=(new_x2-new_x1)/(old_x2-old_x1)
θ=arccos((new_x2-new_x1)/(old_x2-old_x1))
which yields the rotation angle θ.
The frontal view corresponds to a rotation angle of 0°, and L=old_x2-old_x1 is a fixed value: the distance between the two eyes along the x-axis in the frontal view. When the face rotates around the z-axis at a uniform speed, the rotation angles θ are uniformly distributed over [0, π/2), and the value of (new_x2-new_x1)/L varies as shown in fig. 5, a schematic diagram of (new_x2-new_x1)/L as a function of the rotation angle θ during uniform rotation around the z-axis.
If the value of (new_x2-new_x1)/L is consistent with fig. 5 and the value of new_x1 gradually increases, it can be determined that the face is performing a uniform rightward rotation around the z-axis.
If the value of (new_x2-new_x1)/L is consistent with fig. 5 and the value of new_x1 gradually decreases, it can be determined that the face is performing a uniform leftward rotation around the z-axis.
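Combining the angle formula with the direction test, a minimal sketch follows. The old_x/new_x names mirror the patent's notation; reading "gradually increasing" as per-frame monotonicity is an illustrative assumption:

```python
import numpy as np

def rotation_angle(old_x1, old_x2, new_x1, new_x2):
    """theta = arccos((new_x2 - new_x1) / (old_x2 - old_x1))."""
    ratio = (new_x2 - new_x1) / (old_x2 - old_x1)
    return np.arccos(np.clip(ratio, -1.0, 1.0))   # clip guards against noise

def rotation_direction(eye_xs):
    """Judge the rotation direction from a series of normalized frames.

    eye_xs is a list of (new_x1, new_x2) pairs: the left- and right-eye
    x-coordinates per frame, the first frame being the frontal view.
    """
    L = eye_xs[0][1] - eye_xs[0][0]      # frontal inter-eye x distance
    ratios = [(x2 - x1) / L for x1, x2 in eye_xs]
    # Fig. 5 behaviour: (new_x2-new_x1)/L falls like cos(theta) on [0, pi/2).
    falling = all(a >= b for a, b in zip(ratios, ratios[1:]))
    x1s = [x1 for x1, _ in eye_xs]
    if falling and all(a <= b for a, b in zip(x1s, x1s[1:])):
        return "right"   # new_x1 gradually increases: rightward rotation
    if falling and all(a >= b for a, b in zip(x1s, x1s[1:])):
        return "left"    # new_x1 gradually decreases: leftward rotation
    return "inconsistent"
```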
In summary, the invention judges whether the detection target is a live face from the smiling expression and the rotation action of the face, establishing criteria based on the smiling action and the uniform rotation around the z-axis, which makes the detection method direct and effective. The invention can judge both actions by locating only the three-dimensional coordinates of five feature points: the left eye, the right eye, the nose tip, the left mouth corner, and the right mouth corner, thereby achieving face living body detection with high practicability. Since only a small number of feature points are selected, the amount of computation is small, the program running time is short, and the detection efficiency is high.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.
Claims (8)
1. A face living body detection method, characterized in that the detection is performed by an expression criterion, wherein the expression criterion comprises:
recognizing a first face picture and sending out an expression prompt instruction;
identifying a second face picture, and judging the facial expression in the second face picture;
if the facial expression in the second face picture is consistent with the expression in the expression prompting instruction, judging that the current face is a live face;
the judging of the facial expression in the second facial image is achieved through position change of facial feature points in the first facial image and the second facial image, the facial feature points are identified through a multi-task convolutional neural network, and the multi-task convolutional neural network identification comprises the following steps:
scaling the target picture to form a plurality of image pyramids under different scales;
generating candidate windows and frame regression vectors of face areas in each picture by using P-Net, correcting the candidate windows by using a frame regression method, and merging overlapped candidate frames by using a non-maximum suppression method; using R-Net to improve the candidate window, inputting the candidate window passing through P-Net into R-Net, rejecting the error window, and continuing to process the candidate frame by using frame regression and non-maximum suppression method;
and outputting the final face frame and the face feature point positions by using the O-Net.
2. The face living body detection method according to claim 1, wherein the face feature points at least include a left eye, a right eye, a nose tip, a left mouth corner, and a right mouth corner, the first position coordinates of the face feature points are recorded when the first face picture is identified, the second position coordinates of the face feature points are recorded when the second face picture is identified, and the face expression is judged according to the changes of the first position coordinates and the second position coordinates of the corresponding face feature points.
3. The face living body detection method according to claim 1, wherein the detection method is further performed by face rotation criteria, comprising:
sending out a face rotation instruction;
recognizing a third face picture, and judging the face rotation direction in the third face picture;
and if the face rotation direction in the third face picture is consistent with the rotation direction in the face rotation instruction, judging that the current face is a living face.
4. The face living body detection method according to claim 3, wherein the judgment of the face rotation direction in the third face picture is based on the change in angle of facial feature points between the first face picture and the third face picture.
5. The face living body detection method according to claim 4, wherein the judgment of the angle change is calculated according to the formula for rotating a facial feature point around a three-dimensional vector, wherein the three-dimensional vector is the vector (0, 0, 1) along the Z-axis of the coordinate system to which the facial feature points belong.
6. The face living body detection method according to claim 4, wherein the face rotation criterion is performed before or after the expression criterion.
7. A human face living body detection apparatus, characterized by comprising:
the face acquisition device is used for acquiring face pictures;
the face recognition device is used for recognizing the face of the acquired face picture and recording the position of the facial feature point on the face;
the criterion generating device generates an expression prompting instruction;
the central processing device controls the face acquisition device to acquire the face at least twice, and controls the face recognition device to recognize the change condition of facial feature points on the face during the second acquisition to obtain facial expression information, compares the facial expression information with the expression in the expression prompt instruction, and obtains a face living body detection result according to the comparison result;
the face recognition device is specifically used for recognizing the characteristic points of the face of the person through a multitasking convolutional neural network, and the multitasking convolutional neural network recognition comprises the following steps:
scaling the target picture to form an image pyramid at multiple scales;
generating candidate windows and bounding-box regression vectors for the face area in each picture with P-Net, correcting the candidate windows by bounding-box regression, and merging overlapping candidates by non-maximum suppression; refining the candidate windows with R-Net: the candidate windows passing P-Net are input to R-Net, erroneous windows are rejected, and the candidates continue to be processed by bounding-box regression and non-maximum suppression;
and outputting the final face frame and the facial feature point positions with O-Net.
8. The face living body detection device according to claim 7, wherein the criterion generating device further generates a face rotation instruction; the central processing device controls the face acquisition device to acquire the face a third time, controls the face recognition device to recognize the change in angle of the facial feature points on the face to obtain face rotation information, compares the face rotation information with the rotation direction in the face rotation instruction, and obtains a face living body detection result based on the comparison.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910302473.XA CN111652019B (en) | 2019-04-16 | 2019-04-16 | Face living body detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910302473.XA CN111652019B (en) | 2019-04-16 | 2019-04-16 | Face living body detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111652019A CN111652019A (en) | 2020-09-11 |
CN111652019B true CN111652019B (en) | 2023-06-20 |
Family
ID=72346246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910302473.XA Active CN111652019B (en) | 2019-04-16 | 2019-04-16 | Face living body detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111652019B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118351602A (en) * | 2024-06-14 | 2024-07-16 | Hangzhou Hikvision Digital Technology Co., Ltd. | Iris recognition equipment and method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10565433B2 (en) * | 2017-03-30 | 2020-02-18 | George Mason University | Age invariant face recognition using convolutional neural networks and set distances |
2019-04-16: Application CN201910302473.XA filed in China; granted as patent CN111652019B (status: Active).
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751110A (en) * | 2013-12-31 | 2015-07-01 | 汉王科技股份有限公司 | Bio-assay detection method and device |
CN106355147A (en) * | 2016-08-26 | 2017-01-25 | 张艳 | Acquiring method and detecting method of live face head pose detection regression apparatus |
CN108985134A (en) * | 2017-06-01 | 2018-12-11 | 重庆中科云丛科技有限公司 | Face In vivo detection and brush face method of commerce and system based on binocular camera |
Non-Patent Citations (1)
Title |
---|
Qian Yongsheng; Shao Jie; Ji Xinxin; Li Xiaorui; Mo Chen; Cheng Qiyu. Multi-view facial expression recognition based on an improved convolutional neural network. Computer Engineering and Applications. 2018, (24), full text. * |
Also Published As
Publication number | Publication date |
---|---|
CN111652019A (en) | 2020-09-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |