CN105740778B - Improved three-dimensional face liveness detection method and device - Google Patents


Info

Publication number
CN105740778B
CN105740778B
Authority
CN
China
Prior art date
Legal status: Active
Application number
CN201610048479.5A
Other languages
Chinese (zh)
Other versions
CN105740778A (en)
Inventor
孔勇
王玉瑶
Current Assignee
Beijing Eye Intelligent Technology Co Ltd
Beijing Eyecool Technology Co Ltd
Original Assignee
Beijing Eye Intelligence Technology Co Ltd
Priority date
Application filed by Beijing Eye Intelligence Technology Co Ltd filed Critical Beijing Eye Intelligence Technology Co Ltd
Priority to CN201610048479.5A priority Critical patent/CN105740778B/en
Publication of CN105740778A publication Critical patent/CN105740778A/en
Application granted granted Critical
Publication of CN105740778B publication Critical patent/CN105740778B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G06V20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces

Abstract

The invention provides an improved three-dimensional face liveness detection method and device, relating to the technical fields of face recognition and image processing. The method comprises the following steps: collecting a three-dimensional face image; selecting a plurality of feature points from the three-dimensional face image and acquiring three-dimensional coordinate information of each point in the neighborhood of each feature point; calculating the actual surface curvature of each feature point according to the three-dimensional coordinate information of each point in its neighborhood; and judging whether the three-dimensional face image comes from a living body according to the actual surface curvature of each feature point. Because the invention performs liveness detection according to the actual surface curvature of the feature points, the detection result has high accuracy as well as robustness and stability.

Description

Improved three-dimensional face liveness detection method and device
Technical Field
The invention relates to the fields of face recognition and image processing technology, and in particular to an improved three-dimensional face liveness detection method and device.
Background
Face recognition is a biometric technology that performs identity recognition based on a person's facial feature information. It refers to a series of related technologies, also commonly called portrait recognition or facial recognition, in which a camera or video camera captures an image or video stream containing a face, the face is automatically detected and tracked in the image, and recognition is then performed on the detected face. Face recognition technology can effectively enhance security and privacy, but a problem exists in practice: false information such as a printed photo, a photo on a mobile phone or tablet, or a video can break through the security line of a face detection system and threaten security and privacy.
Introducing liveness detection technology into face recognition can effectively prevent illegal users from passing a face detection system with false information such as photos and videos of legitimate users, thereby closing this security hole. Face liveness detection methods already exist in the prior art. For example, Chinese patent application No. 201310133442.9 discloses a method and system for distinguishing a real face from a picture of a face: two cameras collect images and the faces in them are matched; for the matched faces, three-dimensional coordinates of the facial feature points are established from the feature points and the projection matrices of the two cameras; the maximum depth difference between the feature points is obtained from these three-dimensional coordinates; and this maximum depth difference is compared with a preset depth threshold to determine whether the face is a real face.
The existing face liveness detection method uses only the maximum depth difference between feature points for a simple judgment, and therefore has low accuracy and poor robustness and stability.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a three-dimensional face liveness detection method and device that improve the accuracy, robustness and stability of the detection result.
In order to solve the above problem, the invention discloses an improved three-dimensional face liveness detection method, which comprises:
collecting a three-dimensional face image;
selecting a plurality of feature points from the three-dimensional face image, and acquiring three-dimensional coordinate information of each point in the neighborhood of each feature point;
calculating the actual surface curvature of each feature point according to the three-dimensional coordinate information of each point in the neighborhood of each feature point;
and judging whether the three-dimensional face image comes from a living body according to the actual surface curvature of each feature point.
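A minimal sketch of these four steps as a pipeline; all names are hypothetical placeholders for the operations described above, not an API defined by the patent:

```python
def detect_liveness(face_image, select_points, get_neighborhood,
                    surface_curvature, decide):
    """Sketch of the four claimed steps with placeholder callables."""
    # Steps 1-2: select feature points and gather each one's neighborhood.
    points = select_points(face_image)
    neighborhoods = [get_neighborhood(face_image, p) for p in points]
    # Step 3: actual surface curvature per feature point.
    curvatures = [surface_curvature(n) for n in neighborhoods]
    # Step 4: live / not-live decision from the curvatures.
    return decide(curvatures)
```

Each callable corresponds to one claimed step, so the later embodiments (surface fitting, type classification, threshold comparison) slot in without changing the pipeline.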
As an illustration, the actual surface curvatures are: the actual Gaussian curvature and the actual mean curvature.
As an example, judging whether the three-dimensional face image comes from a living body according to the curvature information includes:
presetting a standard surface type for each feature point;
obtaining the actual surface type of each feature point according to its actual Gaussian curvature and actual mean curvature;
and comparing the actual surface type of each feature point with that feature point's standard surface type to judge whether the three-dimensional face image comes from a living body.
As an example, judging whether the three-dimensional face image comes from a living body by comparing the actual surface type of each feature point with the standard surface type of that feature point includes:
comparing the actual surface type of each feature point with the standard surface type of that feature point;
acquiring the number of feature points whose actual surface types completely conform to the corresponding standard surface types; if the number of completely conforming feature points is greater than or equal to a preset threshold, judging that the three-dimensional face image comes from a living body; and if the number of completely conforming feature points is less than the preset threshold, judging that the three-dimensional face image comes from a non-living body.
As an illustration, the plurality of feature points includes a combination of: the glabella, the nasal root, the outer canthus of the left eye, the inner canthus of the left eye, the inner canthus of the right eye, the outer canthus of the right eye, the tip of the nose, the left edge of the nose, the right edge of the nose and the middle of the nose.
The invention also discloses an improved three-dimensional face liveness detection device, which comprises:
a collection module for collecting a three-dimensional face image;
an acquisition module for selecting a plurality of feature points from the three-dimensional face image and acquiring three-dimensional coordinate information of each point in the neighborhood of each feature point;
a calculation module for calculating the actual surface curvature of each feature point according to the three-dimensional coordinate information of each point in its neighborhood;
and a judging module for judging whether the three-dimensional face image comes from a living body according to the actual surface curvature of each feature point.
As an illustration, the actual surface curvatures are: the actual Gaussian curvature and the actual mean curvature.
As an illustration, the judging module includes:
a type presetting module for presetting the standard surface type of each feature point;
a type obtaining module for obtaining the actual surface type of each feature point according to its actual Gaussian curvature and actual mean curvature;
and a type comparison module for comparing the actual surface type of each feature point with that feature point's standard surface type to judge whether the three-dimensional face image comes from a living body.
As an illustration, the type comparison module includes:
a comparison submodule for comparing the actual surface type of each feature point with the standard surface type of that feature point;
and a judgment submodule for acquiring the number of feature points whose actual surface types completely conform to the corresponding standard surface types, judging that the three-dimensional face image comes from a living body if the number of completely conforming feature points is greater than or equal to a preset threshold, and judging that it comes from a non-living body if that number is less than the preset threshold.
As an illustration, the plurality of feature points includes a combination of: the glabella, the nasal root, the outer canthus of the left eye, the inner canthus of the left eye, the inner canthus of the right eye, the outer canthus of the right eye, the tip of the nose, the left edge of the nose, the right edge of the nose and the middle of the nose.
Compared with the prior art, the invention has the following advantages:
the method calculates the actual surface curvature of each feature point according to the three-dimensional coordinate information of each point in its neighborhood, and judges whether the three-dimensional face image comes from a living body according to those curvatures. Because some feature points lie on surfaces or planes of specific shape on the face and are not easily affected by expressions, their surface curvatures are stable; performing liveness detection according to the actual surface curvature of the feature points therefore gives a detection result with high accuracy as well as robustness and stability.
The actual surface curvatures of the invention may be the actual Gaussian curvature and the actual mean curvature, which together characterize well the degree of bending and the type of the surface on which each feature point lies.
The plurality of feature points selected by the invention may include a combination of: the glabella, the nasal root, the outer canthus of the left eye, the inner canthus of the left eye, the inner canthus of the right eye, the outer canthus of the right eye, the tip of the nose, the left edge of the nose, the right edge of the nose and the middle of the nose. Because the eyes and the nose are, respectively, the most recessed and most protruding regions of the face, feature points selected in these regions have good stability and are representative of facial features; moreover, the selected feature points lie on surfaces or planes of specific shape and are not easily affected by expressions.
Drawings
FIG. 1 is a flow chart of an embodiment of the improved three-dimensional face liveness detection method of the present invention;
FIG. 2 is a schematic diagram of an exemplary selection of facial feature points in an embodiment of the method of the present invention;
FIG. 3 is a schematic diagram of one example of step 104 in an embodiment of the method of the present invention;
FIG. 4 is a schematic diagram of another example of step 104 in an embodiment of the method of the present invention;
FIG. 5 is a schematic structural diagram of an embodiment of the improved three-dimensional face liveness detection device of the present invention;
FIG. 6 is a schematic diagram of an exemplary structure of the judging module 504 in an embodiment of the device of the present invention;
FIG. 7 is a schematic diagram of another exemplary structure of the judging module 504 in an embodiment of the device of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to FIG. 1, a flow chart of an embodiment of the improved three-dimensional face liveness detection method of the present invention is shown. The method includes:
step 101, collecting a three-dimensional face image.
A three-dimensional face image is typically acquired with a three-dimensional data acquisition module, which may include a 3D camera for collecting three-dimensional face data, and a data processing module for processing the three-dimensional face data to generate a three-dimensional face image. Unlike a two-dimensional face image, a three-dimensional face image contains depth information of the face, i.e., distance data in three-dimensional space. Specifically, a three-dimensional coordinate system may be established with the lens of the 3D camera as the origin, the direction from the 3D camera toward the face as the positive z-axis, and the positive x- and y-axes determined according to a left-handed coordinate system; the three-dimensional face data are then the x, y and z coordinates of points on the face in this coordinate system, where the z coordinate represents a point's depth. The 3D camera can directly acquire the three-dimensional coordinates of certain specific points on the face, and the three-dimensional coordinates of other points can be calculated from those acquired points.
As an illustration, a 3D camera may include an infrared laser transmitter, a first infrared sensor and a second infrared sensor. Imitating the parallax principle of human eyes, the infrared laser transmitter emits a beam of infrared light, the two infrared sensors track the position of the beam, and the depth information in the three-dimensional face image is then calculated by the triangulation principle. The 3D camera may also take other forms, all of which belong to the prior art and are not described again here.
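Assuming a rectified pinhole stereo model with known focal length and sensor baseline (an assumption, since the patent does not specify the camera geometry), the triangulation principle mentioned above reduces to the classic depth-from-disparity formula:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: depth = focal * baseline / disparity.
    Illustrates the principle only; not the camera's actual firmware math."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 600 px focal length and a 7.5 cm baseline, a 90 px disparity between the two infrared sensors corresponds to a depth of 0.5 m.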
Step 102, selecting a plurality of feature points from the three-dimensional face image, and acquiring three-dimensional coordinate information of each point in the neighborhood of each feature point.
In this step, a plurality of feature points capable of representing facial features are selected from the three-dimensional face image. The invention does not particularly limit the selection; typically, feature points are selected from one or a combination of the eye region, the nose region and the mouth region. These regions lie in the upper, middle and lower parts of the face respectively and can represent its main features, and the eyes and the nose are respectively the most recessed and most protruding regions of the face, so feature points there have good stability and representativeness. The selected feature points should lie on a surface or plane of a specific shape and not be susceptible to expression.
As an example, with feature points selected in the eye and nose regions as shown in FIG. 2, the plurality of feature points selected in step 102 may be: the glabella (feature point 1), the nasal root (feature point 2), the outer canthus of the left eye (feature point 3), the inner canthus of the left eye (feature point 4), the inner canthus of the right eye (feature point 5), the outer canthus of the right eye (feature point 6), the tip of the nose (feature point 7), the left edge of the nose (feature point 8), the right edge of the nose (feature point 9), and the middle of the nose (feature point 10).
As another example, the plurality of feature points selected in step 102 may include, in addition to the 10 points shown in fig. 2: the middle point of the upper lip edge, the middle point of the lower lip edge, a point on the bridge of the nose, and other points.
As a further example, the plurality of feature points selected in step 102 may also be several of the ten points shown in FIG. 2.
After the feature points are selected, the three-dimensional coordinate information of each point in the neighborhood of each feature point is acquired. Preferably, the neighborhood size is 3 × 3; that is, the 8 points adjacent to the feature point are selected, and together with the feature point they form 9 points in total. The three-dimensional coordinate information of these 9 points is then obtained, and the actual surface curvature of the corresponding feature point is calculated from it. This choice of neighborhood represents the actual surface curvature of the feature point well and offers good robustness.
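A sketch of the 3 × 3 neighborhood extraction, assuming the three-dimensional coordinates are stored as an H × W × 3 grid aligned with the image (a storage layout assumed here, not stated in the patent):

```python
import numpy as np

def neighborhood_3x3(xyz, row, col):
    """Return the 9 points (feature point plus its 8 neighbours) of the
    3x3 window centred on (row, col) in an H x W x 3 coordinate grid.
    Assumes the feature point is at least one pixel from the border."""
    if not (1 <= row < xyz.shape[0] - 1 and 1 <= col < xyz.shape[1] - 1):
        raise ValueError("feature point too close to the image border")
    # Row-major window: the feature point itself lands at index 4.
    return xyz[row - 1:row + 2, col - 1:col + 2].reshape(9, 3)
```

The (9, 3) array produced here is exactly the input the curvature calculation in step 103 needs.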
Step 103, calculating the actual surface curvature of each feature point according to the three-dimensional coordinate information of each point in its neighborhood.
According to the principles of differential geometry, the actual surface curvature of each feature point can be calculated from the three-dimensional coordinate information of several adjacent points. The actual surface curvature is the surface curvature of the feature point obtained by actual calculation, and it represents the actual bent shape of the surface on which the feature point lies.
As an illustration, the actual surface curvatures calculated are the actual Gaussian curvature and the actual mean curvature. The product of the two principal curvatures of a surface at a point is called the Gaussian curvature of the surface at that point, and it reflects the overall degree of bending of the surface there. The mean curvature is the average of the two principal curvatures and locally describes the bending of a surface embedded in the surrounding space (e.g., a two-dimensional surface embedded in three-dimensional Euclidean space). Together, the actual Gaussian curvature and the actual mean curvature characterize well the degree of bending and the type of the surface on which a feature point lies.
As another example, the actual surface curvatures calculated are the actual principal curvatures, the actual Gaussian curvature and the actual mean curvature. The specific methods for calculating them belong to the prior art and are not described again here.
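One common differential-geometry route, offered only as an assumed sketch since the patent does not fix a formula, is to least-squares fit a quadratic Monge patch z = f(x, y) to the 9 neighborhood points and evaluate the standard Gaussian- and mean-curvature formulas at the feature point:

```python
import numpy as np

def gaussian_and_mean_curvature(points):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a (9, 3) array
    of neighborhood points (feature point at index 4, as produced by a
    row-major 3x3 window), then evaluate the Monge-patch formulas
    K = (z_xx*z_yy - z_xy^2) / (1 + z_x^2 + z_y^2)^2 and
    H = ((1+z_y^2)z_xx - 2 z_x z_y z_xy + (1+z_x^2)z_yy) / (2 w^1.5)
    at the feature point. A least-squares sketch, not the patent's text."""
    p = np.asarray(points, dtype=float)
    # Centre x and y on the feature point so the fit is evaluated there.
    x, y, z = p[:, 0] - p[4, 0], p[:, 1] - p[4, 1], p[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    zx, zy, zxx, zxy, zyy = d, e, 2 * a, b, 2 * c
    w = 1 + zx * zx + zy * zy
    K = (zxx * zyy - zxy * zxy) / (w * w)                       # Gaussian
    H = ((1 + zy * zy) * zxx - 2 * zx * zy * zxy
         + (1 + zx * zx) * zyy) / (2 * w ** 1.5)                # mean
    return K, H
```

On a paraboloid patch z = -(x² + y²)/2 this recovers K = 1 and H = -1 exactly, since the quadratic fit reproduces quadratic data; the signs of H depend on the surface-normal orientation, which must match the convention used in the standard-type table.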
Step 104, judging whether the three-dimensional face image comes from a living body according to the actual surface curvature of each feature point.
In this step, a recognition standard may be preset; if the actual surface curvatures of the feature points are judged to meet the standard, the three-dimensional face image is determined to come from a living body, and otherwise from a non-living body.
Because some feature points lie on surfaces or planes of specific shape on the face and are not easily affected by expressions, their surface curvatures are stable. Performing liveness detection according to the actual surface curvature of the feature points, calculated from the three-dimensional coordinate information of each point in each feature point's neighborhood, therefore gives a detection result with high accuracy as well as robustness and stability.
As an illustration, as shown in FIG. 3, step 104 may include the following sub-steps:
step 301, presetting the standard surface type of each feature point.
For a live human face, the feature points lie on surfaces or planes of specific shape with standard surface curvatures, and each standard surface curvature corresponds to a surface type. Taking the 10 feature points shown in FIG. 2 as an example, the standard surface type of each feature point may be preset as follows:
at feature point 1 (glabella), K = 0 and H = 0, and the surface on which it lies is a plane;
at feature point 2 (nasal root), K < 0 and H < 0, and the surface on which it lies is saddle-shaped;
at feature point 3 (outer canthus of the left eye), feature point 4 (inner canthus of the left eye), feature point 5 (inner canthus of the right eye) and feature point 6 (outer canthus of the right eye), K > 0 and H < 0, and the surfaces on which they lie are concave;
at feature point 7 (nose tip), K > 0 and H > 0, and the surface on which it lies is convex;
at feature point 8 (left edge of the nose), feature point 9 (right edge of the nose) and feature point 10 (middle of the nose), K > 0 and H < 0, and the surfaces on which they lie are concave.
Here K is the preset standard Gaussian curvature of each feature point and H is the preset standard mean curvature of each feature point.
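The per-point table can be encoded as a sign-based lookup. The zero tolerance eps, and taking K < 0 for the saddle case (the standard condition for a saddle surface), are assumptions made here:

```python
def surface_type(K, H, eps=1e-6):
    """Classify a feature point's surface from the signs of its Gaussian
    curvature K and mean curvature H, following the table above."""
    if abs(K) < eps and abs(H) < eps:
        return "plane"        # e.g. the glabella
    if K < -eps and H < -eps:
        return "saddle"       # e.g. the nasal root
    if K > eps and H < -eps:
        return "concave"      # e.g. the four eye corners, nose edges
    if K > eps and H > eps:
        return "convex"       # e.g. the nose tip
    return "other"            # curvature signs outside the table
```

The same function then yields both the standard type (from the preset K, H signs) and the actual type (from the measured curvatures), so the comparison in step 303 is a string equality test.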
Step 302, obtaining the actual surface type of each feature point according to its actual Gaussian curvature and actual mean curvature.
The surface curvature of the feature points works well as a discriminator. For example, an unbent photograph is essentially a plane: the actual Gaussian curvature and actual mean curvature of all 10 feature points shown in FIG. 2 are 0, the surface at every feature point is a plane, and this clearly differs from the preset standard. For a bent photograph, for example one held in the hand with its left and right edges curved outward, the actual Gaussian curvature and actual mean curvature of feature points 1, 2, 7 and 10 are 0 and their surfaces are planes, while the curvatures of the remaining feature points 3, 4, 5, 6, 8 and 9 are greater than 0 and their surfaces are convex, which again clearly differs from the preset standard. Other cases can be analyzed in the same way.
Step 303, comparing the actual surface type of each feature point with the standard surface type of that feature point to determine whether the three-dimensional face image comes from a living body.
As an illustration, step 303 includes the following sub-steps:
Step 3031, comparing the actual surface type of each feature point with the standard surface type of that feature point;
Step 3032, acquiring the number of feature points whose actual surface types completely conform to the corresponding standard surface types; if the number of completely conforming feature points is greater than or equal to a first preset threshold, judging that the three-dimensional face image comes from a living body, and if it is less than the first preset threshold, judging that it comes from a non-living body.
The first preset threshold may be set according to the number of feature points selected in step 102 and should be less than or equal to that number. An appropriate first preset threshold should be chosen: if it is too large, the amount of calculation is large; if it is too small, the judgment standard is too low and the accuracy of face recognition suffers.
As an example, for the 10 feature points shown in FIG. 2, the first preset threshold may be 8: the number of the 10 feature points whose actual surface type completely conforms to the corresponding standard surface type is acquired, and if that number is greater than or equal to 8, the three-dimensional face image is judged to come from a living body; if it is less than 8, it is judged to come from a non-living body.
As a preferred embodiment, a multi-dimensional feature vector may be constructed for each three-dimensional face image, with the comparison result for each feature point serving as one feature value of the vector: if the actual surface type of a feature point is the same as its standard surface type, the corresponding feature value is 1; if they differ, the feature value is 0. For example, for the 10 feature points shown in FIG. 2, a 10-dimensional feature vector [a1, a2, …, a10] representing the face image may be constructed, where a1, a2, …, a10 are the feature values corresponding to feature points 1 to 10. The standard surface type of feature point 1 is a plane; if the actual surface type of feature point 1 obtained in step 302 is also a plane, its feature value a1 is 1, i.e., the first value of the feature vector is 1; if it is not a plane, a1 is 0. The remaining 9 feature points are processed in the same way as feature point 1 and are not described again here.
According to the 10-dimensional feature vector [a1, a2, …, a10], the decision is then made as follows: if the sum of the feature values is greater than or equal to the first preset threshold (preferably 8 here), that is,

a1 + a2 + … + a10 ≥ 8,

the three-dimensional face image is judged to come from a living body; if the sum of the feature values is less than the first preset threshold, that is, a1 + a2 + … + a10 < 8, the three-dimensional face image is judged to come from a non-living body.
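The feature-vector decision can be sketched as follows, with the default threshold of 8 taken from the example above (function and variable names are illustrative, not from the patent):

```python
def liveness_from_types(actual_types, standard_types, threshold=8):
    """Build the binary feature vector [a1, ..., a10] (1 where the actual
    surface type equals the standard type) and apply the sum-versus-
    threshold rule. Returns (decision, feature_vector)."""
    a = [1 if act == std else 0
         for act, std in zip(actual_types, standard_types)]
    return sum(a) >= threshold, a
```

An unbent photograph, whose feature points all read as planes, matches the standard only at the glabella, so its vector sums to 1 and it is rejected.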
As another illustration, step 303 may include the following sub-steps:
Step 3033, comparing the actual surface type of each feature point with the standard surface type of that feature point;
Step 3034, acquiring the number of feature points whose actual surface types do not conform to the corresponding standard surface types; if the number of non-conforming feature points is less than a second preset threshold, judging that the three-dimensional face image comes from a living body, and if it is greater than or equal to the second preset threshold, judging that it comes from a non-living body.
The second preset threshold may be set according to the number of feature points selected in step 102 and should be less than or equal to that number. An appropriate second preset threshold should be chosen: if it is too large, the recognition standard is too low and the accuracy of face recognition suffers.
As an example, for the 10 feature points shown in FIG. 2, the second preset threshold may be 2: the number of the 10 feature points whose actual surface type does not conform to the corresponding standard surface type is acquired, and if that number is less than 2, the three-dimensional face image is judged to come from a living body; if it is greater than or equal to 2, it is judged to come from a non-living body.
As another example, as shown in FIG. 4, step 104 may include the following sub-steps:
step 401, presetting a standard gaussian curvature and a standard plane curvature of each feature point.
And step 402, comparing the actual Gaussian curvature and the actual plane curvature of each feature point with the standard Gaussian curvature and the standard plane curvature of the feature point, and judging whether the three-dimensional face image is from a living body.
As an illustration, the step 402 may include the following sub-steps:
step 4021, comparing the actual Gaussian curvature and the actual plane curvature of each feature point with the standard Gaussian curvature and the standard plane curvature of the feature point;
step 4022, acquiring the number of feature points whose actual Gaussian curvatures and actual plane curvatures completely match the corresponding standard Gaussian curvatures and standard plane curvatures; if the number of completely matched feature points is greater than or equal to a third preset threshold, judging that the three-dimensional face image is from a living body; and if the number of completely matched feature points is less than the third preset threshold, judging that the three-dimensional face image is from a non-living body.
Specifically, if the sign of the actual Gaussian curvature of a feature point is the same as the sign of the corresponding standard Gaussian curvature, and the sign of the actual plane curvature of the feature point is the same as the sign of the corresponding standard plane curvature, the actual Gaussian curvature and actual plane curvature of that feature point are judged to completely match the corresponding standard values. If the signs of the Gaussian curvatures differ and/or the signs of the plane curvatures differ, the actual curvatures of the feature point do not completely match the standard values. The sign takes one of three values: positive (greater than 0), negative (less than 0), or 0.
The third preset threshold may be set according to the number of feature points selected in step 102, and should be less than or equal to that number. A suitable third preset threshold should be chosen: if it is too large, the amount of calculation is large; if it is too small, the judgment standard becomes too lax and the accuracy of face recognition suffers. As an example, for the 10 feature points shown in fig. 2, the third preset threshold may be 8.
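The sign-comparison rule of steps 4021-4022 can be sketched as follows, assuming the curvatures are given as plain floats. The exact-zero comparison mirrors the text; on real measurements a small tolerance band around 0 would likely be needed:

```python
def sign(x):
    """Return +1, -1 or 0 -- the three 'positive and negative values'
    that the method distinguishes."""
    return (x > 0) - (x < 0)

def completely_matches(actual_K, actual_H, standard_K, standard_H):
    """A feature point completely matches when both its Gaussian curvature K
    and its plane curvature H agree in sign with the standard values."""
    return (sign(actual_K) == sign(standard_K)
            and sign(actual_H) == sign(standard_H))

def is_live_by_curvature_signs(actual, standard, third_threshold=8):
    """actual, standard: sequences of (K, H) pairs, one per feature point."""
    matched = sum(completely_matches(aK, aH, sK, sH)
                  for (aK, aH), (sK, sH) in zip(actual, standard))
    return matched >= third_threshold
```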
Referring to fig. 5, there is shown a schematic structural diagram of an embodiment of the improved three-dimensional human face living body detection device 500 of the present invention, including:
the acquisition module 501 is used for acquiring a three-dimensional face image;
an obtaining module 502, configured to select multiple feature points from the three-dimensional face image, and obtain three-dimensional coordinate information of each point in a neighborhood of each feature point;
a calculating module 503, configured to calculate an actual curvature of the curved surface of each feature point according to three-dimensional coordinate information of each point in a neighborhood of each feature point;
and the judging module 504 is configured to judge whether the three-dimensional face image is from a living body according to the actual curvature of the curved surface of each feature point.
According to the method and the device, the actual surface curvature of each feature point is calculated from the three-dimensional coordinate information of the points in its neighborhood, and whether the three-dimensional face image comes from a living body is judged from those actual curvatures. Because the selected feature points lie on characteristically shaped curved surfaces or planes of the face and are not easily affected by expressions, their surface curvatures are stable. Performing living-body detection based on these actual curvatures therefore yields a detection result that is highly accurate as well as robust and stable.
As an illustration, as shown in fig. 2, the plurality of feature points include: the glabella, the nasal root, the outer canthus of the left eye, the inner canthus of the left eye, the inner canthus of the right eye, the outer canthus of the right eye, the tip of the nose, the left edge of the nose, the right edge of the nose and the middle of the nose. As another example, the plurality of feature points may include, in addition to the 10 points shown in fig. 2: the middle point of the upper lip edge, the middle point of the lower lip edge, a point on the bridge of the nose, and other points.
As an illustration, the actual curvature of the curved surface comprises: the actual Gaussian curvature and the actual plane curvature. As another illustration, the actual surface curvature comprises: the actual principal curvatures, the actual Gaussian curvature, and the actual plane curvature.
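The patent does not say how these curvatures are computed from the neighborhood coordinates. One common approach, shown here only as an illustrative sketch, is to least-squares fit a quadric z = ax² + bxy + cy² + dx + ey + f to the neighborhood (with the feature point translated to the origin) and evaluate the Monge-patch curvature formulas at (0, 0); the formula for H below is, in differential-geometry terms, the mean curvature, which appears to correspond to the "plane curvature" H of this text:

```python
import numpy as np

def surface_curvatures(points):
    """Estimate Gaussian curvature K and curvature H at the centre of a
    neighbourhood by least-squares fitting z = ax^2 + bxy + cy^2 + dx + ey + f.
    This fit is an assumption -- one common way to obtain curvatures from
    sampled 3-D coordinates, not a procedure prescribed by the patent.

    points: (n, 3) array-like of (x, y, z) neighbourhood coordinates,
    translated so that the feature point is at the origin."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # First and second partial derivatives of the fitted surface at (0, 0).
    fx, fy = d, e
    fxx, fxy, fyy = 2 * a, b, 2 * c
    denom = 1 + fx * fx + fy * fy
    K = (fxx * fyy - fxy * fxy) / denom ** 2
    H = ((1 + fy * fy) * fxx - 2 * fx * fy * fxy + (1 + fx * fx) * fyy) \
        / (2 * denom ** 1.5)
    return K, H
```

For points sampled from the paraboloid z = x² + y², the fit is exact and yields K = 4 and H = 2 at the origin, both positive, i.e. a convex surface type in the classification used by this text.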
As an illustration, as shown in fig. 6, the determining module 504 includes:
a type presetting module 601, configured to preset a standard curved surface type of each feature point;
a type obtaining module 602, configured to obtain an actual curved surface type of each feature point according to the actual Gaussian curvature and the actual plane curvature of each feature point;
a type comparing module 603, configured to compare the actual curved surface type of each feature point with the standard curved surface type of the feature point, and determine whether the three-dimensional face image is from a living body.
As an illustration, the type comparison module 603 includes:
the first comparison submodule is used for comparing the actual curved surface type of each characteristic point with the standard curved surface type of the characteristic point;
the first judgment submodule is used for acquiring the number of feature points whose actual surface types completely conform to the corresponding standard surface types, and judging that the three-dimensional face image is from a living body if the number of completely conforming feature points is greater than or equal to a preset threshold value; and if the number of completely conforming feature points is less than the preset threshold value, judging that the three-dimensional face image is from a non-living body.
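The sign-to-type mapping that the type obtaining module 602 applies is spelled out in claim 1; a sketch of that lookup follows. The eps tolerance for treating a curvature as zero is an assumption, since the text compares against an exact 0:

```python
def surface_type(K, H, eps=1e-6):
    """Classify a point by the signs of its Gaussian curvature K and plane
    curvature H, following the standard types listed in claim 1:
    plane (glabella), saddle (nasal root), concave (eye corners),
    convex (nose tip).  eps is an assumed zero tolerance."""
    if abs(K) < eps and abs(H) < eps:
        return "plane"
    if abs(K) < eps and H < 0:
        return "saddle"
    if K > 0 and H < 0:
        return "concave"
    if K > 0 and H > 0:
        return "convex"
    return "other"  # sign patterns the claim does not enumerate

print(surface_type(0.0, 0.0))  # plane
print(surface_type(0.2, 0.1))  # convex
```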
As another example, as shown in fig. 7, the determining module 504 includes:
a curvature presetting module 701, configured to preset a standard gaussian curvature and a standard plane curvature of each feature point;
and a curvature comparison module 702, configured to determine whether the three-dimensional face image is from a living body by comparing the actual Gaussian curvature and the actual plane curvature of each feature point with the standard Gaussian curvature and the standard plane curvature of the feature point.
As an illustration, the curvature comparison module 702 includes:
the second comparison submodule is used for comparing the actual Gaussian curvature and the actual plane curvature of each feature point with the standard Gaussian curvature and the standard plane curvature of the feature point;
the second judgment submodule is used for acquiring the number of feature points whose actual Gaussian curvatures and actual plane curvatures completely match the corresponding standard Gaussian curvatures and standard plane curvatures, and judging that the three-dimensional face image is from a living body if the number of completely matched feature points is greater than or equal to a third preset threshold; and if the number of completely matched feature points is less than the third preset threshold, judging that the three-dimensional face image is from a non-living body.
Specifically, if the sign of the actual Gaussian curvature of a feature point is the same as the sign of the corresponding standard Gaussian curvature, and the sign of the actual plane curvature of the feature point is the same as the sign of the corresponding standard plane curvature, the actual Gaussian curvature and actual plane curvature of that feature point are judged to completely match the corresponding standard values. If the signs of the Gaussian curvatures differ and/or the signs of the plane curvatures differ, the actual curvatures of the feature point do not completely match the standard values. The sign takes one of three values: positive (greater than 0), negative (less than 0), or 0.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The improved three-dimensional human face living body detection method and the three-dimensional human face living body detection device provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation of the present invention.

Claims (4)

1. An improved three-dimensional human face living body detection method is characterized by comprising the following steps:
collecting a three-dimensional face image;
selecting a plurality of feature points in the three-dimensional face image, and acquiring three-dimensional coordinate information of each point in a neighborhood of each feature point, wherein the plurality of feature points comprise several or all of: the glabella, the nasal root, the outer canthus of the left eye, the inner canthus of the left eye, the inner canthus of the right eye, the outer canthus of the right eye, the tip of the nose, the left edge of the nose, the right edge of the nose and the middle of the nose;
calculating the actual curvature of the curved surface of each feature point according to the three-dimensional coordinate information of each point in the neighborhood of each feature point, wherein the actual curvature of the curved surface comprises: the actual Gaussian curvature and the actual plane curvature;
judging whether the three-dimensional face image comes from a living body according to the actual curved surface curvature of each feature point, wherein the judging step comprises the following steps:
presetting a standard curved surface type of each feature point, wherein the standard curved surface type is as follows:
at the glabella characteristic point, K is 0 and H is 0, and the curved surface where the point is located is a plane;
at the nasal root characteristic point, K is 0 and H is less than 0, and the curved surface where the point is located is saddle-shaped;
at the left-eye outer canthus, left-eye inner canthus, right-eye inner canthus and right-eye outer canthus characteristic points, K is greater than 0 and H is less than 0, and the curved surface where the points are located is concave;
at the nose tip characteristic point, K is greater than 0 and H is greater than 0, and the curved surface where the point is located is convex;
at the nose left edge, nose right edge and nose middle characteristic points, the curved surface where the points are located is of a concave type, wherein K is a preset standard Gaussian curvature of each characteristic point, and H is a preset standard plane curvature of each characteristic point;
obtaining the actual curved surface type of each characteristic point according to the actual Gaussian curvature and the actual plane curvature of each characteristic point;
and comparing the actual curved surface type of each characteristic point with the standard curved surface type of the characteristic point to judge whether the three-dimensional face image is from a living body.
2. The method as claimed in claim 1, wherein the determining whether the three-dimensional face image is from a living body by comparing the actual surface type of each feature point with the standard surface type of the feature point comprises:
comparing the actual curved surface type of each characteristic point with the standard curved surface type of the characteristic point;
acquiring the number of feature points whose actual surface types completely conform to the corresponding standard surface types, and if the number of completely conforming feature points is greater than or equal to a preset threshold value, judging that the three-dimensional face image is from a living body; and if the number of completely conforming feature points is less than the preset threshold value, judging that the three-dimensional face image is from a non-living body.
3. An improved three-dimensional human face in-vivo detection device, comprising:
the acquisition module is used for acquiring a three-dimensional face image;
an obtaining module, configured to select a plurality of feature points from the three-dimensional face image, and obtain three-dimensional coordinate information of each point in a neighborhood of each feature point, where the plurality of feature points comprise several or all of: the glabella, the nasal root, the outer canthus of the left eye, the inner canthus of the left eye, the inner canthus of the right eye, the outer canthus of the right eye, the tip of the nose, the left edge of the nose, the right edge of the nose and the middle of the nose;
the calculation module is used for calculating the actual curvature of the curved surface of each feature point according to the three-dimensional coordinate information of each point in the neighborhood of each feature point, wherein the actual curvature of the curved surface comprises: the actual Gaussian curvature and the actual plane curvature;
the judging module is used for judging whether the three-dimensional face image comes from a living body according to the actual curved surface curvature of each feature point, wherein the judging module comprises:
the type presetting module is used for presetting a standard curved surface type of each feature point, wherein the standard curved surface type is as follows:
at the glabella characteristic point, K is 0 and H is 0, and the curved surface where the point is located is a plane;
at the nasal root characteristic point, K is 0 and H is less than 0, and the curved surface where the point is located is saddle-shaped;
at the left-eye outer canthus, left-eye inner canthus, right-eye inner canthus and right-eye outer canthus characteristic points, K is greater than 0 and H is less than 0, and the curved surface where the points are located is concave;
at the nose tip characteristic point, K is greater than 0 and H is greater than 0, and the curved surface where the point is located is convex;
at the nose left edge, nose right edge and nose middle characteristic points, the curved surface where the points are located is of a concave type, wherein K is a preset standard Gaussian curvature of each characteristic point, and H is a preset standard plane curvature of each characteristic point;
the type obtaining module is used for obtaining the actual curved surface type of each characteristic point according to the actual Gaussian curvature and the actual plane curvature of each characteristic point;
and the type comparison module is used for comparing the actual curved surface type of each characteristic point with the standard curved surface type of the characteristic point to judge whether the three-dimensional face image is from a living body.
4. The apparatus of claim 3, wherein the type comparison module comprises:
the comparison submodule is used for comparing the actual curved surface type of each characteristic point with the standard curved surface type of the characteristic point;
the judgment submodule is used for acquiring the number of feature points whose actual surface types completely conform to the corresponding standard surface types, and judging that the three-dimensional face image is from a living body if the number of completely conforming feature points is greater than or equal to a preset threshold value; and if the number of completely conforming feature points is less than the preset threshold value, judging that the three-dimensional face image is from a non-living body.
CN201610048479.5A 2016-01-25 2016-01-25 Improved three-dimensional human face in-vivo detection method and device Active CN105740778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610048479.5A CN105740778B (en) 2016-01-25 2016-01-25 Improved three-dimensional human face in-vivo detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610048479.5A CN105740778B (en) 2016-01-25 2016-01-25 Improved three-dimensional human face in-vivo detection method and device

Publications (2)

Publication Number Publication Date
CN105740778A CN105740778A (en) 2016-07-06
CN105740778B true CN105740778B (en) 2020-01-03

Family

ID=56247545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610048479.5A Active CN105740778B (en) 2016-01-25 2016-01-25 Improved three-dimensional human face in-vivo detection method and device

Country Status (1)

Country Link
CN (1) CN105740778B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548152A (en) * 2016-11-03 2017-03-29 厦门人脸信息技术有限公司 Near-infrared three-dimensional face tripper
CN107480586B (en) * 2017-07-06 2020-10-23 天津科技大学 Face characteristic point displacement-based biometric photo counterfeit attack detection method
CN107506752A (en) * 2017-09-18 2017-12-22 艾普柯微电子(上海)有限公司 Face identification device and method
CN108416291B (en) * 2018-03-06 2021-02-19 广州逗号智能零售有限公司 Face detection and recognition method, device and system
CN108389053B (en) * 2018-03-19 2021-10-29 广州逗号智能零售有限公司 Payment method, payment device, electronic equipment and readable storage medium
CN108566777A (en) * 2018-04-18 2018-09-21 深圳阜时科技有限公司 Identification authentication method, identification authentication device and electronic equipment
CN109145750A (en) * 2018-07-23 2019-01-04 华迅金安(北京)科技有限公司 A kind of driver identity rapid authentication method and system
CN110059579B (en) * 2019-03-27 2020-09-04 北京三快在线科技有限公司 Method and apparatus for in vivo testing, electronic device, and storage medium
CN110598571A (en) * 2019-08-15 2019-12-20 中国平安人寿保险股份有限公司 Living body detection method, living body detection device and computer-readable storage medium
CN110826535B (en) * 2019-12-02 2020-12-29 北京三快在线科技有限公司 Face recognition method, system and device
TWI761739B (en) 2019-12-10 2022-04-21 緯創資通股份有限公司 Live facial recognition system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face
CN105138996A (en) * 2015-09-01 2015-12-09 北京上古视觉科技有限公司 Iris identification system with living body detecting function
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9679212B2 (en) * 2014-05-09 2017-06-13 Samsung Electronics Co., Ltd. Liveness testing methods and apparatuses and image processing methods and apparatuses

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face
CN105138996A (en) * 2015-09-01 2015-12-09 北京上古视觉科技有限公司 Iris identification system with living body detecting function
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"LIVENESS DETECTION BASED ON 3D FACE SHAPE ANALYSIS";Andrea Lagorio et al;《2013 International Workshop on Biometrics and Forensics》;20130405;第1-4页,图1-3 *

Also Published As

Publication number Publication date
CN105740778A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
CN105335722B (en) Detection system and method based on depth image information
JP4653606B2 (en) Image recognition apparatus, method and program
CN105740775B (en) Three-dimensional face living body identification method and device
US9959454B2 (en) Face recognition device, face recognition method, and computer-readable recording medium
CN104933389B (en) Identity recognition method and device based on finger veins
CN105740781B (en) Three-dimensional human face living body detection method and device
JP2008198193A (en) Face authentication system, method, and program
JP5170094B2 (en) Spoofing detection system, spoofing detection method, and spoofing detection program
EP3241151A1 (en) An image face processing method and apparatus
CN111382592B (en) Living body detection method and apparatus
KR101818984B1 (en) Face Recognition System using Depth Information
KR101724971B1 (en) System for recognizing face using wide angle camera and method for recognizing face thereof
CN112220444B (en) Pupil distance measuring method and device based on depth camera
CN109948400A (en) It is a kind of to be able to carry out the smart phone and its recognition methods that face characteristic 3D is identified
CN110796101A (en) Face recognition method and system of embedded platform
CN112257641A (en) Face recognition living body detection method
US20070253598A1 (en) Image monitoring apparatus
JP2017182459A (en) Face image authentication device
KR101053253B1 (en) Apparatus and method for face recognition using 3D information
KR20150069799A (en) Method for certifying face and apparatus thereof
CN114894337A (en) Temperature measurement method and device for outdoor face recognition
JP3970573B2 (en) Facial image recognition apparatus and method
CN106991376A (en) With reference to the side face verification method and device and electronic installation of depth information
CN108009532A (en) Personal identification method and terminal based on 3D imagings

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100085 Beijing, Haidian District, No. ten on the ground floor, No. 1, building 8, floor 802, 1

Applicant after: Beijing eye Intelligence Technology Co., Ltd.

Address before: 100085 Beijing, Haidian District, No. ten on the ground floor, No. 1, building 8, floor 802, 1

Applicant before: Beijing Techshino Technology Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220310

Address after: 071800 Beijing Tianjin talent home (Xincheng community), West District, Xiongxian Economic Development Zone, Baoding City, Hebei Province

Patentee after: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Patentee after: Beijing Eye Intelligent Technology Co., Ltd

Address before: 100085, 1 floor 8, 1 Street, ten Street, Haidian District, Beijing.

Patentee before: Beijing Eyes Intelligent Technology Co.,Ltd.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An improved method and device for 3D face living detection

Effective date of registration: 20220614

Granted publication date: 20200103

Pledgee: China Construction Bank Corporation Xiongxian sub branch

Pledgor: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Registration number: Y2022990000332

PE01 Entry into force of the registration of the contract for pledge of patent right