CN105740781B - Three-dimensional human face living body detection method and device


Info

Publication number: CN105740781B (granted publication of CN105740781A)
Application number: CN201610048509.2A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 孔勇, 王玉瑶
Assignee (original and current): Beijing Techshino Technology Co Ltd
Legal status: Active (granted)
Prior art keywords: characteristic point, dimensional, face image, point, feature

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/164 - Detection; Localisation; Normalisation using holistic features
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/169 - Holistic features and representations, i.e. based on the facial image taken as a whole

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a three-dimensional face liveness detection method and device in the field of face recognition. The method comprises the following steps: acquiring a three-dimensional face image; selecting first and second feature points of the three-dimensional face image and acquiring their three-dimensional coordinate information; fitting a reference plane of the three-dimensional face image from the three-dimensional coordinate information of the first feature points; calculating the distance values from the second feature points to the reference plane from their three-dimensional coordinate information; and judging from these distance values whether the three-dimensional face image comes from a living body. The invention can judge whether a face image comes from a living body, with high recognition accuracy and robust, stable recognition results.

Description

Three-dimensional human face living body detection method and device
Technical Field
The invention relates to the field of biometric recognition, and in particular to a three-dimensional face liveness detection method and device.
Background
Face recognition is a biometric technology that identifies a person from facial feature information. It comprises a series of related techniques: capturing images or video streams containing faces with a camera, automatically detecting and tracking the faces in those images, and then recognizing the detected faces.
Because a person's biometric information cannot be kept secret in any strict sense, attacks on biometric recognition systems, as on other authentication systems, have never stopped. Compared with other biometric traits, facial features are the easiest to obtain: a counterfeiter can collect photographs or videos of a user's face through online searches, candid shooting and similar means, and use them to defraud a face recognition authentication system. Existing three-dimensional face liveness detection methods extract facial feature points from the acquired three-dimensional face image and judge whether it comes from a living body using only the maximum depth difference among the feature points' three-dimensional coordinates, so their accuracy is low and their robustness and stability are poor.
Disclosure of Invention
The invention provides a three-dimensional face liveness detection method and device, aiming to solve the low accuracy and poor robustness and stability of existing three-dimensional face liveness detection methods.
In order to solve the above technical problems, the present invention provides the following technical solutions:
In one aspect, a three-dimensional face liveness detection method is provided, comprising:
acquiring a three-dimensional face image;
selecting a first characteristic point and a second characteristic point of the three-dimensional face image, and acquiring three-dimensional coordinate information of the first characteristic point and the second characteristic point;
fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first characteristic point;
calculating a distance value from the second characteristic point to a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second characteristic point;
and judging whether the three-dimensional face image is from a living body or not according to the distance value.
In another aspect, an apparatus for three-dimensional human face liveness detection is provided, including:
the acquisition module is used for acquiring a three-dimensional face image;
the selecting module is used for selecting a first characteristic point and a second characteristic point of the three-dimensional face image;
the extraction module is used for acquiring three-dimensional coordinate information of the first characteristic point and the second characteristic point;
the processing module is used for fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first characteristic point; calculating a distance value from the second characteristic point to a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second characteristic point;
and the judging module is used for judging whether the three-dimensional face image is from a living body according to the distance value.
The technical solutions of the invention have the following advantages and beneficial effects:
The invention can judge whether an acquired face image comes from a living body. First, a three-dimensional face image of the user to be recognized/authenticated is acquired (unlike an ordinary two-dimensional face image, a three-dimensional face image additionally provides depth information). Next, first and second feature points are selected from the acquired image and their three-dimensional coordinate information obtained; a reference plane of the image is fitted from the coordinate information of the first feature points, and the distance from each second feature point to the reference plane is computed from its coordinate information. Finally, whether the three-dimensional face image comes from a living body is judged from these distance values.
The invention has high accuracy. On top of using the depth information of the first and second feature points on the three-dimensional face image, it computes the spatial relationship between them, further improving the accuracy of three-dimensional face recognition.
The selection of the first and second feature points can be adjusted to the requirements of a practical application, giving the three-dimensional face recognition greater stability and robustness.
In summary, the face liveness detection method can judge whether a face image comes from a living body, with high recognition accuracy, robustness and stability.
Drawings
FIG. 1 is a flow chart of one embodiment of the three-dimensional face liveness detection method of the invention;
FIG. 2 is a schematic diagram of the feature points whose coordinate information is given by the 3D camera in an embodiment of the invention;
FIG. 3 is a schematic diagram of the profile of a real human face in an embodiment of the invention;
FIG. 4 is a schematic diagram of the profiles of deformed face photographs in an embodiment of the invention;
FIG. 5 is a flow chart of another embodiment of the three-dimensional face liveness detection method of the invention;
FIG. 6 is a schematic diagram of an embodiment of the three-dimensional face liveness detection device of the invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
In one aspect, an embodiment of the present invention provides a three-dimensional face liveness detection method, as shown in FIG. 1, comprising:
step 101, obtaining a three-dimensional face image.
A face image is preferably captured from the user to be recognized/authenticated using equipment with a three-dimensional face image acquisition capability (e.g., a capture device with a 3D camera).
Step 102, selecting a first feature point and a second feature point in the three-dimensional face image, and acquiring the three-dimensional coordinate information of the first feature point and the second feature point.
An ordinary two-dimensional face image yields only the two-dimensional coordinates of the feature points, whereas a three-dimensional face image yields their three-dimensional coordinates; its advantage over a two-dimensional image is that the depth information of the photographed face is also available. After some 3D cameras capture a face image, they directly provide the three-dimensional coordinate information of part of the feature points in the captured face, and the remaining feature points can be obtained by computation. FIG. 2 labels 78 facial feature points, obtained by invoking a facial feature-point localization algorithm and denoted in order Point0, Point1, …, Point76, Point77. When choosing the three-dimensional coordinate system, the direction from the 3D camera toward the user can be taken as the positive z-axis, with the positive x- and y-axes determined by the right-hand rule; other ways of determining the coordinate system are of course possible. The purpose is to give three-dimensional coordinate information for the labeled facial feature points, so the depth information of the acquired three-dimensional face image is represented in order by the three-dimensional coordinates (x0, y0, z0), (x1, y1, z1), …, (x76, y76, z76), (x77, y77, z77).
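The 78 labeled points and their coordinates can be held in a simple array indexed by the Point numbers used throughout this description (Point29 for the nose tip, Point53 to Point69 for the cheeks). A minimal sketch, with random stand-in data in place of real 3D-camera output:

```python
import numpy as np

# Hypothetical stand-in for the (78, 3) landmark array a 3D camera and
# feature-point localization algorithm would return: row i holds the
# coordinates (x_i, y_i, z_i) of Point i.
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(78, 3))

nose_tip = landmarks[29]                      # Point29, the nose tip
cheek_idx = [53, 54, 55, 56, 57, 65, 66, 67, 68, 69]
first_feature_points = landmarks[cheek_idx]   # the 10 cheek landmarks
depths = landmarks[:, 2]                      # z-axis: depth toward the camera
```

In a real system the array would come from the camera SDK and landmark algorithm rather than a random generator; only the indexing convention is taken from the text.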
A first feature point is selected on the three-dimensional face image; it is a feature-point group comprising several feature points of the three-dimensional face, chosen from the 78 labeled feature points or from other feature points. The three-dimensional coordinate information of the corresponding feature points is then acquired.
A second feature point is selected on the three-dimensional face image; it is a feature-point group comprising several feature points of the three-dimensional face, chosen from the feature points on the central axis of the image or from other feature points. The three-dimensional coordinate information of the corresponding feature points is then acquired.
Step 103, fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point.
A reference plane of the captured three-dimensional face image is fitted from the selected first feature points and their three-dimensional coordinate information; the coefficients of the fitted reference plane can be computed by least squares (e.g., in Matlab).
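The least-squares fit can be sketched in Python with NumPy standing in for the Matlab computation; the plane model z = a*x + b*y + c and the sample points below are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to (N, 3) points.

    Returns the coefficients (a, b, c) minimizing the squared residuals
    in z, computed with np.linalg.lstsq.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

# Points lying exactly on z = 2x + 3y + 1 recover those coefficients.
cheek_points = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
a, b, c = fit_plane(cheek_points)
```

With real cheek landmarks the fit would not be exact; the residuals simply become the least-squares minimum.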
Step 104, calculating a distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point.
After the second feature points are selected and their three-dimensional coordinate information obtained, the distance from each of the second feature points to the fitted reference plane is computed from its coordinate information.
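Writing the fitted plane implicitly as a*x + b*y - z + c = 0, each distance follows the standard point-to-plane formula. A small sketch (the z = a*x + b*y + c plane model and the sample points are assumptions carried over from the fitting step):

```python
import math

def point_to_plane_distance(p, a, b, c):
    """Distance from p = (x, y, z) to the plane z = a*x + b*y + c,
    i.e. |a*x + b*y - z + c| / sqrt(a^2 + b^2 + 1)."""
    x, y, z = p
    return abs(a * x + b * y - z + c) / math.sqrt(a * a + b * b + 1.0)

# A nose-tip-like point 5 units in front of the flat plane z = 0:
d = point_to_plane_distance((0.0, 0.0, 5.0), 0.0, 0.0, 0.0)

# Distances for a whole hypothetical second-feature-point group:
second_points = [(0, 0, 5), (1, 1, 0), (2, 0, 3)]
distances = [point_to_plane_distance(p, 0.0, 0.0, 0.0) for p in second_points]
```

The resulting vector of distances is exactly the quantity step 105 uses for the liveness decision.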
Step 105, judging whether the three-dimensional face image comes from a living body according to the distance values.
Once depth information is added to a face image (i.e., in a three-dimensional face image), the profile of a real, live face differs from that of a photograph, and even a deformed photograph (bent, or held at a different angle) still differs from a real face profile. Taking the feature points on the central axis of the face as the second feature points, FIG. 3 shows the three-dimensional profile of a real face: the profile is an irregular concave-convex curve on which the nose and other regions are clearly visible. FIG. 4(a) shows the profile of a three-dimensional image captured from a hand-held face photograph that is flat or only slightly bent; the thick line is the fitted reference plane and the thin line the facial profile. The two lines lie close together, and no irregular curve resembling a real face profile appears. FIG. 4(b) shows a hand-held photograph bent at its left and right edges (inward or outward, the two cases being essentially the same): the profile (thin line) now lies at some distance from the reference plane (i.e., there is depth information), but it still appears as a straight line, not the irregular curve of a real face profile. FIG. 4(c) and FIG. 4(d) show photographs bent at their top and bottom edges: the profile again lies at some distance from the reference plane and is curved, but the curve is a smooth transition, not the irregular concave-convex curve of a real face profile. Given these characteristics, the distance values from the second feature points to the fitted reference plane can be used to judge whether the acquired three-dimensional face image comes from a living body; moreover, these distances represent the shape of the facial profile well.
As another embodiment of the present invention, as shown in FIG. 5, the three-dimensional face liveness detection method comprises:
step 201, acquiring a three-dimensional face image.
A face image is preferably captured from the user to be recognized/authenticated using equipment with a three-dimensional face image acquisition capability (e.g., a capture device with a 3D camera).
Step 202, selecting a first feature point, a second feature point and a third feature point in the three-dimensional face image, and acquiring three-dimensional coordinate information of the first feature point, the second feature point and the third feature point.
An ordinary two-dimensional face image yields only the two-dimensional coordinates of the feature points, whereas a three-dimensional face image yields their three-dimensional coordinates; its advantage is that the depth information of the photographed face is also available.
A first feature point is selected on the three-dimensional face image; it is a feature-point group comprising several feature points of the three-dimensional face, chosen from the 78 labeled feature points or from other feature points. The three-dimensional coordinate information of the corresponding feature points is then acquired.
A second feature point is selected on the three-dimensional face image; it is a feature-point group comprising several feature points of the three-dimensional face, chosen from the feature points on the central axis of the image or from other feature points. The three-dimensional coordinate information of the corresponding feature points is then acquired.
A third feature point is selected on the three-dimensional face image; it is a single feature point whose specific purpose is to represent the convex features of the face, so relatively prominent facial regions, such as the lip and nose regions, are good choices. The three-dimensional coordinate information of the corresponding feature point is then acquired.
Step 203, fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point.
A reference plane of the captured three-dimensional face image is fitted from the selected first feature points and their three-dimensional coordinate information; the coefficients of the fitted reference plane can be computed by least squares (e.g., in Matlab).
Step 204, calculating the distance values from the second feature points to the reference plane of the three-dimensional face image according to their three-dimensional coordinate information.
After the second feature points are selected and their three-dimensional coordinate information obtained, the distance from each of the second feature points to the fitted reference plane is computed from its coordinate information.
Step 205, calculating the angle relationship between the reference plane and the straight lines formed by the third feature point and the first feature points, according to the three-dimensional coordinate information of the third feature point.
The third feature point is a single selected feature point, whereas the first feature point is a group comprising several feature points, so the third feature point forms a straight line with each point in the first feature-point group, and the angle between each such line and the reference plane is then computed. The angle relationship may be the angle itself, its sine, its cosine, and so on; to make the relationship easier to evaluate and more pronounced, the sine of the angle between the line and the reference plane is preferred.
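The preferred sine can be computed from the plane normal and the line direction: for a plane z = a*x + b*y + c the normal is (a, b, -1), and sin(theta) = |n . v| / (|n| |v|). A sketch with hypothetical points:

```python
import math

def line_plane_sin(p1, p2, normal):
    """Sine of the angle between the line through p1 and p2 and a plane
    with the given normal vector: sin(theta) = |n . v| / (|n| |v|)."""
    v = [b - a for a, b in zip(p1, p2)]
    dot = sum(n * c for n, c in zip(normal, v))
    len_v = math.sqrt(sum(c * c for c in v))
    len_n = math.sqrt(sum(n * n for n in normal))
    return abs(dot) / (len_n * len_v)

# A line along the z-axis is perpendicular to the plane z = 0
# (normal (0, 0, -1)): the angle is 90 degrees, so its sine is 1.
s_perp = line_plane_sin((0, 0, 0), (0, 0, 1), (0, 0, -1))
# A line lying inside the plane z = 0 makes a 0-degree angle: sine 0.
s_flat = line_plane_sin((0, 0, 0), (1, 2, 0), (0, 0, -1))
```

For a live face, the lines from the nose tip to the cheek points would give sines between these two extremes; a flat photograph pushes them toward 0.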
Step 206, judging whether the three-dimensional face image comes from a living body according to the distance values and angle relationships.
Computing the angles between the reference plane and the straight lines formed by the third and first feature points better describes the convex (i.e., non-planar) features of the face, further helping to determine whether the acquired face image comes from a living body.
Preferably, the first feature points in the above embodiments are 10 feature points at the left and right cheeks of the three-dimensional face image, namely Point53, Point54, Point55, Point56, Point57, Point65, Point66, Point67, Point68 and Point69. With these 10 cheek points, the fitting quality and stability of the reference plane are better than with other feature-point combinations. Further, the 3 × 3 neighborhoods of these 10 cheek points may also be selected: each point contributes 8 surrounding neighborhood points, giving 80 neighborhood points in all, which combined with the 10 feature points themselves form 90 points that jointly describe the fitted reference plane. Compared with using the 10 feature points alone, describing the reference plane with the neighborhood points further improves stability while leaving the amount of computation essentially unchanged.
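The 10-plus-neighborhood construction (10 cheek points, 8 neighbors each, 90 points in total) can be sketched over a depth map; the pixel indexing, the depth-map layout, and the omission of border handling are assumptions of this sketch:

```python
def with_3x3_neighborhoods(depth, pixels):
    """Expand feature points given as (u, v) pixel positions into the
    points themselves plus their 8 surrounding pixels: 9 per point.
    `depth` is a row-major depth map; border handling is omitted."""
    out = []
    for (u, v) in pixels:
        for dv in (-1, 0, 1):
            for du in (-1, 0, 1):
                out.append((u + du, v + dv, depth[v + dv][u + du]))
    return out

# A flat 8x8 depth map and two hypothetical cheek pixels: 2 * 9 = 18 points.
depth = [[7.0] * 8 for _ in range(8)]
points = with_3x3_neighborhoods(depth, [(2, 2), (5, 5)])
```

Applied to the 10 cheek landmarks, this yields the 90 points that feed the plane fit.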
Preferably, the second feature points in the above embodiments are feature points on the central axis of the three-dimensional face image, including some or all of the feature points of the nose bridge, nose tip, philtrum, middle of the lips, middle of the chin, and so on, i.e., some or all of Point26, Point27, Point28, Point29, Point31, Point36, Point42 and Point61. Further, the feature point midway between Point0 and Point5 may be included and labeled Point78 (not shown in FIG. 2); these 9 feature points together form the second feature points. These 9 facial feature points show pronounced concave-convex characteristics on the three-dimensional face image, and their distances to the fitted reference plane represent the facial profile well. Alternatively, neighborhood points of the central-axis feature points may be selected, i.e., some or all of the neighborhood points of Point26, Point27, Point28, Point29, Point31, Point36, Point42 and Point61; the distances from these neighborhood points to the fitted reference plane likewise represent the facial profile well.
Preferably, the third feature point in the above embodiments is the nose-tip feature point of the three-dimensional face image, i.e., Point29. The nose tip is the most prominent position of a three-dimensional face; with it as the third feature point, the angles between the reference plane and the straight lines it forms with the first feature points represent the convexity of the face well, further improving the accuracy of liveness detection. Other prominent feature points may also be chosen, such as the middle of the upper or lower lip, or a neighborhood point of a prominent feature point such as the nose tip or the middle of the upper lip. Since the third feature point mainly characterizes the convexity of the face, choosing it so that the angles between the reference plane and its lines to the first feature points are more pronounced better characterizes that convexity, and thus helps determine whether the acquired face image comes from a living body.
After the distances from the second feature points of the three-dimensional face to the reference plane, and the angles between the reference plane and the straight lines formed by the third and first feature points, have been computed, these distance values and angle relationships can be used to judge whether the acquired three-dimensional face image comes from a living body. One judgment embodiment is provided:
classifying the relation between the distance value and the included angle by using a pre-trained classifier;
and judging whether the acquired three-dimensional face image is from a living body according to the classification result.
This embodiment uses a classifier to judge whether the three-dimensional face image comes from a living body. A large amount of three-dimensional face sample data is first used to compute distance values and angle relationships, and these computed values train the classifier, i.e., establish the decision criterion. For the three-dimensional face image of a user to be authenticated/recognized, the distances from the second feature points to the reference plane fitted from the first feature points, and the angles between that plane and the straight lines formed by the third and first feature points, are then computed; finally, these values are input to the trained classifier, and the class it outputs determines whether the image comes from a living body.
For example, with an SVM classifier trained on a large number of distance-value and angle samples, the distance and angle features computed from the three-dimensional face image of the user to be authenticated/recognized are input to the classifier; an output of 1 indicates a living body, and an output of -1 a non-living body. Using a classifier to make the judgment further improves recognition accuracy.
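A minimal sketch of the SVM step using scikit-learn (an assumed substitute for whatever trainer the authors used); the two-dimensional feature vectors and their values are toy stand-ins for the full vector of 9 distances and the angle sines:

```python
import numpy as np
from sklearn.svm import SVC

# Toy training features: [mean point-to-plane distance, mean angle sine].
# Live faces (label 1) show large, irregular depth; flat photographs
# (label -1) stay close to the fitted reference plane.
X_train = np.array([[12.0, 0.30], [11.0, 0.28], [13.0, 0.33],
                    [1.0, 0.02], [0.5, 0.01], [1.5, 0.03]])
y_train = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

# A probe resembling the live cluster classifies as 1 (living body).
pred = int(clf.predict([[12.5, 0.31]])[0])
```

A real deployment would train on many labeled live/spoof captures and likely tune the kernel and regularization; the linear kernel here just keeps the toy example separable.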
The above embodiments can determine whether an acquired three-dimensional face image comes from a living body. First and second feature points of the image are obtained together with their three-dimensional coordinates; a reference plane of the image is fitted from the coordinate information of the first feature points; the distances from the second feature points to that plane are then computed. These distance values both represent the convexity of the three-dimensional face and characterize the shape of its profile, so they can be used to judge whether the image comes from a living body, providing effective anti-spoofing against planar two-dimensional deception such as photographs and videos. The first feature points preferably comprise the 10 feature points at the left and right cheeks and/or their neighborhood points, further improving the stability and robustness of the recognition result.
The invention is further illustrated by the following preferred embodiments:
Step 301: start a 3D camera and acquire a three-dimensional face image.
An existing algorithm is called to open the 3D camera and acquire a face image, and the three-dimensional coordinate information of 78 feature points in the face image is obtained by calling a facial feature point localization algorithm, as shown in FIG. 2. The 78 feature points are labeled in sequence: Point0, Point1, …, Point76, Point77.
Step 302: select a first feature point, a second feature point and a third feature point, and obtain their three-dimensional coordinate information.
To make the reference plane fitted from the first feature point more stable, the first feature point preferably consists of the 3×3 pixel neighborhoods of the 10 feature points at the left and right cheeks of the face, i.e. the 3×3 neighborhoods of Point53, Point54, Point55, Point56, Point57, Point65, Point66, Point67, Point68 and Point69 (each neighborhood includes its center point). These 10 neighborhoods contribute 90 points in total, which together constitute the first feature point, and the three-dimensional coordinate information of all 90 points is obtained.
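As an illustration of this neighborhood sampling, the sketch below (hypothetical function and intrinsic parameters; the patent does not specify a camera model) collects the 3×3 pixel neighborhoods of the 10 cheek landmarks from a registered depth map and back-projects them to 3D camera coordinates:

```python
import numpy as np

# Indices of the 10 cheek landmarks named in the text (Point53-Point57, Point65-Point69).
CHEEK_IDS = [53, 54, 55, 56, 57, 65, 66, 67, 68, 69]

def gather_first_feature_points(landmarks_uv, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Collect the 3x3 pixel neighborhood of each cheek landmark and
    back-project each pixel through a pinhole model (the intrinsics
    fx, fy, cx, cy are placeholder values).  Returns a (90, 3) array
    of (x, y, z) points."""
    pts = []
    for i in CHEEK_IDS:
        u0, v0 = landmarks_uv[i]
        for du in (-1, 0, 1):
            for dv in (-1, 0, 1):
                u, v = int(u0) + du, int(v0) + dv
                z = float(depth[v, u])  # depth image indexed as (row, col)
                pts.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.array(pts)
```

The 90 returned rows are the three-dimensional coordinates used to fit the reference plane in step 303.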
To better characterize the convexity of the three-dimensional face and reflect the shape of its profile, the second feature point preferably consists of feature points on the central axis of the face, namely Point26, Point27, Point28, Point29, Point31, Point36, Point42 and Point61, 8 feature points in total. In addition, the midpoint of Point0 and Point5 is added and denoted Point78; for convenience in the formulas below, its three-dimensional coordinates are written (x78, y78, z78). The second feature point therefore comprises 9 feature points in total.
The nose-tip feature point, Point29, is preferably selected as the third feature point. The nose tip is the most prominent position on a three-dimensional face; when it is chosen as the third feature point, the angle relations between the reference plane and the straight lines from the nose tip to the first feature point better characterize the convexity of the face, further improving the accuracy of liveness detection.
Step 303: fit a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point.
The three-dimensional coordinates of the 90 neighborhood points obtained above are used to fit the reference plane. Denote them in turn as (px0, py0, pz0), (px1, py1, pz1), …, (px89, py89, pz89).

Using the least-squares method, a plane α: z = a·x + b·y + c is fitted to these 90 cheek points.
The coefficients a, b and c are computed as follows. Let A be the 90×3 matrix whose i-th row is (pxi, pyi, 1), let X = (a, b, c)ᵀ, and let Z = (pz0, pz1, …, pz89)ᵀ. Then

A·X = Z,

and the three plane coefficients can be obtained with Matlab as

X = A\Z or X = (AᵀA)⁻¹AᵀZ.
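The least-squares solution above can be sketched in Python with NumPy in place of Matlab (a minimal illustration, not the patent's implementation):

```python
import numpy as np

def fit_reference_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to an (n, 3)
    array of points; equivalent to X = (A^T A)^{-1} A^T Z with
    A = [x, y, 1] and Z = z."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    Z = points[:, 2]
    (a, b, c), *_ = np.linalg.lstsq(A, Z, rcond=None)
    return a, b, c
```

`np.linalg.lstsq` plays the role of Matlab's backslash operator `A\Z`.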
Step 304: calculate the distance from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point.
The distances from the 9 second feature points (Point26, Point27, Point28, Point29, Point31, Point36, Point42, Point61 and Point78) to the reference plane α are calculated. One method is as follows: let the distance from the i-th feature point (xi, yi, zi) to the fitted plane be di; then

di = |a·xi + b·yi − zi + c| / √(a² + b² + 1).

The distances from the 9 feature points to the reference plane are therefore [d26, d27, d28, d29, d31, d36, d42, d61, d78].
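A minimal Python rendering of this point-to-plane distance (the same formula, shown for illustration only):

```python
import numpy as np

def point_plane_distance(p, a, b, c):
    """Distance from point p = (x, y, z) to the plane z = a*x + b*y + c,
    i.e. |a*x + b*y - z + c| / sqrt(a^2 + b^2 + 1)."""
    x, y, z = p
    return abs(a * x + b * y - z + c) / np.sqrt(a * a + b * b + 1.0)
```

Applied to each of the 9 second feature points, this yields the distance vector [d26, …, d78].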
Step 305: calculate the angle relations between the reference plane and the straight lines formed by the third feature point and the first feature point, according to the three-dimensional coordinate information of the third feature point.
The angle relations between the reference plane α and the 10 straight lines formed by the nose tip Point29 and the 10 selected first feature points are calculated. Here the sine of each angle is preferably used to represent the angle relation; that is, the sines of the angles between plane α and the 10 lines joining Point29 to Point53, Point54, Point55, Point56, Point57, Point65, Point66, Point67, Point68 and Point69 are computed.
Denote the straight lines formed by the nose tip Point29 and the above 10 feature points as L1, L2, …, L10, respectively.
Let the distance from Point29 to the j-th feature point be d(29,j), with j = 53, 54, 55, 56, 57, 65, 66, 67, 68, 69; then

d(29,j) = √((x29 − xj)² + (y29 − yj)² + (z29 − zj)²).

Since the cheek feature points lie approximately on the fitted plane α, the sines of the angles between the straight lines L1, L2, …, L10 and plane α can be written as

sin_L1 = d29 / d(29,53),
sin_L2 = d29 / d(29,54),
…
sin_L10 = d29 / d(29,69),

where d29 is the distance from Point29 to plane α computed in step 304. The sines of the 10 angles are therefore

[sin_L1, sin_L2, …, sin_L10].
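For illustration, the sketch below computes the same line-to-plane sines from the plane normal and the line direction (a hypothetical helper, not the patent's formula); this general form does not require the cheek point to lie exactly on the fitted plane:

```python
import numpy as np

def line_plane_sine(p_tip, p_cheek, a, b, c):
    """Sine of the angle between the line through p_tip and p_cheek
    and the plane z = a*x + b*y + c.  With plane normal n = (a, b, -1)
    and line direction u, sin(theta) = |n . u| / (|n| * |u|)."""
    n = np.array([a, b, -1.0])
    u = np.asarray(p_tip, float) - np.asarray(p_cheek, float)
    return abs(n @ u) / (np.linalg.norm(n) * np.linalg.norm(u))
```

Evaluating this for the nose tip against each of the 10 cheek points yields the 10-element sine vector.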
Step 306: judge whether the three-dimensional face image comes from a living body according to the distance values and the angle relations.
The 9 computed distance values and 10 angle sines are concatenated into a 19-dimensional feature vector and input into the trained SVM classifier, and whether the acquired three-dimensional face image comes from a living body is judged from the output. The output takes the form 1 or -1, where 1 indicates a living body and -1 a non-living body.
In the field of machine learning, the SVM (Support Vector Machine) is a supervised learning model commonly used for pattern recognition, classification and regression analysis; it is most often applied to binary classification problems.
Feature data are collected and computed from more than 50,000 live and non-live face samples, and the classifier is trained with Matlab's SVM training function svmtrain.
Of these feature data, 28,000 samples are used for training (6,000 live and 22,000 non-live) and 24,000 for testing (4,000 live and 20,000 non-live), with real faces labeled +1 and fake faces -1. Optimal parameters are selected during training: among the parameters of Matlab's svmtrain, a Gaussian kernel is used and sigma is set to 4.
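As an illustration only, the NumPy-only sketch below trains a minimal linear SVM by Pegasos-style sub-gradient descent on 19-dimensional vectors. It is a stand-in for Matlab's svmtrain: the patent's Gaussian kernel is omitted for brevity, and all names are hypothetical.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with Pegasos-style sub-gradient
    descent on labels y in {+1, -1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            if y[i] * (X[i] @ w + b) < 1:    # hinge-loss violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

def predict(w, b, X):
    """+1 = live, -1 = non-live, matching the patent's output coding."""
    return np.where(X @ w + b >= 0, 1, -1)
```

Here X would hold the 19-dimensional vectors (9 distances plus 10 sines) of the training samples; a kernelized trainer would be needed to reproduce the Gaussian-kernel behavior with sigma = 4.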
In summary, in practical application this embodiment can accurately judge whether an acquired three-dimensional face image comes from a living body, and the recognition process has high stability and robustness.
On the other hand, an embodiment of the present invention further provides a device for three-dimensional face liveness detection, as shown in FIG. 6, comprising:
and the acquisition module 11 is used for acquiring a three-dimensional face image.
And the selecting module 12 is configured to select a first feature point and a second feature point of the three-dimensional face image.
And the extraction module 13 is configured to extract three-dimensional coordinate information of the first feature point and the second feature point.
The processing module 14 is used for fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first characteristic point; and calculating the distance value from the second characteristic point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second characteristic point.
And the judging module 15 is configured to judge whether the three-dimensional face image is from a living body according to the distance value.
This device for face liveness detection can judge whether a face image comes from a living body; the recognition accuracy is high, and the recognition result is robust and stable.
As another embodiment of the present invention, a three-dimensional face liveness detection apparatus includes:
an acquisition module for acquiring a three-dimensional face image;

a selection module for selecting a first feature point, a second feature point and a third feature point of the three-dimensional face image;

an extraction module for extracting the three-dimensional coordinate information of the first, second and third feature points;

a processing module for fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point, calculating the distance from the second feature point to the reference plane according to the three-dimensional coordinate information of the second feature point, and calculating the angle relations between the reference plane and the straight lines formed by the third feature point and the first feature point according to the three-dimensional coordinate information of the third feature point; and

a judgment module for judging whether the three-dimensional face image comes from a living body according to the angle relations and the distance values.
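The module composition above can be sketched structurally as follows (all names hypothetical; the patent describes functional modules, not code):

```python
# Structural sketch of the device embodiment: each module is injected as
# a callable and composed into one liveness-check pipeline.
class LivenessDetector:
    def __init__(self, camera, select, extract, process, judge):
        self.camera = camera    # acquisition module
        self.select = select    # selection module
        self.extract = extract  # extraction module
        self.process = process  # processing module (plane fit, distances, sines)
        self.judge = judge      # judgment module (trained classifier)

    def is_live(self):
        image = self.camera()
        p1, p2, p3 = self.select(image)
        c1, c2, c3 = self.extract(image, p1, p2, p3)
        distances, sines = self.process(c1, c2, c3)
        return self.judge(distances, sines) == 1  # +1 = live, -1 = non-live
```

Each constructor argument corresponds to one module of the claimed device; swapping in a different classifier only changes `judge`.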
Preferably, the selection module selects as the first feature point the feature points of the left and right cheeks of the face and/or their neighborhood points; compared with other combinations of feature points, the cheek feature points give a better and more stable fit of the reference plane. Alternatively, the neighborhood points of the cheek feature points may be selected, or a combination of the cheek feature points and their neighborhood points. The reference plane is fitted from the three-dimensional coordinates of the neighborhood points of the first feature point; for the same feature points of the same person, a plane fitted in this way has good stability and robustness, is insensitive to capture pose and image noise, and better represents the overall depth information of the three-dimensional face image, so the recognition result is more accurate.
Preferably, the selection module selects feature points on the central axis of the face as the second feature point, including several or all of the feature points at the brow center, nose bridge, nose tip, philtrum, middle of the lips and middle of the chin; the neighborhood points of these central-axis feature points may also be selected. Using feature points on the central axis as the second feature point for computing distances to the reference plane both better characterizes the convexity of the face and better describes the shape of its profile through the distance values, improving recognition accuracy.
Preferably, the selection module selects the nose-tip feature point of the face as the third feature point; a neighborhood point of the nose-tip feature point may also be selected. The nose tip is the most prominent position on a three-dimensional face; when it is chosen as the third feature point, the angle relations between the reference plane and the straight lines from the nose tip to the first feature point better characterize the convexity of the face, further improving the accuracy of liveness detection.
After the reference plane, the distances from the second feature point to the reference plane, and the angle relations between the reference plane and the lines from the nose tip to the first feature point are obtained, the distance values and angle relations can be used to judge whether the three-dimensional face image comes from a living body. One example of such a judgment is given here:
The judgment module comprises:

a classification unit for classifying the distance values and angle relations with a pre-trained classifier; and

a judgment unit for judging whether the three-dimensional face image comes from a living body according to the classification result.
Distance values and angle relations are first computed from a corpus of three-dimensional face data and used to train the classifier, i.e. to set the decision criterion. For a user to be authenticated/identified, the acquired three-dimensional face image is then processed to obtain the distances from the second feature point to the reference plane fitted from the first feature point, and the angle relations between that plane and the lines formed by the third feature point and the first feature point. Finally, these distance values and angle relations are input into the trained classifier, which outputs a class, and whether the image comes from a living body is judged from that class.
For example, the classifier is an SVM classifier trained on a large number of samples of distance values and angle relations. The distance values and angle relations computed from the three-dimensional face image of the user to be authenticated/identified are input into the classifier; if the output is 1, the user is a living body, and if the output is -1, the user is a non-living body. Because this embodiment uses a classifier to judge whether the three-dimensional face image comes from a living body, the recognition accuracy is further improved.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (2)

1. A method for detecting a three-dimensional human face living body is characterized by comprising the following steps:
starting a 3D camera to obtain a three-dimensional face image;
selecting a first feature point, a second feature point and a third feature point of the three-dimensional face image, and acquiring their three-dimensional coordinate information, wherein the first feature point is a feature point group comprising the left and right cheek feature points of the face, the second feature point comprises points on the central axis of the face, including all combinations of the feature points of the nose bridge, nose tip, philtrum, middle of the lips and middle of the chin, and the third feature point is the nose-tip feature point of the face;
fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first characteristic point;
calculating the spatial relationship between the first characteristic point and the second characteristic point, and calculating the distance value from the second characteristic point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second characteristic point;
calculating the relation of the included angle between a straight line formed by the third characteristic point and the first characteristic point and the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the third characteristic point;
judging whether the three-dimensional face image comes from a living body according to the included angle relation and the distance value, wherein the judging step comprises the following steps:
classifying the relation between the distance value and the included angle by using a pre-trained classifier;
and judging whether the acquired three-dimensional face image is from a living body according to the classification result.
2. A device for detecting a three-dimensional human face living body is characterized by comprising:
the acquisition module is used for acquiring a three-dimensional face image through a 3D camera;
the selection module is used for selecting a first feature point, a second feature point and a third feature point of the three-dimensional face image, wherein the first feature point comprises the left and right cheek feature points of the face, the second feature point comprises points on the central axis of the face, including all combinations of the feature points of the nose bridge, nose tip, philtrum, middle of the lips and middle of the chin, and the third feature point is the nose-tip feature point of the face;
the extraction module is used for acquiring three-dimensional coordinate information of the first characteristic point and the second characteristic point and acquiring three-dimensional coordinate information of the third characteristic point;
the processing module is used for fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first characteristic point; calculating a distance value from the second characteristic point to a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second characteristic point; calculating the relation between the included angle between a straight line formed by the third characteristic point and the first characteristic point and the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the third characteristic point;
the judging module is used for judging whether the three-dimensional face image comes from a living body or not according to the included angle relation and the distance value, wherein the judging module classifies the distance value and the included angle relation by using a pre-trained classifier; and judging whether the acquired three-dimensional face image is from a living body according to the classification result.
CN201610048509.2A 2016-01-25 2016-01-25 Three-dimensional human face living body detection method and device Active CN105740781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610048509.2A CN105740781B (en) 2016-01-25 2016-01-25 Three-dimensional human face living body detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610048509.2A CN105740781B (en) 2016-01-25 2016-01-25 Three-dimensional human face living body detection method and device

Publications (2)

Publication Number Publication Date
CN105740781A CN105740781A (en) 2016-07-06
CN105740781B true CN105740781B (en) 2020-05-19

Family

ID=56247554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610048509.2A Active CN105740781B (en) 2016-01-25 2016-01-25 Three-dimensional human face living body detection method and device

Country Status (1)

Country Link
CN (1) CN105740781B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845405A (en) * 2017-01-20 2017-06-13 武汉仟品网络科技有限公司 A kind of method, device and electronic equipment that identity is recognized by Biological imaging
CN110443102B (en) * 2018-05-04 2022-05-24 北京眼神科技有限公司 Living body face detection method and device
CN109145750A (en) * 2018-07-23 2019-01-04 华迅金安(北京)科技有限公司 A kind of driver identity rapid authentication method and system
CN109389032B (en) * 2018-08-27 2020-06-12 北京三快在线科技有限公司 Picture authenticity determining method and device, electronic equipment and readable storage medium
CN111046703B (en) * 2018-10-12 2023-04-18 杭州海康威视数字技术股份有限公司 Face anti-counterfeiting detection method and device and multi-view camera
CN111368581A (en) * 2018-12-25 2020-07-03 浙江舜宇智能光学技术有限公司 Face recognition method based on TOF camera module, face recognition device and electronic equipment
CN109508702A (en) * 2018-12-29 2019-03-22 安徽云森物联网科技有限公司 A kind of three-dimensional face biopsy method based on single image acquisition equipment
CN109993863A (en) * 2019-02-20 2019-07-09 南通大学 A kind of access control system and its control method based on recognition of face
WO2021042375A1 (en) * 2019-09-06 2021-03-11 深圳市汇顶科技股份有限公司 Face spoofing detection method, chip, and electronic device
CN111571611B (en) * 2020-05-26 2021-09-21 广州纳丽生物科技有限公司 Facial operation robot track planning method based on facial and skin features
CN111898553B (en) * 2020-07-31 2022-08-09 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930602A (en) * 2012-10-20 2013-02-13 西北大学 Tomography-image-based facial skin three-dimensional surface model reconstructing method
US9213885B1 (en) * 2004-10-22 2015-12-15 Carnegie Mellon University Object recognizer and detector for two-dimensional images using Bayesian network based classifier
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-Dimensional Face Recognition Based on Geometric Features and Depth Data; Chen Lisheng et al.; Computer Knowledge and Technology; 31 March 2013; Vol. 9, No. 8; p. 1864, paragraph 2 to p. 1867, penultimate paragraph *

Also Published As

Publication number Publication date
CN105740781A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
CN105740781B (en) Three-dimensional human face living body detection method and device
CN105574518B (en) Method and device for detecting living human face
CN105740779B (en) Method and device for detecting living human face
CN105740780B (en) Method and device for detecting living human face
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
JP5010905B2 (en) Face recognition device
CN101027678B (en) Single image based multi-biometric system and method
CN105205455B (en) The in-vivo detection method and system of recognition of face on a kind of mobile platform
Vijayan et al. Twins 3D face recognition challenge
CN104933389B (en) Identity recognition method and device based on finger veins
CN108182397B (en) Multi-pose multi-scale human face verification method
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
CN106446754A (en) Image identification method, metric learning method, image source identification method and devices
CN107480586B (en) Face characteristic point displacement-based biometric photo counterfeit attack detection method
Alheeti Biometric iris recognition based on hybrid technique
JP6071002B2 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
Bhanu et al. Human ear recognition by computer
CN110796101A (en) Face recognition method and system of embedded platform
US10915739B2 (en) Face recognition device, face recognition method, and computer readable storage medium
Krishneswari et al. A review on palm print verification system
Kour et al. Palmprint recognition system
Masaoud et al. A review paper on ear recognition techniques: models, algorithms and methods
CN112861588A (en) Living body detection method and device
Takeuchi et al. Multimodal soft biometrie verification by hand shape and handwriting motion in the air
Singh et al. Face liveness detection through face structure analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100085 Beijing, Haidian District, No. ten on the ground floor, No. 1, building 8, floor 802, 1

Applicant after: BEIJING TECHSHINO TECHNOLOGY Co.,Ltd.

Address before: 100085 Beijing, Haidian District, No. ten on the ground floor, No. 1, building 8, floor 802, 1

Applicant before: BEIJING TECHSHINO TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant