CN105740781A - Three-dimensional human face in-vivo detection method and device - Google Patents

Three-dimensional human face in-vivo detection method and device

Info

Publication number
CN105740781A
CN105740781A (application CN201610048509.2A)
Authority
CN
China
Prior art keywords
feature point
three-dimensional face images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610048509.2A
Other languages
Chinese (zh)
Other versions
CN105740781B (en)
Inventor
孔勇
王玉瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Techshino Technology Co Ltd
Original Assignee
Beijing Techshino Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Techshino Technology Co Ltd filed Critical Beijing Techshino Technology Co Ltd
Priority to CN201610048509.2A priority Critical patent/CN105740781B/en
Publication of CN105740781A publication Critical patent/CN105740781A/en
Application granted granted Critical
Publication of CN105740781B publication Critical patent/CN105740781B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/164 - Detection; Localisation; Normalisation using holistic features
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/169 - Holistic features and representations, i.e. based on the facial image taken as a whole

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a three-dimensional human face liveness detection method and device, belonging to the field of face recognition. The method comprises the following steps: acquiring a three-dimensional face image; selecting a first feature point and a second feature point of the three-dimensional face image, and acquiring three-dimensional coordinate information of the first feature point and the second feature point; fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point; calculating the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point; and judging whether the three-dimensional face image comes from a living body according to the distance value. The method disclosed by the invention can judge whether a face image comes from a living body; the recognition precision is high, and the recognition result is robust and stable.

Description

Method and apparatus for three-dimensional face liveness detection
Technical field
The present invention relates to the field of biometric recognition, and in particular to a method and apparatus for three-dimensional face liveness detection.
Background
Face recognition is a biometric identification technology that identifies a person on the basis of facial feature information. A camera or video camera collects images or a video stream containing a face, the face is automatically detected and tracked in the images, and a series of related techniques is then applied to recognize the detected face.
Because a person's biometric information cannot be kept truly secret, biometric recognition systems, like other authentication systems, have never ceased to be attacked. Compared with other biometric traits, facial features are the easiest to obtain: a forger can acquire a user's facial photograph or video by searching online or by covert photography, and then attack the face recognition and authentication system. Existing three-dimensional face liveness detection methods extract facial feature points from the acquired three-dimensional face image and judge whether the image comes from a living body using only the maximum depth difference among the three-dimensional coordinates of the feature points; their accuracy is not high, and their robustness and stability are poor.
Summary of the invention
To solve the problems of low accuracy and poor robustness and stability of existing three-dimensional face liveness detection methods, the present invention provides a method and apparatus for three-dimensional face liveness detection. The present invention can judge whether the user to be authenticated/identified is a living body; the accuracy of authentication/identification is high, and the authentication/identification result is robust and stable.
To solve the above technical problem, the present invention provides the following technical solutions:
In one aspect, a method for three-dimensional face liveness detection is provided, including:
acquiring a three-dimensional face image;
selecting a first feature point and a second feature point of the three-dimensional face image, and acquiring the three-dimensional coordinate information of the first feature point and the second feature point;
fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point;
calculating the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point;
judging whether the three-dimensional face image comes from a living body according to the distance value.
In another aspect, an apparatus for three-dimensional face liveness detection is provided, including:
an acquisition module, configured to acquire a three-dimensional face image;
a selection module, configured to select a first feature point and a second feature point of the three-dimensional face image;
an extraction module, configured to acquire the three-dimensional coordinate information of the first feature point and the second feature point;
a processing module, configured to fit a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point, and to calculate the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point;
a judgment module, configured to judge whether the three-dimensional face image comes from a living body according to the distance value.
The above technical solutions of the invention have the following advantages or beneficial effects:
The present invention can judge whether an acquired face image comes from a living body. First, the three-dimensional face image of the user to be identified/authenticated is acquired (compared with an ordinary two-dimensional face image, a three-dimensional face image additionally provides the depth information of the face); then the first feature point and the second feature point are selected in the acquired three-dimensional face image and their three-dimensional coordinate information is obtained, the reference plane of the three-dimensional face image is fitted from the three-dimensional coordinate information of the first feature point, and the distance value from the second feature point to the reference plane is calculated from the three-dimensional coordinate information of the second feature point; finally, whether the three-dimensional face image comes from a living body is judged according to the distance value from the second feature point to the reference plane.
The present invention has high accuracy. On the basis of the depth information of the first feature point and the second feature point in the three-dimensional face image, the present invention further exploits the spatial relationship between the first feature point and the second feature point, so that the accuracy of three-dimensional face recognition is further improved.
The selection of the first feature point and the second feature point can be adjusted according to the needs of the practical application, so that the stability and robustness of three-dimensional face recognition are higher.
In summary, the face liveness detection method of the present invention can judge whether a face image comes from a living body, and the recognition has high accuracy, robustness and stability.
Brief description of the drawings
Fig. 1 is a flowchart of one embodiment of the three-dimensional face liveness detection method of the present invention;
Fig. 2 is a schematic diagram of the feature points for which the 3D camera provides coordinate information in an embodiment of the present invention;
Fig. 3 is a schematic side view of a real human face in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the side profiles obtained when a face photograph is deformed by bending in an embodiment of the present invention;
Fig. 5 is a flowchart of another embodiment of the three-dimensional face liveness detection method of the present invention;
Fig. 6 is a schematic diagram of an embodiment of the three-dimensional face liveness detection apparatus of the present invention.
Detailed description of the embodiments
To make the technical problem to be solved, the technical solutions and the advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
In one aspect, an embodiment of the present invention provides a method for three-dimensional face liveness detection, as shown in Fig. 1, including:
Step 101, acquiring a three-dimensional face image.
Preferably, a device or apparatus with a three-dimensional face image acquisition function (for example, an acquisition device with a 3D camera) is used to capture the face image of the user to be identified/authenticated.
Step 102, selecting the first feature point and the second feature point in the three-dimensional face image, and acquiring the three-dimensional coordinate information of the first feature point and the second feature point.
An ordinary two-dimensional face image only yields the two-dimensional coordinates of feature points, whereas a three-dimensional face image yields their three-dimensional coordinates; the advantage of a three-dimensional face image over a two-dimensional one is that the depth information of the photographed face can be obtained. After capturing a face image, some 3D cameras can directly provide the three-dimensional coordinate information of part of the feature points of the photographed face, and the remaining feature points can be obtained by computation. Fig. 2 shows the labelling of 78 facial feature points, which are obtained by calling a facial feature point localization algorithm and are denoted in order as Point0, Point1, ..., Point76, Point77. When choosing the three-dimensional coordinate system, the direction from the 3D camera toward the user may be taken as the positive z-axis, and the positive directions of the x-axis and y-axis may be determined by a right-handed coordinate system; other ways of determining the coordinate system are of course also possible. The purpose is that the depth information of the acquired three-dimensional face image is represented, for the labelled facial feature points, by the three-dimensional coordinates (x0, y0, z0), (x1, y1, z1), ..., (x76, y76, z76), (x77, y77, z77).
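For illustration only, the labelled landmarks can be thought of as an array of 78 rows of (x, y, z) coordinates. The sketch below (Python/NumPy) shows one possible in-memory layout; the array name and helper function are assumptions of this note, not part of the patent.

```python
import numpy as np

# Hypothetical layout: row i holds the coordinates (x_i, y_i, z_i) of Point<i>,
# with z measured along the camera-to-user axis (the depth direction).
landmarks = np.zeros((78, 3), dtype=np.float64)

def point(landmarks: np.ndarray, i: int) -> np.ndarray:
    """Return the 3D coordinates (x_i, y_i, z_i) of landmark Point<i>."""
    return landmarks[i]
```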
The first feature point is selected in the three-dimensional face image. The first feature point is a feature point group including a plurality of feature points on the three-dimensional face, selected from the 78 labelled feature points of the three-dimensional face image or from other feature points. The three-dimensional coordinate information of the corresponding feature points is obtained according to the selected first feature point.
The second feature point is selected in the three-dimensional face image. The second feature point is also a feature point group including a plurality of feature points on the three-dimensional face, selected from the feature points on the central axis of the three-dimensional face image or from other feature points. The three-dimensional coordinate information of the corresponding feature points is obtained according to the selected second feature point.
Step 103, fitting the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point.
According to the selected first feature point and its three-dimensional coordinate information, the reference plane of the photographed three-dimensional face image is fitted. The coefficients of the fitted reference plane are computed using the least-squares method and Matlab.
Step 104, calculating the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point.
After the second feature point is selected and its three-dimensional coordinate information is obtained, the distance from each feature point in the second feature point group to the fitted reference plane is calculated from the coordinate information of that feature point.
Step 105, judging whether the three-dimensional face image comes from a living body according to the distance information.
Once depth information is added to the face image (i.e. in a three-dimensional face image), the side-profile information of a real living face differs from that of a photographic image; even if the photograph is deformed, for example by bending it or changing its angle, its side-profile information still differs from that of a real face. Taking the feature points on the central axis of the face as the second feature point: Fig. 3 shows the three-dimensional side profile of a real human face; the information shown is close to the real face profile, the profile contains an irregular concave-convex curve, and features of regions such as the nose can clearly be seen. Fig. 4(a) shows the side profile of a three-dimensional face image captured from a hand-held face photograph that is not bent or only very slightly bent; the thick line represents the fitted reference plane and the thin line the profile information of the face, the distance between the two lines is very small, and no irregular curve resembling a real face profile can be seen. Fig. 4(b) shows the side profile of a three-dimensional face image captured from a hand-held photograph whose left and right edges are bent inward or outward (the two cases are essentially the same); the profile represented by the thin line now lies at a certain distance from the reference plane (i.e. there is depth information), but it still appears as a straight line rather than the irregular concave-convex curve shown by a real face profile. Fig. 4(c) and Fig. 4(d) likewise show side profiles of three-dimensional face images captured from hand-held photographs whose top and bottom edges are bent inward or outward; the profile represented by the thin line lies at a certain distance from the reference plane and has a curved shape, but the curve is a smooth transition rather than the irregular concave-convex curve of a real face profile. Based on the characteristics of these photographs, the distance values from the second feature point to the fitted reference plane are used to judge whether the acquired three-dimensional face image comes from a living body. In addition, the distance values from the second feature point to the fitted reference plane characterize the shape of the face profile well.
As another embodiment of the present invention, as shown in Fig. 5, the three-dimensional face liveness detection method includes:
Step 201, acquiring a three-dimensional face image.
Preferably, a device or apparatus with a three-dimensional face image acquisition function (for example, an acquisition device with a 3D camera) is used to capture the face image of the user to be identified/authenticated.
Step 202, selecting the first feature point, the second feature point and the third feature point in the three-dimensional face image, and acquiring the three-dimensional coordinate information of the first feature point, the second feature point and the third feature point.
An ordinary two-dimensional face image only yields the two-dimensional coordinates of feature points, whereas a three-dimensional face image yields their three-dimensional coordinates; the advantage of a three-dimensional face image over a two-dimensional one is that the depth information of the photographed face can be obtained.
The first feature point is selected in the three-dimensional face image. The first feature point is a feature point group including a plurality of feature points on the three-dimensional face, selected from the 78 labelled feature points of the three-dimensional face image or from other feature points. The three-dimensional coordinate information of the corresponding feature points is obtained according to the selected first feature point.
The second feature point is selected in the three-dimensional face image. The second feature point is also a feature point group including a plurality of feature points on the three-dimensional face, selected from the feature points on the central axis of the three-dimensional face image or from other feature points. The three-dimensional coordinate information of the corresponding feature points is obtained according to the selected second feature point.
The third feature point is selected in the three-dimensional face image. The third feature point is a single feature point whose main purpose is to characterize the protruding (convex) features of the face, so a feature point in a relatively prominent facial region, such as the lip region or the nose region, may be chosen. The three-dimensional coordinate information of the corresponding feature point is obtained according to the selected third feature point.
Step 203, fitting the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point.
According to the selected first feature point and its three-dimensional coordinate information, the reference plane of the photographed three-dimensional face image is fitted. The coefficients of the fitted reference plane are computed using the least-squares method and Matlab.
Step 204, calculating the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point.
After the second feature point is selected and its three-dimensional coordinate information is obtained, the distance from each feature point in the second feature point group to the fitted reference plane is calculated from the coordinate information of that feature point.
Step 205, calculating the angle relationships between the straight lines formed by the third feature point with the first feature point and the reference plane, according to the three-dimensional coordinate information of the third feature point.
The third feature point is a specially chosen feature point, while the first feature point is a feature point group containing a plurality of feature points, so the third feature point forms a straight line with each feature point in the first feature point group; the angle relationship between each of these straight lines and the above reference plane is then obtained by computation. The angle relationship here may be the size of the angle between the formed straight line and the reference plane, or the sine or cosine of that angle, etc.; to determine the angle relationship better and make it more distinct, the sine of the angle between the straight line and the reference plane is preferably used.
Step 206, judging whether the three-dimensional face image comes from a living body according to the above distance values and angle relationships.
Using the straight lines formed by the third feature point and the first feature point, and computing the angle relationships between these straight lines and the reference plane, the protruding (i.e. non-planar) features of the face can be described better, thereby further determining whether the acquired face image comes from a living body.
Preferably, the first feature point of the above embodiments consists of 10 feature points, namely Point53, Point54, Point55, Point56, Point57 and Point65, Point66, Point67, Point68, Point69 at the left and right cheeks of the three-dimensional face image. Selecting these 10 cheek feature points gives a better fitting effect and better stability for the reference plane than combinations of other feature points. Furthermore, the 3×3 neighbourhood points of the 10 cheek feature points may be selected, so that the fitted reference plane is described jointly by the 80 neighbourhood points; or the 3×3 neighbourhood points of the 10 cheek feature points together with the 10 feature points themselves, 90 feature points in total, may jointly describe the fitted reference plane. Compared with the 10 feature points alone, using the 3×3 neighbourhood points to describe the reference plane further improves stability while the amount of computation remains essentially unchanged.
Preferably, the second feature point of the above embodiments is chosen from the feature points on the central axis of the three-dimensional face image; the feature points on the facial axis include several or all of the points at the nasal bridge, the nose tip, the philtrum, the middle of the lips and the middle of the chin, i.e. several or all of Point26, Point27, Point28, Point29, Point31, Point36, Point42 and Point61. Furthermore, the feature point midway between Point0 and Point5 may also be included; this feature point is labelled Point78 here (not shown in Fig. 2). The second feature point is then jointly composed of these 9 feature points. These 9 facial feature points have relatively pronounced concave-convex features in the three-dimensional face image, and the distance values from these 9 feature points to the fitted reference plane characterize the side-profile information of the face well. The second feature point of the above embodiments may also be chosen as the neighbourhood points of the feature points on the central axis of the three-dimensional face image, i.e. several or all of the neighbourhood points of Point26, Point27, Point28, Point29, Point31, Point36, Point42 and Point61; the distance values from these neighbourhood points to the fitted reference plane also characterize the side-profile information of the face well.
Preferably, the third feature point of the above embodiments is the nose-tip feature point of the three-dimensional face image, i.e. Point29. The nose tip is the most prominent position of a three-dimensional face; when the nose tip is chosen as the third feature point, computing the angle relationship between the straight lines formed by the nose tip with the first feature point and the reference plane characterizes the protruding features of the face better, thereby further improving the accuracy of liveness detection. The third feature point of the above embodiments may also be another prominent feature point of the three-dimensional face image, such as the middle point of the upper lip or of the lower lip, or a neighbourhood point of a prominent feature point such as the nose tip or the middle of the upper lip. The main purpose of the third feature point is to characterize the protruding features of the face, so the selected third feature point should make the angle relationships between the straight lines it forms with the first feature point and the reference plane more distinct, further characterizing the protruding features of the face and thereby further determining whether the acquired face image comes from a living body.
After the distance values from the second feature point of the three-dimensional face to the reference plane and the angle relationships between the straight lines formed by the third feature point with the first feature point and the reference plane have been computed, the distance values and angle relationships can be used to judge whether the acquired three-dimensional face image comes from a living body. One judging embodiment is given here:
the distance values and angle relationships are classified by a classifier trained in advance;
whether the acquired three-dimensional face image comes from a living body is judged according to the classification result.
This embodiment uses a classifier to judge whether the three-dimensional face image comes from a living body. A large amount of three-dimensional face sample data must be used to compute distance values and angle relationships as described above; after the classifier has been trained with the distance values and angle relationships computed from the three-dimensional face data, the decision criterion is established. Then, from the acquired three-dimensional face image of the user to be authenticated/identified, the distance values from the second feature point to the reference plane fitted from the first feature point and the angle relationships between the straight lines formed by the third feature point with the first feature point and the reference plane are computed. Finally, the obtained distance values and angle relationships are input into the trained classifier, the classification output is obtained, and whether the three-dimensional face image comes from a living body is judged from the classification.
For example, the classifier is an SVM classifier trained with a large number of distance-value and angle-relationship samples. The distance values and angle relationships obtained from the three-dimensional face image of the user to be authenticated/identified are input into the classifier; if the output result is 1 the image is from a living body, and if the output result is -1 it is not. The results can of course be expressed in other forms; this is merely one implementation. Using a classifier to judge whether the three-dimensional face image is from a living body further improves the accuracy of recognition.
The above embodiments can judge whether an acquired three-dimensional face image comes from a living body. First the first feature point and the second feature point of the three-dimensional face image are obtained together with their three-dimensional coordinates; a reference plane of the three-dimensional face image is fitted from the three-dimensional coordinate information of the first feature point, and the distance values from the second feature point to that reference plane are then calculated from the three-dimensional coordinate information of the second feature point. The distance values characterize, on the one hand, the protruding features of the three-dimensional face and, on the other hand, the shape of its side profile, so that whether the acquired three-dimensional face image comes from a living body can be judged; planar frauds such as photographs and videos are effectively rejected, and the accuracy of face recognition is improved. Taking the 10 feature points at the left and right cheeks of the face and/or the neighbourhood points of those 10 cheek feature points as the first feature point further improves the stability and robustness of the face recognition result.
The present invention is further explained below with a preferred embodiment:
Step 301, opening the 3D camera and acquiring a three-dimensional face image.
An existing algorithm is called to open the 3D camera and acquire a face image, and the three-dimensional coordinate information of 78 feature points in the face image is obtained; these 78 feature points are obtained by calling a facial feature point localization algorithm, as shown in Fig. 2, and are denoted in order as Point0, Point1, ..., Point76, Point77.
Step 302, selecting the first feature point, the second feature point and the third feature point, and acquiring the three-dimensional coordinate information of the first feature point, the second feature point and the third feature point.
To make the reference plane fitted from the first feature point more stable, the first feature point here preferably consists of the 3×3 neighbourhood points of the 10 feature points at the left and right cheeks of the face, i.e. the 3×3 neighbourhood points of Point53, Point54, Point55, Point56, Point57 and Point65, Point66, Point67, Point68, Point69, plus the 10 feature points themselves, 90 points in total; the three-dimensional coordinate information of these 90 points is obtained.
To better characterize the protruding features of the three-dimensional face and reflect the shape information of the face's side profile, the second feature point here preferably consists of the feature points on the facial central axis, i.e. Point26, Point27, Point28, Point29, Point31, Point36, Point42 and Point61, 8 feature points in total, plus the glabella feature point, i.e. the intermediate point of Point0 and Point5, labelled Point78; for convenience in the formulas below, its three-dimensional coordinate information is denoted (x78, y78, z78). The second feature point thus consists of 9 feature points in total.
The nose-tip feature point, i.e. Point29, is preferably selected as the third feature point. The nose tip is the most prominent position of a three-dimensional face; when the nose tip is chosen as the third feature point, computing the angle relationship between the straight lines formed by the nose tip with the first feature point and the reference plane characterizes the protruding features of the face better, thereby further improving the accuracy of liveness detection.
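To make the selection concrete, the sketch below (Python/NumPy, reusing the hypothetical landmarks array from the earlier snippet) lists the landmark indices named above and builds the 9 second-feature points; "Point78" is taken here as the arithmetic midpoint of Point0 and Point5, which is one reading of the intermediate point described in the text. Gathering the 3×3 neighbourhood points of the cheek landmarks is omitted because it depends on the pixel layout of the particular 3D camera.

```python
import numpy as np

# Cheek landmarks used to fit the reference plane (first feature point group).
CHEEK_IDS = [53, 54, 55, 56, 57, 65, 66, 67, 68, 69]
# Mid-line landmarks used for the distance features (second feature point group).
AXIS_IDS = [26, 27, 28, 29, 31, 36, 42, 61]
# Nose tip, used as the third feature point.
NOSE_TIP_ID = 29

def second_feature_points(landmarks: np.ndarray) -> np.ndarray:
    """Return the 9 second-feature points: the 8 mid-line landmarks plus
    'Point78', taken here as the midpoint of Point0 and Point5 (glabella)."""
    point78 = 0.5 * (landmarks[0] + landmarks[5])
    return np.vstack([landmarks[AXIS_IDS], point78])
```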
Step 303, fitting the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point.
The three-dimensional coordinate information of the 90 points obtained above is used to fit the reference plane; the three-dimensional coordinates of the 90 points are denoted in order as (px0, py0, pz0), (px1, py1, pz1), ..., (px89, py89, pz89).
Using the least-squares method, the plane α: z = a*x + b*y + c is fitted from the three-dimensional coordinates (px0, py0, pz0), (px1, py1, pz1), ..., (px89, py89, pz89) of the 90 cheek points describing the reference plane.
The coefficients a, b and c are computed as follows. Let
A = [px0 py0 1; px1 py1 1; ...; px88 py88 1; px89 py89 1], X = [a; b; c], Z = [pz0; pz1; ...; pz88; pz89].
Then
A*X = Z,
and the three coefficients of the plane are obtained in Matlab by the following computation:
X = A\Z, or X = (A'*A)^(-1)*A'*Z.
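The same least-squares solution can be written outside Matlab; the sketch below (Python/NumPy, assuming the 90 fitting points are supplied as an (N, 3) array) solves the overdetermined system A*X = Z exactly as above.

```python
import numpy as np

def fit_reference_plane(points: np.ndarray) -> tuple:
    """Fit the plane z = a*x + b*y + c to an (N, 3) array of 3D points.

    Builds A = [x y 1] and Z = z and returns the least-squares solution of
    A*X = Z, matching the Matlab formulation X = A\\Z used in the text.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])   # N x 3 design matrix
    X, *_ = np.linalg.lstsq(A, z, rcond=None)      # least-squares coefficients
    a, b, c = X
    return a, b, c
```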
Step 304, calculating the distance values from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point.
The distances from the 9 feature points of the second feature point group, Point26, Point27, Point28, Point29, Point31, Point36, Point42, Point61 and Point78, to the reference plane α are computed; one computation method is given here, as follows:
denoting by di the distance from the i-th feature point to the fitted plane,
di = |a*xi + b*yi - zi + c| / sqrt(a^2 + b^2 + 1),
so the distance values from the above 9 feature points to the reference plane are [d26, d27, d28, d29, d31, d36, d42, d61, d78].
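A direct transcription of this distance formula, under the same assumptions as the previous sketch (the plane coefficients (a, b, c) come from the fitting function above):

```python
import numpy as np

def distances_to_plane(points: np.ndarray, plane: tuple) -> np.ndarray:
    """Distance of each (x, y, z) point to the plane z = a*x + b*y + c,
    i.e. |a*x + b*y - z + c| / sqrt(a^2 + b^2 + 1)."""
    a, b, c = plane
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.abs(a * x + b * y - z + c) / np.sqrt(a**2 + b**2 + 1.0)
```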
Step 305, calculating the angle relationships between the straight lines formed by the third feature point with the first feature point and the reference plane, according to the three-dimensional coordinate information of the third feature point.
The angles between the 10 straight lines formed by the nose tip Point29 with the 10 feature points selected for the first feature point and the reference plane α are computed; the sine of the angle is preferably used here to represent the angle relationship, i.e. the sines of the angles between the plane α and the 10 straight lines formed by the feature point Point29 with the feature points Point53, Point54, Point55, Point56, Point57, Point65, Point66, Point67, Point68 and Point69.
Let the straight lines formed by the nose tip Point29 with the above 10 feature points be L1, L2, ..., L10 respectively.
Denote by di,j the distance from the feature point Point29 to the j-th feature point, with i = 29 and j = 53, 54, 55, 56, 57, 65, 66, 67, 68, 69; then
di,j = sqrt((xi - xj)^2 + (yi - yj)^2 + (zi - zj)^2).
The sines of the angles between the straight lines L1, L2, ..., L10 and the plane α can then be written as:
sin_L1 = d29 / d29,53, sin_L2 = d29 / d29,54, sin_L3 = d29 / d29,55,
sin_L4 = d29 / d29,56, sin_L5 = d29 / d29,57, sin_L6 = d29 / d29,65,
sin_L7 = d29 / d29,66, sin_L8 = d29 / d29,67, sin_L9 = d29 / d29,68, sin_L10 = d29 / d29,69.
That is, the above 10 angle sines are:
[sin_L1, sin_L2, ..., sin_L10].
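The ten sines follow the same pattern: the distance from the nose tip to the reference plane divided by the length of the segment from the nose tip to each cheek landmark (the cheek landmarks are treated as lying on the fitted plane). A sketch under the same assumptions as the previous snippets:

```python
import numpy as np

def angle_sines(nose_tip: np.ndarray, cheek_points: np.ndarray, plane: tuple) -> np.ndarray:
    """Sines of the angles between the plane z = a*x + b*y + c and the lines
    from the nose tip to each cheek landmark, computed as d_29 / d_{29,j}."""
    a, b, c = plane
    d_nose = abs(a * nose_tip[0] + b * nose_tip[1] - nose_tip[2] + c) / np.sqrt(a**2 + b**2 + 1.0)
    segment_lengths = np.linalg.norm(cheek_points - nose_tip, axis=1)  # d_{29,j}
    return d_nose / segment_lengths
```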
Step 306, judging whether the three-dimensional face image comes from a living body according to the above distance values and angle relationships.
The 9 computed distance values and the 10 angle sines are taken as a 19-dimensional feature vector and input into the trained SVM classifier, and whether the acquired three-dimensional face image comes from a living body is judged according to the output result. The output here is expressed as 1 or -1, where 1 means the judgement is a living body and -1 means it is not.
In the field of machine learning, the SVM (Support Vector Machine) is a supervised learning model commonly used for pattern recognition, classification and regression analysis; it is frequently applied to two-class problems.
Feature data of more than 50,000 live and non-live faces were collected and computed, and the classifier was trained with the SVM training function svmtrain of Matlab.
Of these feature data, the training samples number 28,000 (6,000 live, 22,000 non-live) and the test samples number 24,000 (4,000 live, 20,000 non-live); real faces are labelled +1 and fake faces -1. The best parameters are chosen during training: in the parameters of the Matlab SVM training function svmtrain, a Gaussian kernel is selected and sigma = 4 is set.
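The patent trains its classifier with Matlab's svmtrain. As an illustrative analogue only (not the authors' code), the same 19-dimensional features could be classified with scikit-learn's SVC; a Gaussian kernel exp(-||u-v||^2 / (2*sigma^2)) with sigma = 4 corresponds to gamma = 1/(2*sigma^2) = 1/32 in the RBF kernel, assuming that parameterization of Matlab's kernel.

```python
import numpy as np
from sklearn.svm import SVC

def train_liveness_svm(X_train: np.ndarray, y_train: np.ndarray) -> SVC:
    """Train an RBF-kernel SVM on 19-dim vectors [9 distances, 10 angle sines];
    labels are +1 for live faces and -1 for spoofs, as in the text."""
    clf = SVC(kernel="rbf", gamma=1.0 / (2 * 4.0 ** 2))  # sigma = 4 -> gamma = 1/32
    clf.fit(X_train, y_train)
    return clf

def is_live(clf: SVC, features: np.ndarray) -> bool:
    """Return True when the classifier outputs +1 (living body)."""
    return clf.predict(features.reshape(1, -1))[0] == 1
```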
In summary, in practical application this embodiment can accurately judge whether the acquired three-dimensional face image comes from a living body, and the recognition process has high stability and robustness.
In another aspect, an embodiment of the present invention also provides an apparatus for three-dimensional face liveness detection, as shown in Fig. 6, including:
an acquisition module 11, configured to acquire a three-dimensional face image;
a selection module 12, configured to select the first feature point and the second feature point of the three-dimensional face image;
an extraction module 13, configured to extract the three-dimensional coordinate information of the first feature point and the second feature point;
a processing module 14, configured to fit a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point, and to calculate the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point;
a judgment module 15, configured to judge whether the three-dimensional face image comes from a living body according to the distance value.
The face liveness detection apparatus of the embodiment of the present invention can judge whether a face image comes from a living body; the accuracy of recognition is high, and the recognition result is robust and stable.
As another embodiment of the present invention, the three-dimensional face liveness detection apparatus includes:
an acquisition module, configured to acquire a three-dimensional face image;
a selection module, configured to select the first feature point, the second feature point and the third feature point of the three-dimensional face image;
an extraction module, configured to extract the three-dimensional coordinate information of the first feature point, the second feature point and the third feature point;
a processing module, configured to fit a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point, to calculate the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point, and to calculate the angle relationship between the straight line formed by the third feature point with the first feature point and the reference plane according to the three-dimensional coordinate information of the third feature point;
a judgment module, configured to judge whether the three-dimensional face image comes from a living body according to the angle relationship and the distance value.
Preferably, the selection module preferentially selects the left and right cheek feature points of the face and/or the neighbourhood points of the left and right cheek feature points of the face as the first feature point. Selecting the cheek feature points gives a better fitting effect and better stability for the reference plane than combinations of other feature points; furthermore, the neighbourhood points of the cheek feature points, or the combination of the cheek feature neighbourhood points and the cheek feature points, may also be selected. The reference plane of the three-dimensional face image is obtained by fitting the three-dimensional coordinate information of the neighbourhood points of the first feature point; for the same feature points of the same person, the reference plane fitted from the neighbourhood points of the first feature point has good stability and robustness, is not affected by the pose at face acquisition or by image noise, and better characterizes the overall depth information of the three-dimensional face image, so that the recognition result is more accurate.
Preferably, the selection module preferentially selects the feature points on the facial central axis as the second feature point; the feature points on the facial axis include several or all of the points at the glabella, the nasal bridge, the nose tip, the philtrum, the middle of the lips and the middle of the chin. The neighbourhood points of the feature points on the central axis of the three-dimensional face image may also be selected, including several or all of the neighbourhood points of the points at the glabella, the nasal bridge, the nose tip, the philtrum, the middle of the lips and the middle of the chin. Taking the feature points on the facial axis as the second feature point and computing their distances to the reference plane can, on the one hand, better characterize the protruding features of the face and, on the other hand, better describe the shape information of the face's side profile through the distance values, improving the accuracy of recognition.
Preferably, the selection module preferentially selects the nose-tip feature point of the face as the third feature point; a neighbourhood point of the nose-tip feature point may also be selected as the third feature point. The nose tip is the most prominent position of a three-dimensional face; when the nose tip is chosen as the third feature point, computing the angle relationship between the straight lines formed by the nose tip with the first feature point and the reference plane characterizes the protruding features of the face better, thereby further improving the accuracy of liveness detection.
After the face reference plane information, the distance values from the second feature point to the reference plane and the angle relationships between the straight lines formed by the nose tip with the first feature point and the reference plane have been obtained, the distance values and angle relationships can be used to judge whether the three-dimensional face image comes from a living body. One judging embodiment is given here:
the judgment module includes:
a classification unit, configured to classify the distance values and angle relationships using a classifier trained in advance;
a judging unit, configured to judge whether the three-dimensional face image is from a living body according to the classification result.
Distance values and angle relationships as described above are computed from three-dimensional face data; after the classifier has been trained with the distance values and angle relationships computed from the three-dimensional face data, the decision criterion is established. Then, from the acquired three-dimensional face image of the user to be authenticated/identified, the distance values from the second feature point to the reference plane fitted from the first feature point and the angle relationships between the straight lines formed by the third feature point with the first feature point and the reference plane are computed. Finally, the obtained distance values and angle relationships are input into the trained classifier, the classification output is obtained, and whether the three-dimensional face image comes from a living body is judged from the classification.
For example, the classifier is an SVM classifier trained with a large number of distance-value and angle-relationship samples. The distance values and angle relationships obtained from the three-dimensional face image of the user to be authenticated/identified are input into the classifier; if the output result is 1 the image is from a living body, and if the output result is -1 it is not. The results can of course be expressed in other forms; this is merely one implementation. Using a classifier to judge whether the three-dimensional face image is from a living body further improves the accuracy of recognition.
The above are preferred embodiments of the present invention. It should be pointed out that those skilled in the art may make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method for three-dimensional face liveness detection, characterized by comprising:
acquiring a three-dimensional face image;
selecting a first feature point and a second feature point of the three-dimensional face image, and acquiring the three-dimensional coordinate information of the first feature point and the second feature point;
fitting a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point;
calculating the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point;
judging whether the three-dimensional face image comes from a living body according to the distance value.
2. The method for three-dimensional face liveness detection according to claim 1, characterized by further comprising:
selecting a third feature point of the three-dimensional face image, and acquiring the three-dimensional coordinate information of the third feature point;
calculating the angle relationship between the straight line formed by the third feature point with the first feature point and the reference plane according to the three-dimensional coordinate information of the third feature point;
wherein judging whether the three-dimensional face image comes from a living body according to the distance value is: judging whether the three-dimensional face image comes from a living body according to the angle relationship and the distance value.
3. The method for three-dimensional face liveness detection according to claim 1 or 2, characterized in that the first feature point comprises left and right cheek feature points of the face and/or neighbourhood points of the left and right cheek feature points of the face.
4. The method for three-dimensional face liveness detection according to claim 1 or 2, characterized in that the second feature point comprises feature points on the central axis of the face.
5. The method for three-dimensional face liveness detection according to claim 2, characterized in that the third feature point is the nose-tip feature point of the face.
6. An apparatus for three-dimensional face liveness detection, characterized by comprising:
an acquisition module, configured to acquire a three-dimensional face image;
a selection module, configured to select a first feature point and a second feature point of the three-dimensional face image;
an extraction module, configured to acquire the three-dimensional coordinate information of the first feature point and the second feature point;
a processing module, configured to fit a reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the first feature point, and to calculate the distance value from the second feature point to the reference plane of the three-dimensional face image according to the three-dimensional coordinate information of the second feature point;
a judgment module, configured to judge whether the three-dimensional face image comes from a living body according to the distance value.
7. The apparatus for three-dimensional face liveness detection according to claim 6, characterized in that
the selection module is further configured to select a third feature point of the three-dimensional face image;
the extraction module is further configured to acquire the three-dimensional coordinate information of the third feature point;
the processing module is further configured to calculate the angle relationship between the straight line formed by the third feature point with the first feature point and the reference plane according to the three-dimensional coordinate information of the third feature point;
and the judgment module is configured to judge whether the three-dimensional face image comes from a living body according to the angle relationship and the distance value.
8. The apparatus for three-dimensional face liveness detection according to claim 6 or 7, characterized in that
the first feature point comprises left and right cheek feature points of the face and/or neighbourhood points of the left and right cheek feature points of the face.
9. The apparatus for three-dimensional face liveness detection according to claim 6 or 7, characterized in that
the second feature point comprises feature points on the central axis of the face.
10. The apparatus for three-dimensional face liveness detection according to claim 7, characterized in that
the third feature point is the nose-tip feature point of the face.
CN201610048509.2A 2016-01-25 2016-01-25 Three-dimensional human face living body detection method and device Active CN105740781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610048509.2A CN105740781B (en) 2016-01-25 2016-01-25 Three-dimensional human face living body detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610048509.2A CN105740781B (en) 2016-01-25 2016-01-25 Three-dimensional human face living body detection method and device

Publications (2)

Publication Number Publication Date
CN105740781A true CN105740781A (en) 2016-07-06
CN105740781B CN105740781B (en) 2020-05-19

Family

ID=56247554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610048509.2A Active CN105740781B (en) 2016-01-25 2016-01-25 Three-dimensional human face living body detection method and device

Country Status (1)

Country Link
CN (1) CN105740781B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845405A (en) * 2017-01-20 2017-06-13 武汉仟品网络科技有限公司 A kind of method, device and electronic equipment that identity is recognized by Biological imaging
CN109145750A (en) * 2018-07-23 2019-01-04 华迅金安(北京)科技有限公司 A kind of driver identity rapid authentication method and system
CN109389032A (en) * 2018-08-27 2019-02-26 北京三快在线科技有限公司 Determination method, apparatus, electronic equipment and the readable storage medium storing program for executing of picture authenticity
CN109508702A (en) * 2018-12-29 2019-03-22 安徽云森物联网科技有限公司 A kind of three-dimensional face biopsy method based on single image acquisition equipment
CN109993863A (en) * 2019-02-20 2019-07-09 南通大学 A kind of access control system and its control method based on recognition of face
CN110443102A (en) * 2018-05-04 2019-11-12 北京眼神科技有限公司 Living body faces detection method and device
CN111046703A (en) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 Face anti-counterfeiting detection method and device and multi-view camera
CN111368581A (en) * 2018-12-25 2020-07-03 浙江舜宇智能光学技术有限公司 Face recognition method based on TOF camera module, face recognition device and electronic equipment
CN111571611A (en) * 2020-05-26 2020-08-25 广州纳丽生物科技有限公司 Facial operation robot track planning method based on facial and skin features
CN111898553A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN112711968A (en) * 2019-10-24 2021-04-27 浙江舜宇智能光学技术有限公司 Face living body detection method and system
CN112997185A (en) * 2019-09-06 2021-06-18 深圳市汇顶科技股份有限公司 Face living body detection method, chip and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930602A (en) * 2012-10-20 2013-02-13 西北大学 Tomography-image-based facial skin three-dimensional surface model reconstructing method
US9213885B1 (en) * 2004-10-22 2015-12-15 Carnegie Mellon University Object recognizer and detector for two-dimensional images using Bayesian network based classifier
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9213885B1 (en) * 2004-10-22 2015-12-15 Carnegie Mellon University Object recognizer and detector for two-dimensional images using Bayesian network based classifier
CN102930602A (en) * 2012-10-20 2013-02-13 西北大学 Tomography-image-based facial skin three-dimensional surface model reconstructing method
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈立生 et al.: "Three-dimensional face recognition based on geometric features and depth data" (基于几何特征与深度数据的三维人脸识别), 《电脑知识与技术》 (Computer Knowledge and Technology) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845405A (en) * 2017-01-20 2017-06-13 武汉仟品网络科技有限公司 A kind of method, device and electronic equipment that identity is recognized by Biological imaging
CN110443102B (en) * 2018-05-04 2022-05-24 北京眼神科技有限公司 Living body face detection method and device
CN110443102A (en) * 2018-05-04 2019-11-12 北京眼神科技有限公司 Living body faces detection method and device
CN109145750A (en) * 2018-07-23 2019-01-04 华迅金安(北京)科技有限公司 A kind of driver identity rapid authentication method and system
CN109389032A (en) * 2018-08-27 2019-02-26 北京三快在线科技有限公司 Determination method, apparatus, electronic equipment and the readable storage medium storing program for executing of picture authenticity
CN109389032B (en) * 2018-08-27 2020-06-12 北京三快在线科技有限公司 Picture authenticity determining method and device, electronic equipment and readable storage medium
CN111046703A (en) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 Face anti-counterfeiting detection method and device and multi-view camera
CN111046703B (en) * 2018-10-12 2023-04-18 杭州海康威视数字技术股份有限公司 Face anti-counterfeiting detection method and device and multi-view camera
CN111368581A (en) * 2018-12-25 2020-07-03 浙江舜宇智能光学技术有限公司 Face recognition method based on TOF camera module, face recognition device and electronic equipment
CN109508702A (en) * 2018-12-29 2019-03-22 安徽云森物联网科技有限公司 A kind of three-dimensional face biopsy method based on single image acquisition equipment
CN109993863A (en) * 2019-02-20 2019-07-09 南通大学 A kind of access control system and its control method based on recognition of face
CN112997185A (en) * 2019-09-06 2021-06-18 深圳市汇顶科技股份有限公司 Face living body detection method, chip and electronic equipment
CN112711968A (en) * 2019-10-24 2021-04-27 浙江舜宇智能光学技术有限公司 Face living body detection method and system
CN111571611A (en) * 2020-05-26 2020-08-25 广州纳丽生物科技有限公司 Facial operation robot track planning method based on facial and skin features
CN111898553A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN111898553B (en) * 2020-07-31 2022-08-09 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment

Also Published As

Publication number Publication date
CN105740781B (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN105740781A (en) Three-dimensional human face in-vivo detection method and device
CN105574518B (en) Method and device for detecting living human face
CN105740780B (en) Method and device for detecting living human face
CN105023010B (en) A kind of human face in-vivo detection method and system
CN108717531B (en) Human body posture estimation method based on Faster R-CNN
CN101398886B (en) Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN105740779A (en) Method and device for human face in-vivo detection
CN101558996B (en) Gait recognition method based on orthogonal projection three-dimensional reconstruction of human motion structure
JP5873442B2 (en) Object detection apparatus and object detection method
US8977010B2 (en) Method for discriminating between a real face and a two-dimensional image of the face in a biometric detection process
US9117138B2 (en) Method and apparatus for object positioning by using depth images
CN104933389B (en) Identity recognition method and device based on finger veins
CN106919941A (en) A kind of three-dimensional finger vein identification method and system
CN105956582A (en) Face identifications system based on three-dimensional data
CN108182397B (en) Multi-pose multi-scale human face verification method
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
Ma et al. Using b-spline curves for hand recognition
Tuceryan et al. A framework for estimating probability of a match in forensic bite mark identification
US20200065564A1 (en) Method for determining pose and for identifying a three-dimensional view of a face
CN109117726A (en) A kind of identification authentication method, device, system and storage medium
Panetta et al. Unrolling post-mortem 3D fingerprints using mosaicking pressure simulation technique
CN116883472B (en) Face nursing system based on face three-dimensional image registration
CN106156739A (en) A kind of certificate photo ear detection analyzed based on face mask and extracting method
CN104951767A (en) Three-dimensional face recognition technology based on correlation degree
WO2021235440A1 (en) Method and device for acquiring movement feature amount using skin information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100085 Beijing, Haidian District, No. ten on the ground floor, No. 1, building 8, floor 802, 1

Applicant after: BEIJING TECHSHINO TECHNOLOGY Co.,Ltd.

Address before: 100085 Beijing, Haidian District, No. ten on the ground floor, No. 1, building 8, floor 802, 1

Applicant before: BEIJING TECHSHINO TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant