Method and apparatus for face liveness detection
Technical field
The present invention relates to the fields of biometric identification and image processing, and in particular to a method and apparatus for face liveness detection.
Background art
With the rapid development of the information society, people enjoy the convenience that information technology brings while paying ever more attention to the security of personal information. Securing information and identity through a person's intrinsic biological characteristics has won increasingly wide acceptance and is gradually becoming the first choice for system login, identity authentication and public-safety management. Among the many biometric identification technologies, face recognition is non-intrusive and contactless: it requires no cooperation from the user and can acquire and recognize a face image even when the user is unaware of it. After decades of development it has been widely applied in settings such as public-safety surveillance, attendance and access control, and border inspection. As these applications spread, however, the security weaknesses of existing face recognition systems have gradually surfaced: a printed photograph, a photo or video shown on a phone or tablet, and the like can break through the security perimeter of a face recognition system, causing serious harm and threatening social stability. Adding efficient face liveness detection to a face recognition system has therefore become an effective way to eliminate this security risk.
The face liveness detection methods in common use today rely on human-computer interaction: the system issues instructions and the user performs the corresponding actions (for example, closing the eyes or opening the mouth), from which the system decides whether a real face is present. Such methods demand a high degree of user cooperation, give a poor user experience and take a long time, so the non-intrusive advantage of face recognition is lost. Other methods add sensing devices, such as thermal infrared sensors or body-weight sensors, to the face recognition equipment, but their detection performance is poor and they are easily defeated by wrongdoers. In recent years, with the development of three-dimensional face recognition, the three-dimensional data of the face has also been applied to liveness detection, but usually only the depth data is used, chiefly by computing the maximum depth difference; the accuracy is low and the applicability limited. There is therefore an urgent need for a fast, accurate, convenient and efficient face liveness detection method that overcomes the above shortcomings and solves this technical problem awaiting resolution by those skilled in the art.
Summary of the invention
To overcome the deficiencies of the prior art, the object of the present invention is to provide a method and apparatus for face liveness detection that can conveniently and efficiently judge whether a three-dimensional face image comes from a living body, with fast detection, high accuracy and strong adaptability.
The technical solution provided by the invention is as follows:
In one aspect, a method of face liveness detection is provided, comprising:
acquiring a three-dimensional face image;
extracting, from the three-dimensional face image, three-dimensional coordinate information of facial feature points and pose feature information of the three-dimensional face image;
using the three-dimensional coordinate information of the facial feature points to compute face feature information representing facial characteristics;
using the face feature information representing facial characteristics and the pose feature information of the three-dimensional face image to judge whether the three-dimensional face image comes from a living body.
In another aspect, an apparatus for face liveness detection is provided, comprising:
an acquisition module for acquiring a three-dimensional face image;
an extraction module for extracting the three-dimensional coordinate information of facial feature points in the three-dimensional face image and the pose feature information of the three-dimensional face image;
a computation module for using the three-dimensional coordinate information of the facial feature points to compute face feature information representing facial characteristics;
a judgment module for using the face feature information representing facial characteristics and the pose feature information of the three-dimensional face image to judge whether the three-dimensional face image comes from a living body.
The present invention has the following beneficial effects:
The present invention can judge whether a three-dimensional face image comes from a living body. A three-dimensional face image is first acquired; the three-dimensional coordinates of the facial feature points and the pose feature information of the image are then extracted; face feature information representing facial characteristics is computed from the feature-point coordinates; finally, the face feature information and the pose feature information together determine whether the image comes from a living body.
The face liveness detection method of the present invention is convenient and efficient: only a three-dimensional face image of the user is needed, the user does not have to cooperate by performing any actions, the whole detection process is shortened, and the detection is fast.
The method is also highly accurate: judging liveness from both the face feature information and the pose feature information gives higher accuracy and stronger adaptability than detection methods that rely on a single feature such as the maximum depth difference.
In summary, the face liveness detection method of the present invention can judge whether a three-dimensional face image comes from a living body; it is convenient, efficient and fast, with high accuracy and strong adaptability.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of the face liveness detection method of the present invention;
Fig. 2 is a flowchart of one example of the embodiment of the face liveness detection method of the present invention;
Fig. 3 is a flowchart of another example of the embodiment of the face liveness detection method of the present invention;
Fig. 4 is a schematic diagram of one way of selecting facial feature points in embodiment one of the present invention;
Fig. 5 is a schematic diagram of one way of selecting facial feature points in embodiment two of the present invention;
Fig. 6 is a schematic diagram of one way of selecting facial feature points in embodiment three of the present invention;
Fig. 7 is a schematic diagram of eye-gaze tracking of a face according to the present invention, in which Figs. 7a and 7b show the eye-gaze tracks of a living face and Figs. 7c and 7d show the eye-gaze tracks of a non-living face;
Fig. 8 is a schematic diagram of an embodiment of the face liveness detection apparatus of the present invention;
Fig. 9 is a schematic diagram of one example of the embodiment of the face liveness detection apparatus of the present invention;
Fig. 10 is a schematic diagram of another example of the embodiment of the face liveness detection apparatus of the present invention.
Detailed description of the embodiments
To make the technical problem to be solved, the technical solution and the advantages of the present invention clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
In one aspect, an embodiment of the present invention provides a method of face liveness detection which, as shown in Fig. 1, comprises:
Step 101: acquire a three-dimensional face image.
In this step, the three-dimensional face image is preferably acquired with a three-dimensional face image acquisition device.
Step 102: extract the three-dimensional coordinate information of the facial feature points in the three-dimensional face image, and the pose feature information of the three-dimensional face image.
The image captured by an ordinary two-dimensional face acquisition device is a planar face image from which only the two-dimensional coordinates of the feature points, i.e. their (x, y) values, can be obtained. A three-dimensional face image acquisition device captures a three-dimensional face image from which the three-dimensional coordinates of the feature points can be obtained, comprising a horizontal coordinate, a vertical coordinate and a depth coordinate, i.e. the (x, y, z) values; compared with a two-dimensional image, the depth coordinate z is added. For example, the three-dimensional face image acquisition device may consist of two infrared cameras and an infrared laser emitter, or of one infrared camera, one color camera and an infrared laser emitter. Imitating the parallax principle of human eyes, the device captures the face images simultaneously, tracks the infrared rays, and computes the depth coordinate of the three-dimensional face image by the triangulation principle.
The three-dimensional coordinate information of the facial feature points extracted in this step is distance information in a three-dimensional coordinate space. One camera of the three-dimensional face image acquisition device is taken as the origin of the space, the direction from the device toward the user as the positive z-axis, and the positive x- and y-axes are determined by a right-handed coordinate system. Each feature point on the face image is converted to its distances from the origin in this coordinate space, yielding the three-dimensional coordinates of the facial feature points.
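The triangulation step described above can be sketched for the simplest case of two parallel cameras, where depth equals focal length times baseline divided by disparity. All numeric values below are hypothetical illustrations; the invention does not specify the device's calibration parameters.

```python
# Hedged sketch of disparity-based depth recovery for a two-camera rig,
# as described for the 3-D face image acquisition device. The focal
# length, baseline and disparity values used here are hypothetical.

def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Depth (mm) of a point seen by two parallel cameras.

    focal_px     -- focal length in pixels
    baseline_mm  -- distance between the two camera centers in mm
    disparity_px -- horizontal pixel shift of the point between the views
    """
    if disparity_px <= 0:
        raise ValueError("point must be in front of both cameras")
    return focal_px * baseline_mm / disparity_px

# A point with 50 px disparity, seen by cameras with a 60 mm baseline
# and a 500 px focal length, lies 600 mm away.
z_mm = depth_from_disparity(500.0, 60.0, 50.0)
```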
In this step, the pose feature information of the three-dimensional face image refers to the pose of a single frame and/or the pose change between two frames, such as the offset position, the tilt angle and the rotation angle.
Step 103: use the three-dimensional coordinates of the facial feature points to compute face feature information representing facial characteristics.
Face feature information refers to the facial characteristics of a single frame and/or the change in facial characteristics between two frames. A face image carries many feature points, spread over several main regions such as the nose, eyes, mouth, cheeks and eyebrows, each region consisting of multiple feature points, so many three-dimensional coordinates are extracted, and pose changes or lighting conditions may leave considerable interference and noise in them. If some raw three-dimensional coordinates were used directly for liveness detection, they might fail to characterize the face of the three-dimensional face image well, causing large deviations, false detections and low accuracy. Computing face feature information from the feature-point coordinates effectively removes interference and noise; the face feature information characterizes the face better, avoids false detections and improves accuracy. The face feature information may be a multidimensional feature vector obtained by computation or feature data in some other form.
Step 104: use the face feature information representing facial characteristics and the pose feature information of the three-dimensional face image to judge whether the three-dimensional face image comes from a living body.
The embodiment of the present invention can judge whether a three-dimensional face image comes from a living body. A three-dimensional face image is first acquired; the three-dimensional coordinates of the facial feature points and the pose feature information of the image are then extracted; face feature information representing facial characteristics is computed from the feature-point coordinates; finally, the face feature information and the pose feature information together determine whether the image comes from a living body.
The face liveness detection method of the embodiment is convenient and efficient: only a three-dimensional face image of the user is needed, the user does not have to cooperate by performing any actions, the whole detection process is shortened, and the detection is fast.
The method is also accurate and adaptable. Computing face feature information from the three-dimensional feature-point coordinates effectively removes interference and noise, characterizes the face more accurately, avoids false detections and improves accuracy; and judging liveness from both the face feature information and the pose feature information gives higher accuracy and stronger adaptability than methods that rely on a single feature such as the maximum depth difference.
In summary, the method of the embodiment can judge whether a three-dimensional face image comes from a living body; it is convenient, efficient and fast, with high accuracy and strong adaptability.
In one example of the above embodiment, as shown in Fig. 2, step 104 comprises:
Step 1041: merge the face feature information representing facial characteristics and the pose feature information of the three-dimensional face image into one joint feature;
In this step the face feature information and the pose feature information are merged into one joint feature, which may be a multivariate joint feature vector or joint feature data in some other form. The joint feature is comprehensive and characterizes the facial characteristics of the three-dimensional face image more accurately; being a composite feature, it also adapts better.
Step 1042: use the joint feature to judge whether the three-dimensional face image comes from a living body.
In this example the face feature information and the pose feature information are first merged into one joint feature, which is then used to judge whether the three-dimensional face image comes from a living body. Judging from the joint feature means treating the two kinds of feature information as one whole and weighing both factors at once in a comprehensive decision; the feature information is more complete, the adaptability stronger and the accuracy higher.
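Step 1041 amounts to concatenating the two feature vectors. A minimal sketch with hypothetical values follows; the dimensions are chosen to match embodiment two's 7 + 3 = 10-dimensional joint feature.

```python
# Merging face features and pose features into one joint feature vector.
# All numeric values are hypothetical, for illustration only.

face_feats = [-12.0, -11.5, -12.2, -8.0, -11.8, -10.9, -11.1]  # hypothetical depth differences
pose_feats = [3.5, -1.2, 0.4]                                  # hypothetical (yaw, pitch, roll)

# The joint feature that would be passed to the classifier in step 1042.
joint_feats = face_feats + pose_feats
```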
In another example of the above embodiment, as shown in Fig. 3, step 104 comprises:
Step 1041': use the pose feature information of the three-dimensional face image to judge whether the image comes from a living body; if so, proceed to the next step;
Step 1042': use the face feature information representing facial characteristics to judge whether the three-dimensional face image comes from a living body.
In this example the pose feature information of the three-dimensional face image is used first: if the judgment is negative, the final detection result is obtained directly and the image is judged not to come from a living body; if it is positive, the face feature information representing facial characteristics is then used for a further, finer judgment. The method thus makes a preliminary judgment from the pose feature information, screening out images that obviously do not come from a living body, and then uses the face feature information for a more refined judgment, which effectively improves the efficiency of liveness detection as well as its speed and accuracy. This example is only one possibility: the preliminary judgment may instead be made from the face feature information and the further judgment from the pose feature information; the order is not limited to that described above.
Preferably, in the embodiment of the present invention, the pose feature information of the three-dimensional face image consists of the yaw angle, pitch angle and roll angle of the image, where the yaw angle is the rotation of the whole face image about the y-axis of the three-dimensional coordinate system, the pitch angle is its rotation about the x-axis, and the roll angle is its rotation about the z-axis.
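A minimal sketch of how the yaw, pitch and roll angles can gate the liveness decision, as in the two-stage example above. The 30-degree limit is a hypothetical threshold chosen for illustration; the invention does not fix a numeric value.

```python
# Preliminary pose-based gate: a frame whose head pose is implausibly far
# from frontal is rejected before the finer face-feature judgment.
# The limit of 30 degrees is a hypothetical value, not from the patent.

def pose_plausible(yaw: float, pitch: float, roll: float,
                   limit_deg: float = 30.0) -> bool:
    """Return True when all three pose angles (in degrees) are within
    the limit, i.e. the face is close enough to frontal to examine
    further; return False to judge the image as non-living directly."""
    return all(abs(angle) <= limit_deg for angle in (yaw, pitch, roll))
```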
Further, in the embodiment of the present invention, the facial feature points comprise multiple feature points in one or more of the eye, nose and mouth regions. Because the feature points of a face image are affected by ambient lighting, some unstable noise points often appear and affect the liveness judgment to a certain degree. The eye, nose and mouth regions lie in the upper, middle and lower parts of the face and represent its principal characteristics, and the eyes and nose are respectively the most recessed and the most protruding regions of the face, so they have good stability; multiple feature points in one or more of the eye, nose and mouth regions are therefore preferred for characterizing the face, giving strong stability.
Further, in the embodiment of the present invention, the face feature information representing facial characteristics computed from the three-dimensional coordinates of the facial feature points may be one or more of a point-point distance, a point-plane distance, a line-plane angle and a line-line angle, where a point is a feature point, a plane is a plane formed by feature points, and a line is a straight line through two feature points. Alternatively, the face feature information may be depth differences between feature points computed from the coordinates, eye-gaze track information, and the like.
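Two of the feature types named above can be sketched directly, using the plane parameterization z = a*x + b*y + c that embodiment one adopts below. These helper functions are illustrative assumptions, not part of the invention's text.

```python
import math

# Sketches of two geometric feature types: point-point distance and
# point-plane distance, for a plane written as z = a*x + b*y + c
# (normal vector (a, b, -1)). Function names are hypothetical.

def point_point_dist(p, q):
    """Euclidean distance between two 3-D feature points."""
    return math.dist(p, q)

def point_plane_dist(p, a, b, c):
    """Distance from point p = (x, y, z) to the plane z = a*x + b*y + c."""
    x, y, z = p
    return abs(a * x + b * y - z + c) / math.sqrt(a * a + b * b + 1.0)
```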
The method of face liveness detection of the present invention is described in detail below through three preferred embodiments:
Embodiment one:
Step 1): turn on the three-dimensional face image acquisition device and acquire one three-dimensional face image.
Step 2): extract the three-dimensional coordinates of the facial feature points of the three-dimensional face image and the pose feature information of the image, where the facial feature points are 15 feature points in the eye, nose and mouth regions, as shown in Fig. 4, and the pose feature information consists of the yaw, pitch and roll angles in the three-dimensional coordinate space.
Fig. 4 shows the labels of the 78 feature points of the three-dimensional face image (these 78 feature points may be provided directly by the acquisition device or computed from the three-dimensional face image); they are denoted Point0, Point1, ..., Point76, Point77, and their three-dimensional coordinates are denoted (x_0, y_0, z_0), (x_1, y_1, z_1), ..., (x_76, y_76, z_76), (x_77, y_77, z_77).
These 78 feature points fall into 5 regions:
the eyebrow region, with 16 feature points: Point0, Point1, ..., Point9, Point70, ..., Point75;
the eye region, with 18 feature points: Point10, Point11, ..., Point25, Point76, Point77;
the nose region, with 7 feature points: Point26, Point27, ..., Point32;
the mouth region, with 20 feature points: Point33, Point34, ..., Point52;
the cheek region, with 17 feature points: Point53, Point54, ..., Point69.
We find that the region that best characterizes a living face is the nose, followed by the eyes and mouth, and finally the eyebrow and cheek regions; the nose, eye and mouth regions are therefore preferred, 15 feature points in total (marked by black circles in Fig. 4), and their three-dimensional coordinates are extracted. The 15 chosen feature points are the 6 eye-region points Point10, Point14, Point18, Point22, Point76, Point77, the 7 nose-region points Point26, Point27, Point28, Point29, Point30, Point31, Point32, and the 2 mouth-region points Point33, Point39; their three-dimensional coordinates are denoted (x_10, y_10, z_10), (x_14, y_14, z_14), ..., (x_77, y_77, z_77). A triangle is constructed with feature points Point31, Point76, Point77 as vertices, and the three-dimensional coordinates of these three points, (x_31, y_31, z_31), (x_76, y_76, z_76), (x_77, y_77, z_77), are obtained.
The pose feature information of the three-dimensional face image may be provided directly by the acquisition device or computed from the image; it is denoted (yaw, pitch, roll), with values in degrees (°).
Step 3): use the three-dimensional coordinates of the 7 nose-region points, the 6 eye-region points and the 2 mouth-region points of step 2) to compute a 21-dimensional feature representing the current face.
The concrete computation proceeds as follows:
First, using the least-squares method, a plane β is determined through the three feature points Point26, Point30, Point32:
z = a*x + b*y + c.
The coefficients a, b, c are computed as follows. Let
A = [x_26 y_26 1; x_30 y_30 1; x_32 y_32 1], X = [a; b; c], Z = [z_26; z_30; z_32],
i.e. A*X = Z. Solving in Matlab then gives the three coefficients of plane β:
X = A\Z, i.e. X = (A^T A)^(-1) A^T Z.
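The least-squares solve above, Matlab's `X = A\Z`, can be sketched in Python with `numpy.linalg.lstsq`. The three point coordinates below are hypothetical stand-ins for Point26, Point30, Point32, chosen only to make the example checkable.

```python
import numpy as np

# Least-squares fit of the plane z = a*x + b*y + c through three feature
# points (hypothetical coordinates standing in for Point26, Point30,
# Point32). With exactly three points the system is determined; lstsq
# also handles an overdetermined set of points unchanged.

pts = np.array([[0.0, 0.0, 1.0],   # stand-in for Point26
                [1.0, 0.0, 2.0],   # stand-in for Point30
                [0.0, 1.0, 3.0]])  # stand-in for Point32

A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
Z = pts[:, 2]
(a, b, c), *_ = np.linalg.lstsq(A, Z, rcond=None)  # X = (A^T A)^(-1) A^T Z
```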
Next, the distances from feature points Point27, Point28, Point29 and Point31 to the plane β are computed. Denoting by dist_{i,j} the distance from the i-th feature point to the j-th feature point, we here compute the distances between feature point Point29 and each of Point10, Point14, Point18, Point22, Point26, Point30, Point32, and between feature point Point31 and each of Point26, Point30, Point32, 10 feature values in total:
dist_{29,10}, dist_{29,14}, dist_{29,18}, dist_{29,22}, dist_{29,26}, dist_{29,30}, dist_{29,32}, dist_{31,26}, dist_{31,30}, dist_{31,32}.
Then the sines of the angles between the plane β and straight lines through the 29th, 28th and 31st feature points are computed:
let L1 be the straight line determined by the 29th and 26th feature points,
L2 the straight line determined by the 29th and 30th feature points,
L3 the straight line determined by the 29th and 32nd feature points,
L4 the straight line determined by the 28th and 26th feature points,
L5 the straight line determined by the 28th and 30th feature points,
L6 the straight line determined by the 28th and 32nd feature points,
L7 the straight line determined by the 31st and 26th feature points,
L8 the straight line determined by the 31st and 30th feature points,
L9 the straight line determined by the 31st and 32nd feature points.
The sines of the angles between L1, L2, ..., L9 and the plane β are denoted sin_L1, sin_L2, ..., sin_L9. The sine of the angle between the plane β and the line L10 determined by the 29th and 28th feature points is also computed.
This gives another 10 feature values: [sin_L1, sin_L2, ..., sin_L10].
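The line-plane sine values above can be sketched as follows: for the plane z = a*x + b*y + c the normal is n = (a, b, -1), and the sine of the angle between the plane and a line with direction vector d is |d·n| / (|d||n|). The direction vectors in the usage lines are hypothetical examples.

```python
import math

def line_plane_sin(d, a, b, c):
    """Sine of the angle between a line with direction vector d and the
    plane beta: z = a*x + b*y + c, whose normal is n = (a, b, -1)."""
    n = (a, b, -1.0)
    dot = sum(di * ni for di, ni in zip(d, n))
    norm_d = math.sqrt(sum(di * di for di in d))
    norm_n = math.sqrt(a * a + b * b + 1.0)
    return abs(dot) / (norm_d * norm_n)

# Sanity examples on a horizontal plane z = 5 (normal (0, 0, -1)):
# a vertical line makes a 90-degree angle (sine 1), a line lying in
# the plane makes a 0-degree angle (sine 0).
sin_vertical = line_plane_sin((0.0, 0.0, 1.0), 0.0, 0.0, 5.0)
sin_in_plane = line_plane_sin((1.0, 1.0, 0.0), 0.0, 0.0, 5.0)
```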
Then the angle θ between the straight line formed by feature points Point31 and Point76 and the straight line formed by feature points Point31 and Point77 is computed from their direction vectors:
(x_31 - x_76, y_31 - y_76, z_31 - z_76) and (x_77 - x_31, y_77 - y_31, z_77 - z_31).
The angle θ distinguishes living faces from non-living faces well, particularly when a printed photograph is bent to fake a face. Because of the rigid structure of the face, the angle θ of a living face remains essentially unchanged within a certain pose range, whereas the angle θ of a bent photograph changes considerably as the degree of bending increases.
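The angle θ between the two lines can be sketched with the dot product of the direction vectors given above. The coordinates below are hypothetical stand-ins for Point31, Point76 and Point77, used only to make the computation concrete.

```python
import math

# Angle between the line through Point31-Point76 and the line through
# Point31-Point77, from the dot product of their direction vectors.
# All coordinates are hypothetical sample values.

p31 = (0.0, 0.0, 10.0)   # stand-in for Point31
p76 = (-3.0, 4.0, 10.0)  # stand-in for Point76
p77 = (3.0, 4.0, 10.0)   # stand-in for Point77

u = tuple(a - b for a, b in zip(p31, p76))  # direction (x_31-x_76, ...)
v = tuple(a - b for a, b in zip(p77, p31))  # direction (x_77-x_31, ...)

dot = sum(ui * vi for ui, vi in zip(u, v))
theta = math.acos(dot / (math.dist(p31, p76) * math.dist(p77, p31)))
```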
Finally, the values computed above are assembled into one feature vector, giving a 10 + 10 + 1 = 21-dimensional face feature.
Step 4): merge the face feature information of step 3) and the pose feature information of step 2) into one 24-dimensional joint feature.
The pose information (yaw, pitch, roll) of the current three-dimensional face image obtained in step 2) is combined with the 21-dimensional face feature computed in step 3), giving a 24-dimensional joint feature, i.e. [dist_{29,10}, ..., dist_{31,32}, sin_L1, ..., sin_L10, θ, yaw, pitch, roll], which represents one face.
Step 5): use the 24-dimensional joint feature representing the face obtained in step 4) to judge whether the three-dimensional face image comes from a living body.
The 24-dimensional joint feature is input to a trained SVM classifier, and liveness is judged from the output: an output of +1 means the image comes from a living body; an output of -1 means it does not.
In the field of machine learning, an SVM (Support Vector Machine) is a supervised learning model commonly used for pattern recognition, classification and regression analysis; SVMs are most often applied to two-class problems.
The classifier was trained with the Matlab SVM training function svmtrain on feature data collected and computed from nearly 30,000 living and non-living faces.
Of these, 16,000 are training samples (6,000 living, 10,000 non-living) and 12,500 are test samples (4,000 living, 8,500 non-living); real faces are labeled +1 and fake faces -1. The best parameters were chosen during training: in the parameters of the Matlab svmtrain function, a Gaussian kernel was set, with sigma = 4.
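The training setup can be sketched outside Matlab. A rough scikit-learn equivalent of svmtrain with a Gaussian kernel of sigma = 4 is SVC with an RBF kernel and gamma = 1 / (2 * sigma**2); this mapping, and the synthetic feature data below, are assumptions for illustration, not the patent's actual data or toolchain.

```python
# Hedged sketch of the classifier training: a Gaussian-kernel SVM with
# labels +1 (living) and -1 (non-living). The 24-dim "joint features"
# here are synthetic stand-ins drawn from two separated clusters.
import numpy as np
from sklearn.svm import SVC

sigma = 4.0
clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
live = rng.normal(loc=1.0, scale=0.2, size=(60, 24))   # stand-in live features
spoof = rng.normal(loc=0.0, scale=0.2, size=(60, 24))  # stand-in spoof features
X = np.vstack([live, spoof])
y = np.array([+1] * 60 + [-1] * 60)

clf.fit(X, y)
pred_live = clf.predict(live[:1])[0]
pred_spoof = clf.predict(spoof[:1])[0]
```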
The face feature information representing facial characteristics computed from the three-dimensional coordinates of the facial feature points comprises point-point distances, point-plane distances, line-plane angles and line-line angles, where a point is a feature point, a plane is a plane formed by feature points, and a line is a straight line through two feature points; any one or more of these may serve as face feature information. Embodiment one uses point-point distances, point-plane distances, line-plane angles and line-line angles together, making the face feature information richer and more varied, better able to characterize the face, and improving detection accuracy.
The face liveness detection method of embodiment one computes the face feature information representing the face from 15 chosen feature points of the face image regions; this is only an example: the number of feature points is not limited to 15 and may be larger or smaller, nor is the selection limited to the method described above.
Embodiment two:
Step 1): turn on the three-dimensional face image acquisition device and acquire one three-dimensional face image.
Step 2): extract the three-dimensional coordinates of the facial feature points of the three-dimensional face image and the pose feature information of the image, where the facial feature points are 8 feature points in the eye and nose regions, as shown in Fig. 5, and the pose feature information consists of the yaw, pitch and roll angles in the three-dimensional coordinate space.
Because the eyes and nose are respectively the most recessed and the most protruding regions of the face, they have good stability and embody the three-dimensional character of a three-dimensional face image well; the 6 eye-region points Point10, Point14, Point18, Point22, Point76, Point77 and the 2 nose-region points Point26, Point29 are therefore preferred, and their three-dimensional coordinates are denoted (x_10, y_10, z_10), (x_14, y_14, z_14), ..., (x_77, y_77, z_77).
The pose feature information of the three-dimensional face image may be provided directly by the acquisition device or computed from the image; it is denoted (yaw, pitch, roll), with values in degrees (°).
Step 3): use the three-dimensional coordinates of the 6 eye-region points and the 2 nose-region points of step 2) to compute a 7-dimensional face feature representing the current face.
First, obtain the depth coordinate information in the three-dimensional coordinate information of 8 unique point Point10, Point14, Point18, Point26, Point22, Point76, Point77, Point29, i.e. the z value of each point, is expressed as follows respectively:
Point10 → z10, Point14 → z14, Point18 → z18, Point26 → z26, Point22 → z22, Point76 → z76, Point77 → z77, Point29 → z29
Then compute the difference between the depth coordinate of feature point Point29 and that of each of the other 7 feature points Point10, Point14, Point18, Point26, Point22, Point76 and Point77:
dist29,10 = z29 - z10, dist29,14 = z29 - z14, dist29,18 = z29 - z18, dist29,26 = z29 - z26, dist29,22 = z29 - z22, dist29,76 = z29 - z76, dist29,77 = z29 - z77
Finally, the 7 depth-difference feature values are obtained, namely [dist29,10, dist29,14, dist29,18, dist29,26, dist29,22, dist29,76, dist29,77].
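The depth-difference computation above can be sketched as follows. The landmark coordinates used in the example are made-up values for illustration only, not data from the specification.

```python
import numpy as np

def depth_difference_feature(landmarks):
    """landmarks: dict mapping point name -> (x, y, z).

    Returns the 7 depth differences z29 - zi for the seven other
    feature points, in the order given in the text."""
    ref_z = landmarks["Point29"][2]  # depth of the nose-region reference point
    order = ["Point10", "Point14", "Point18", "Point26",
             "Point22", "Point76", "Point77"]
    return np.array([ref_z - landmarks[p][2] for p in order])

# Example with made-up depth values (x and y are irrelevant here).
pts = {name: (0.0, 0.0, z) for name, z in [
    ("Point10", 40.0), ("Point14", 38.0), ("Point18", 41.0),
    ("Point26", 25.0), ("Point22", 39.0), ("Point76", 45.0),
    ("Point77", 44.0), ("Point29", 10.0)]}
print(depth_difference_feature(pts))  # 7 depth-difference values
```

Because Point29 lies in the most protruding region and the eye points in the most recessed one, the differences are consistently large in magnitude for a real face but near zero for a flat photograph, which is what makes this feature discriminative.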
Step 4): merge the face feature information from Step 3 and the pose feature information from Step 2 into a 10-dimensional joint feature.
Taking the pose information (yaw, pitch, roll) of the current three-dimensional face image obtained in Step 2 and combining it with the 7-dimensional face feature computed in Step 3 yields the 10-dimensional joint feature [dist29,10, dist29,14, dist29,18, dist29,26, dist29,22, dist29,76, dist29,77, yaw, pitch, roll], which represents one face.
Step 5): use the 10-dimensional joint feature representing the face, obtained in Step 4, to judge whether the three-dimensional face image comes from a live body.
The 10-dimensional joint feature is input to a trained SVM classifier, and the judgement is made from its output: +1 indicates a live body, -1 a non-live body.
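A minimal sketch of this classification step is given below. The specification uses a trained SVM; here a nearest-centroid rule stands in for it so the example remains self-contained, and all training data is synthetic, chosen only to mimic the deep nose-to-eye relief of a real face versus the near-flat relief of a printed photograph.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 10-D joint features: 7 depth differences + (yaw, pitch, roll).
live = rng.normal(loc=-25.0, scale=3.0, size=(50, 10))  # real faces: large relief
flat = rng.normal(loc=-2.0, scale=1.0, size=(50, 10))   # printed photos: almost flat

centroid_live = live.mean(axis=0)
centroid_flat = flat.mean(axis=0)

def classify(joint_feature):
    """Return +1 (live) or -1 (non-live), mimicking the SVM's label outputs."""
    d_live = np.linalg.norm(joint_feature - centroid_live)
    d_flat = np.linalg.norm(joint_feature - centroid_flat)
    return +1 if d_live < d_flat else -1

print(classify(live[0]))   # a live-face sample
print(classify(flat[0]))   # a flat-photo sample
```

In a real system the centroid rule would be replaced by the SVM trained on labelled positive and negative samples, as the specification describes.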
The face liveness detection method of Embodiment Two computes the face feature information representing the face characteristics from 8 selected feature points of the face image region. This is only an example: the feature points are not limited to the above 8, may be more or fewer, and may be selected by methods other than the one described.
Embodiment three:
Step 1): turn on the three-dimensional face image acquisition device and acquire a three-dimensional face image.
Step 2): extract the three-dimensional coordinate information of the face feature points of the three-dimensional face image, together with its pose feature information. Here the face feature points are 2 eye-region feature points, namely the pupil feature points, as shown in Figure 6, and the pose feature information comprises the yaw angle, pitch angle and roll angle in three-dimensional coordinate space.
Even while the head pose stays unchanged, the pupils generally still move, for instance as the line of sight shifts. The pupil feature points therefore have good dynamic variation characteristics and capture liveness well, so the 2 feature points Point76 and Point77 at the pupils of the eye region are preferred. Their three-dimensional coordinates are denoted in turn by:
(x76, y76, z76), (x77, y77, z77)
The pose feature information of the three-dimensional face image may be supplied directly by the acquisition device or computed from the image itself; it is denoted in turn as (yaw, pitch, roll), with values in degrees (°).
Step 3): use the three-dimensional coordinates of the 2 eye-region feature points from Step 2 to compute a three-dimensional feature representing the eye-gaze tracking information.
Because the gaze direction differs as the pupil position differs, the change in pupil-centre position can be used to judge the line of sight. First, with the head still, the pupil positions are collected from a frontal face image and taken as reference data for the pupil position; when the head rotates, a coordinate compensation mechanism must also be established.
The three-dimensional coordinates of the 2 eye-region feature points Point76 and Point77 are, respectively:
Point76 → (x76, y76, z76), Point77 → (x77, y77, z77)
The three-dimensional coordinates of Point76 and Point77 are used to compute the eye-gaze tracking information, expressed as (GazeX, GazeY, Gazeθ): GazeX is the horizontal coordinate and GazeY the vertical coordinate of the gaze position mapped into the face image by a coordinate transform, and Gazeθ is the gaze angle mapped into the face image by the same transform. The coordinate transform maps the three-dimensional coordinates in three-dimensional space (distances relative to the origin) onto the two-dimensional face image, whose coordinate system takes the top-left corner as origin; for a 2-megapixel image sensor, the horizontal coordinate is at most 1920 and the vertical coordinate at most 1080.
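One possible form of this mapping is sketched below. The pinhole-style projection constants (FX, FY, CX, CY) and the definition of Gazeθ as the angle of the inter-pupil line are assumptions for illustration only; the specification does not give the transform explicitly.

```python
import math

FX, FY = 1000.0, 1000.0   # assumed focal lengths, in pixels
CX, CY = 960.0, 540.0     # assumed principal point (image centre)
W, H = 1920, 1080         # 2-megapixel sensor, origin at top-left corner

def project(pt3d):
    """Map a 3-D point (camera coordinates, z > 0) to pixel coordinates,
    clamped to the image bounds."""
    x, y, z = pt3d
    u = min(max(FX * x / z + CX, 0.0), W - 1)
    v = min(max(FY * y / z + CY, 0.0), H - 1)
    return u, v

def gaze_feature(p76, p77):
    """Gaze position = midpoint of the projected pupils;
    Gazeθ = angle of the inter-pupil line, in degrees (an assumption)."""
    u1, v1 = project(p76)
    u2, v2 = project(p77)
    gaze_x, gaze_y = (u1 + u2) / 2.0, (v1 + v2) / 2.0
    gaze_theta = math.degrees(math.atan2(v2 - v1, u2 - u1))
    return gaze_x, gaze_y, gaze_theta

# Two hypothetical pupil points, roughly level, 500 mm from the camera.
print(gaze_feature((-30.0, 5.0, 500.0), (30.0, 5.0, 500.0)))
```

A production system would additionally apply the coordinate compensation mentioned above when the head rotates, so that gaze changes are not confounded with pose changes.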
Step 4): use the pose feature information of the three-dimensional face image from Step 2 to judge whether the image comes from a live body; if so, proceed to the next step.
First, the pose feature information (yaw, pitch, roll) of the three-dimensional face image is used to make a preliminary judgement; here the pose feature information is the pose change between two frames of face images. While the three-dimensional face image of a live face is being acquired, the face cannot remain perfectly still: there is always slight movement, so the pose feature information also changes. A non-live face, such as a printed photograph, shows essentially no pose change, and its pose feature information stays constant. The pose feature information can therefore give a preliminary liveness judgement: if the image does not come from a live body, the final result of non-live is obtained directly; if it does, the procedure moves to Step 5 for further judgement. Screening in this way improves detection efficiency.
The pose feature information (yaw, pitch, roll) of the three-dimensional face image is input to a trained SVM classifier, and the judgement is made from its output. If the output is -1, the image is non-live; if it is +1, the image is live, and the change in pose feature information, namely the yaw angle difference, pitch angle difference and roll angle difference, is also output.
Step 5): use the three-dimensional eye-gaze tracking feature obtained in Step 3 to judge whether the three-dimensional face image comes from a live body.
The eye-gaze tracking information is the change in gaze position, i.e. the difference in the eye-gaze tracking information between two frames of three-dimensional face images. Extensive experiments show that, owing to the flexibility of a live body, when the pose change is small, i.e. the pose feature information (yaw, pitch, roll) varies only slightly, the gaze tracking information can change greatly, that is, (GazeX, GazeY, Gazeθ) can change by a relatively large margin; when the pose change is large, i.e. (yaw, pitch, roll) varies considerably, the gaze tracking information changes very little, even remaining almost constant. A non-live body such as a printed photograph behaves differently: when (yaw, pitch, roll) varies slightly, the gaze tracking information does not change or changes only slightly, and when (yaw, pitch, roll) varies considerably, the gaze tracking information is bound to change greatly, as illustrated in Figure 7. In Fig. 7a, a live face has pose feature information (yaw, pitch, roll) = (2, -9, 0) and gaze tracking information (GazeX, GazeY, Gazeθ) = (805, 585, -16); its gaze lies in the lower right of the image, as the rectangular frame shows. In Fig. 7b, the live face has (yaw, pitch, roll) = (2, -10, 0) and (GazeX, GazeY, Gazeθ) = (450, 567, 45); its gaze lies in the lower left of the image, as the rectangular frame shows. Comparing the two, the pose feature information of the live face changes only slightly (here a slight change of the pitch angle), while the gaze tracking information changes greatly, shifting from the lower right in Fig. 7a to the lower left in Fig. 7b. In Fig. 7c, a non-live face has (yaw, pitch, roll) = (4, -12, -1) and gaze tracking position (GazeX, GazeY, Gazeθ) = (1068, 564, 27), as the rectangular frame shows. In Fig. 7d, the non-live face has (yaw, pitch, roll) = (-6, -12, 10) and (GazeX, GazeY, Gazeθ) = (1093, 178, 72), as the rectangular frame shows. Comparing these, the pose feature information of the non-live face changes greatly (here the yaw, pitch and roll angles all change considerably), and the gaze tracking information likewise changes greatly, shifting from the lower right in Fig. 7c to the upper right in Fig. 7d.
Based on the above, the classifier is trained for two situations. In the first, the pose (yaw, pitch, roll) changes only slightly, i.e. the yaw, pitch and roll differences between the preceding and following frames are confined within a certain range; a threshold K1 is set (K1 is a parameter obtained when training classifier C1), and if the gaze-tracking difference between the two frames, (GazeX1 - GazeX2, GazeY1 - GazeY2, Gazeθ1 - Gazeθ2), exceeds K1 the sample is live, otherwise non-live; classifier C1 is trained on positive and negative samples accordingly. In the second situation, the pose (yaw, pitch, roll) changes considerably, i.e. the yaw, pitch and roll differences between the two frames exceed a certain range; a threshold K2 is set (K2 is a parameter obtained when training classifier C2), and if the gaze-tracking difference between the two frames is smaller than K2 the sample is live, otherwise non-live; classifier C2 is trained on positive and negative samples accordingly. The two classifiers are then logically combined into a combined classifier.
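The two-case decision rule can be sketched as follows. The thresholds K1 and K2 and the pose-change split POSE_LIMIT are assumed values for illustration; in the specification they come from training classifiers C1 and C2, and the magnitude measure used here is likewise an assumption.

```python
POSE_LIMIT = 3.0  # degrees; assumed split between "small" and "large" pose change
K1 = 50.0         # assumed C1 threshold: gaze change must EXCEED this to be live
K2 = 200.0        # assumed C2 threshold: gaze change must STAY UNDER this to be live

def gaze_magnitude(g1, g2):
    """Magnitude of the (GazeX, GazeY, Gazeθ) change between two frames."""
    return sum((a - b) ** 2 for a, b in zip(g1, g2)) ** 0.5

def combined_classifier(pose1, pose2, gaze1, gaze2):
    """Return +1 (live) or -1 (non-live) per the two-case rule above."""
    pose_change = max(abs(a - b) for a, b in zip(pose1, pose2))
    gaze_change = gaze_magnitude(gaze1, gaze2)
    if pose_change <= POSE_LIMIT:          # case 1 (C1): small pose change
        return +1 if gaze_change > K1 else -1
    else:                                  # case 2 (C2): large pose change
        return +1 if gaze_change < K2 else -1

# Fig. 7a/7b-style live case: tiny pitch change, large gaze jump.
print(combined_classifier((2, -9, 0), (2, -10, 0),
                          (805, 585, -16), (450, 567, 45)))
# Fig. 7c/7d-style non-live case: large pose change AND large gaze change.
print(combined_classifier((4, -12, -1), (-6, -12, 10),
                          (1068, 564, 27), (1093, 178, 72)))
```

The logical combination captures the key asymmetry: a live face moves its eyes independently of its head, while a rotated photograph moves "gaze" and pose together.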
The three-dimensional eye-gaze tracking feature of the three-dimensional face image is input to the combined classifier, and the judgement is made from its output: -1 indicates a non-live body, +1 a live body.
Using the current three-dimensional eye-gaze tracking feature for this further judgement effectively improves the efficiency of liveness detection, raising both detection speed and accuracy.
In another aspect, an embodiment of the present invention provides a face liveness detection device, as shown in Fig. 8, comprising:
an acquisition module 11 for acquiring a three-dimensional face image;
an extraction module 12 for extracting the three-dimensional coordinate information of the face feature points in the three-dimensional face image, together with its pose feature information;
a computing module 13 for computing, from the three-dimensional coordinates of the face feature points, the face feature information representing the face characteristics; and
a judging module 14 for judging, from the face feature information and the pose feature information of the three-dimensional face image, whether the image comes from a live body.
The face liveness detection device of this embodiment of the invention can judge whether a three-dimensional face image comes from a live body; it is convenient and efficient, fast, highly accurate and highly adaptable.
As one illustration of the above embodiment, shown in Figure 9, the judging module 14 further comprises:
a merging unit 141 for merging the face feature information and the pose feature information of the three-dimensional face image into one joint feature; and
a first judging unit 142 for judging from the joint feature whether the three-dimensional face image comes from a live body.
In this device, the merging unit merges the face feature information and the pose feature information of the three-dimensional face image into one joint feature, and the first judging unit then judges from the joint feature whether the image comes from a live body; the feature information is comprehensive, and the device is highly adaptable and accurate.
As another illustration of the above embodiment, shown in Figure 10, the judging module 14 further comprises:
a second judging unit 141' for judging from the pose feature information of the three-dimensional face image whether it comes from a live body and, if so, passing control to the third judging unit; and
a third judging unit 142' for judging from the face feature information whether the three-dimensional face image comes from a live body.
In this device, the second judging unit first judges from the pose feature information whether the three-dimensional face image comes from a live body; if not, the final detection result is obtained directly, and if so, the third judging unit then uses the face feature information for further detection. The preliminary judgement by the second judging unit screens out images that clearly do not come from a live body, after which the third judging unit makes the finer judgement, effectively improving the efficiency, speed and accuracy of liveness detection. This embodiment is only an example: the second judging unit could instead make the preliminary judgement from the face feature information, with the third judging unit judging further from the pose feature information; the arrangement is not limited to the one described above.
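The module structure of Fig. 8 can be sketched as a simple pipeline. The callables wired in below (camera, landmark extractor, classifier) are hypothetical stand-ins for the concrete components, not part of the specification.

```python
class FaceLivenessDevice:
    """Chains the four modules of Fig. 8: acquire -> extract -> compute -> judge."""

    def __init__(self, acquire, extract, compute, judge):
        self.acquire = acquire  # module 11: () -> 3-D face image
        self.extract = extract  # module 12: image -> (landmarks, pose)
        self.compute = compute  # module 13: landmarks -> face feature
        self.judge = judge      # module 14: (feature, pose) -> +1 / -1

    def detect(self):
        image = self.acquire()
        landmarks, pose = self.extract(image)
        feature = self.compute(landmarks)
        return self.judge(feature, pose)

# Wiring with trivial stand-in components for illustration:
device = FaceLivenessDevice(
    acquire=lambda: "frame",
    extract=lambda img: ([1.0, 2.0], (0.0, -1.0, 0.0)),
    compute=lambda lms: [z * 2 for z in lms],
    judge=lambda feat, pose: +1 if feat and any(pose) else -1,
)
print(device.detect())
```

Either variant of the judging module (Fig. 9's merge-then-judge or Fig. 10's two-stage screening) slots in as the `judge` callable without changing the rest of the pipeline.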
The above are preferred embodiments of the present invention. It should be pointed out that those skilled in the art can make further improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be considered within the scope of protection of the invention.