CN105574518A - Method and device for human face living detection - Google Patents

Method and device for human face living detection

Info

Publication number
CN105574518A
Authority
CN
China
Prior art keywords
face
dimensional
information
face characteristic
face images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610048002.7A
Other languages
Chinese (zh)
Other versions
CN105574518B (en)
Inventor
王玉瑶
孔勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eye Intelligent Technology Co Ltd
Beijing Eyecool Technology Co Ltd
Original Assignee
Beijing Techshino Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Techshino Technology Co Ltd filed Critical Beijing Techshino Technology Co Ltd
Priority to CN201610048002.7A priority Critical patent/CN105574518B/en
Publication of CN105574518A publication Critical patent/CN105574518A/en
Application granted granted Critical
Publication of CN105574518B publication Critical patent/CN105574518B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Abstract

The invention discloses a method and a device for face liveness detection, belonging to the fields of biometric authentication and image processing. The method comprises: acquiring a three-dimensional face image; extracting from the three-dimensional face image the three-dimensional coordinate information of facial feature points and the pose feature information of the image; computing, from the three-dimensional coordinate information of the facial feature points, face feature information that characterizes the face; and determining, from the face feature information and the pose feature information of the three-dimensional face image, whether the three-dimensional face image comes from a living person. The method can determine whether a three-dimensional face image comes from a living person, and is convenient, efficient, fast, accurate, and highly adaptable.

Description

Method and apparatus for face liveness detection
Technical field
The present invention relates to the fields of biometric recognition and image processing, and in particular to a method and apparatus for face liveness detection.
Background technology
With the rapid informatization of modern society, people enjoy the convenience that information technology brings while paying ever more attention to the security of personal information. Using a person's intrinsic biological characteristics to safeguard information and identity has gained wide acceptance and is gradually becoming the first choice for system login, identity authentication, and public security management. Among the many biometric technologies, face recognition is non-intrusive and contactless: it requires no cooperation from the user and can acquire and recognize a face image without the user touching any device or even being aware of it. After decades of development it has been widely deployed in public security surveillance, attendance and access control, border inspection, and similar settings. As its use spreads, however, the security weaknesses of existing face recognition systems have gradually become apparent: printed photos, photos or videos displayed on a phone or tablet, and the like can breach the security perimeter of a face recognition system, causing serious harm and threatening social stability. Adding effective face liveness detection to a face recognition system has therefore become the accepted way to close this security hole.
The face liveness detection methods in common use today follow an interactive strategy: the system instructs the user to perform a specified action (e.g., close the eyes or open the mouth) and checks whether the action is performed in order to decide whether a real face is present. This requires a high degree of user cooperation, gives a poor user experience, and lengthens the recognition process, so the non-intrusive character of face recognition is lost. Another approach adds sensing devices to the face recognition equipment, such as thermal infrared sensors or weight sensors, but such methods detect poorly and are easy for an attacker to defeat. In recent years, with the development of three-dimensional face recognition, three-dimensional face data has also been applied to liveness detection, but usually only the depth component of the data is used, typically by computing the maximum depth difference; the accuracy is low and the applicability is limited. A fast, accurate, convenient, and efficient face liveness detection method that overcomes these shortcomings is therefore urgently needed, and this is the technical problem that those skilled in the art must solve.
Summary of the invention
To overcome the deficiencies of the prior art, the object of the present invention is to provide a method and apparatus for face liveness detection that can conveniently and efficiently determine whether a three-dimensional face image comes from a living body, with fast detection speed, high accuracy, and strong adaptability.
The technical scheme provided by the invention is as follows:
In one aspect, a method of face liveness detection is provided, comprising:
acquiring a three-dimensional face image;
extracting, from the three-dimensional face image, the three-dimensional coordinate information of facial feature points and the pose feature information of the three-dimensional face image;
computing, from the three-dimensional coordinate information of the facial feature points, face feature information that characterizes the face;
using the face feature information and the pose feature information of the three-dimensional face image to determine whether the three-dimensional face image comes from a living body.
In another aspect, an apparatus for face liveness detection is provided, comprising:
an acquisition module for acquiring a three-dimensional face image;
an extraction module for extracting, from the three-dimensional face image, the three-dimensional coordinate information of facial feature points and the pose feature information of the three-dimensional face image;
a computation module for computing, from the three-dimensional coordinate information of the facial feature points, face feature information that characterizes the face;
a judgment module for determining, from the face feature information and the pose feature information of the three-dimensional face image, whether the three-dimensional face image comes from a living body.
The present invention has the following beneficial effects:
The invention can determine whether a three-dimensional face image comes from a living body. A three-dimensional face image is first acquired; the three-dimensional coordinate information of the facial feature points and the pose feature information of the image are then extracted; the face feature information characterizing the face is computed from the three-dimensional coordinates; finally, the face feature information and the pose feature information are used together to decide whether the three-dimensional face image comes from a living body.
The face liveness detection method of the invention is convenient and efficient: only the user's three-dimensional face image needs to be acquired, no cooperative action is required from the user, and the whole detection process is short and fast.
The method is also accurate: using both the face feature information and the pose feature information of the three-dimensional face image to make the decision gives higher accuracy and stronger adaptability than detection methods that rely solely on a single feature such as the maximum depth difference.
In summary, the face liveness detection method of the present invention can determine whether a three-dimensional face image comes from a living body; it is convenient and efficient, fast, accurate, and highly adaptable.
Brief description of the drawings
Fig. 1 is a flow chart of an embodiment of the face liveness detection method of the present invention;
Fig. 2 is a flow chart illustrating one example of the method embodiment;
Fig. 3 is a flow chart illustrating another example of the method embodiment;
Fig. 4 is a schematic diagram of one way of selecting facial feature points in Embodiment 1 of the present invention;
Fig. 5 is a schematic diagram of one way of selecting facial feature points in Embodiment 2 of the present invention;
Fig. 6 is a schematic diagram of one way of selecting facial feature points in Embodiment 3 of the present invention;
Fig. 7 is a schematic diagram of eye gaze tracking, in which Figs. 7a and 7b show gaze tracking for a live face and Figs. 7c and 7d show gaze tracking for a non-live face;
Fig. 8 is a schematic diagram of an embodiment of the face liveness detection apparatus of the present invention;
Fig. 9 is a schematic diagram illustrating one example of the apparatus embodiment;
Fig. 10 is a schematic diagram illustrating another example of the apparatus embodiment.
Detailed description of the embodiments
To make the technical problem to be solved, the technical scheme, and the advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
In one aspect, an embodiment of the present invention provides a method of face liveness detection which, as shown in Fig. 1, comprises:
Step 101: acquire a three-dimensional face image.
In this step, the three-dimensional face image is preferably acquired with a three-dimensional face image acquisition device.
Step 102: extract the three-dimensional coordinate information of facial feature points in the three-dimensional face image, and the pose feature information of the three-dimensional face image.
An ordinary two-dimensional face acquisition device captures a flat face image, from which only the two-dimensional coordinates of a feature point, i.e., its (x, y) values, can be obtained. A three-dimensional face image acquisition device captures a three-dimensional face image, from which the three-dimensional coordinates of a feature point can be obtained, comprising the horizontal, vertical, and depth coordinates, i.e., its (x, y, z) values; compared with a two-dimensional image, the depth coordinate z is added. For example, the three-dimensional face image acquisition device may consist of two infrared cameras and an infrared laser emitter, or of one infrared camera, one color camera, and an infrared laser emitter. Imitating the parallax principle of human binocular vision, it captures face images simultaneously, tracks the projected infrared pattern, and computes the depth coordinate of the three-dimensional face image by triangulation.
The three-dimensional coordinate information of the facial feature points extracted in this step is distance information in a three-dimensional coordinate space. The origin of the space is placed at one camera of the three-dimensional face image acquisition device, the positive z-axis points from the device toward the user, and the positive x- and y-axes follow a right-handed coordinate system. The feature points on the face image are converted into distances from the origin of this space, which yields the three-dimensional coordinate information of the facial feature points.
In this step, the pose feature information of the three-dimensional face image refers to the pose feature information of a single frame and/or the pose feature change between two frames, such as position offset, tilt angle, and rotation angle.
Step 103: compute, from the three-dimensional coordinate information of the facial feature points, face feature information that characterizes the face.
The face feature information refers to the face feature information of a single three-dimensional face image and/or the change in face feature information between two frames. A face image carries a great deal of feature information: it comprises several main regions such as the nose, eyes, mouth, cheeks, and eyebrows, each of which in turn consists of multiple feature points, so many three-dimensional coordinates are extracted, and changes in head pose or lighting conditions may introduce considerable interference and noise into these coordinates. If some raw three-dimensional coordinates were used directly for liveness detection, they might fail to characterize the face well, causing large deviations, false detections, and low accuracy. Computing face feature information from the three-dimensional coordinates of the feature points effectively removes interference and noise, so the face feature information characterizes the face better, avoids false detections, and improves detection accuracy. The face feature information may be a computed multi-dimensional feature vector or feature data in another form.
Step 104: use the face feature information characterizing the face and the pose feature information of the three-dimensional face image to determine whether the three-dimensional face image comes from a living body.
The embodiment of the present invention can determine whether a three-dimensional face image comes from a living body. A three-dimensional face image is first acquired; the three-dimensional coordinates of the facial feature points and the pose feature information of the image are then extracted; the face feature information characterizing the face is computed from the three-dimensional coordinates; finally, the face feature information and the pose feature information are used together to make the decision.
The face liveness detection method of this embodiment is convenient and efficient: only the user's three-dimensional face image needs to be acquired, the user need not perform any cooperative action, and the whole detection process is short and fast.
The method is accurate and adaptable. Computing face feature information from the three-dimensional coordinates of the feature points removes interference and noise, characterizes the face more accurately, avoids false detections, and improves detection accuracy; and making the decision from both the face feature information and the pose feature information gives higher accuracy and stronger adaptability than detection methods relying solely on a single feature such as the maximum depth difference.
In summary, the face liveness detection method of this embodiment can determine whether a three-dimensional face image comes from a living body; it is convenient and efficient, fast, accurate, and highly adaptable.
As one illustration of the above embodiment, as shown in Fig. 2, step 104 comprises:
Step 1041: merge the face feature information characterizing the face and the pose feature information of the three-dimensional face image into one joint feature;
In this step the face feature information and the pose feature information are merged into one joint feature, which may be a joint feature vector or joint feature data in another form. The joint feature is comprehensive and characterizes the face of the three-dimensional face image more accurately; being a composite feature, it is also more adaptable.
Step 1042: use the joint feature to determine whether the three-dimensional face image comes from a living body.
In this embodiment the face feature information and the pose feature information are first merged into one joint feature, and the joint feature is then used to decide whether the three-dimensional face image comes from a living body. Judging with the joint feature means treating the face feature information and the pose feature information as one whole, weighing both kinds of feature information at once in a comprehensive decision; the feature information is more complete, the adaptability stronger, and the accuracy higher.
As another illustration of the above embodiment, as shown in Fig. 3, step 104 comprises:
Step 1041': use the pose feature information of the three-dimensional face image to determine whether the three-dimensional face image comes from a living body; if so, proceed to the next step;
Step 1042': use the face feature information characterizing the face to determine whether the three-dimensional face image comes from a living body.
In this embodiment the pose feature information of the three-dimensional face image is used first: if the image does not pass, the final detection result is obtained directly and the image is judged to come from a non-living body; if it passes, the face feature information characterizing the face is then used for further detection. The method of this embodiment makes a preliminary judgment from the pose feature information, filtering out images that obviously do not come from a living body, and then makes a finer judgment from the face feature information; this effectively improves the efficiency of liveness detection and raises the detection speed and accuracy. This embodiment is only an example: the preliminary judgment may equally be made first with the face feature information and the further judgment with the pose feature information, and the order is not limited to that described above.
Preferably, in the embodiment of the present invention, the pose feature information of the three-dimensional face image comprises the yaw, pitch, and roll angles of the image, where the yaw angle is the rotation of the whole face image about the y-axis of the three-dimensional coordinate system, the pitch angle is the rotation about the x-axis, and the roll angle is the rotation about the z-axis.
Further, in the embodiment of the present invention, the facial feature points comprise multiple feature points in one or more of the eye, nose, and mouth regions of the face. Feature points of a face image are affected by ambient lighting and often include unstable noise points, which degrade the liveness judgment to some extent. The eye, nose, and mouth regions lie in the upper, middle, and lower parts of the face and represent its principal features, and the eyes and the nose are respectively the most recessed and the most protruding regions of the face, with good stability. Multiple feature points from one or more of these three regions are therefore preferred for characterizing the face, giving strong stability.
Further, in the embodiment of the present invention, the face feature information computed from the three-dimensional coordinates of the facial feature points may be one or several of a point-to-point distance, a point-to-plane distance, a line-to-plane angle, and a line-to-line angle, where a point is a feature point, a plane is a plane formed by feature points, and a line is a straight line through a pair of feature points. Alternatively, the face feature information may be the computed depth differences between feature points, eye gaze trace information, and so on.
The face liveness detection method of the present invention is described in detail below through three preferred embodiments:
Embodiment 1:
Step 1): turn on the three-dimensional face image acquisition device and acquire a three-dimensional face image.
Step 2): extract the three-dimensional coordinate information of the facial feature points of the three-dimensional face image, and the pose feature information of the image. The facial feature points are 15 feature points in the eye, nose, and mouth regions, as shown in Fig. 4, and the pose feature information is the yaw, pitch, and roll angles in the three-dimensional coordinate space.
Fig. 4 shows the labels of 78 feature points of the three-dimensional face image (these 78 feature points may be supplied directly by the acquisition device or computed from the image), denoted in turn Point0, Point1, ..., Point76, Point77, with three-dimensional coordinates denoted in turn (x_0, y_0, z_0), (x_1, y_1, z_1), ..., (x_76, y_76, z_76), (x_77, y_77, z_77).
The 78 feature points fall into 5 regions:
eyebrow region, 16 feature points: Point0, Point1, ..., Point9, Point70, ..., Point75;
eye region, 18 feature points: Point10, Point11, ..., Point25, Point76, Point77;
nose region, 7 feature points: Point26, Point27, ..., Point32;
mouth region, 20 feature points: Point33, Point34, ..., Point52;
cheek region, 17 feature points: Point53, Point54, ..., Point69.
We find that the region that best characterizes a live face is the nose, followed by the eyes and the mouth, and last the eyebrow and cheek regions. A total of 15 feature points from the nose, eye, and mouth regions are therefore preferred (marked with black circles in Fig. 4), and their three-dimensional coordinates are extracted. The 15 chosen points are the 6 eye-region points Point10, Point14, Point18, Point22, Point76, Point77; the 7 nose-region points Point26, Point27, Point28, Point29, Point30, Point31, Point32; and the 2 mouth-region points Point33 and Point39. Their three-dimensional coordinates are denoted in turn (x_10, y_10, z_10), (x_14, y_14, z_14), ..., (x_77, y_77, z_77). A triangle is constructed with feature points Point31, Point76, and Point77 as vertices, and the three-dimensional coordinates of these three points, (x_31, y_31, z_31), (x_76, y_76, z_76), (x_77, y_77, z_77), are obtained.
The pose feature information of the three-dimensional face image may be supplied directly by the acquisition device or computed from the image; it is denoted (yaw, pitch, roll), with values in degrees (°).
Step 3): use the three-dimensional coordinates of the 7 nose-region, 6 eye-region, and 2 mouth-region feature points from step 2) to compute a 21-dimensional feature representing the current face.
The computation proceeds as follows:
First, a plane β is determined through the three feature points Point26, Point30, and Point32 by the least-squares method:
z = a*x + b*y + c.
The coefficients a, b, c are computed as follows. Let
A = [x_26 y_26 1; x_30 y_30 1; x_32 y_32 1], X = [a; b; c], Z = [z_26; z_30; z_32],
i.e., A*X = Z.
Solving in Matlab, e.g., X = A\Z, or equivalently X = (A^T A)^(-1) A^T Z, gives the three coefficients of plane β.
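For illustration, this plane fit is a few lines of Python with NumPy; a minimal sketch assuming the 78 feature points are stacked in a (78, 3) array `pts` of (x, y, z) rows (the array layout and function names are illustrative, not from the patent).

```python
import numpy as np

def fit_plane(pts, idx=(26, 30, 32)):
    """Fit z = a*x + b*y + c through the given feature points by least squares."""
    P = pts[list(idx)]
    A = np.column_stack([P[:, 0], P[:, 1], np.ones(len(P))])  # rows [x y 1]
    Z = P[:, 2]
    X, *_ = np.linalg.lstsq(A, Z, rcond=None)  # X = (A^T A)^(-1) A^T Z
    a, b, c = X
    return a, b, c
```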
Next, the distances from feature points Point27, Point28, Point29, and Point31 to plane β are computed; for a point (x_i, y_i, z_i), the distance to plane β is
|a*x_i + b*y_i - z_i + c| / sqrt(a^2 + b^2 + 1).
Denoting the distance from the i-th feature point to the j-th feature point as dist_{i,j}, with
dist_{i,j} = sqrt((x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2),
we then compute the distances between Point29 and each of Point10, Point14, Point18, Point22, Point26, Point30, Point32, and between Point31 and each of Point26, Point30, Point32, giving 10 feature values in total:
dist_{29,10}, dist_{29,14}, dist_{29,18}, dist_{29,22}, dist_{29,26}, dist_{29,30}, dist_{29,32}, dist_{31,26}, dist_{31,30}, dist_{31,32}.
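Continuing the same sketch (same `pts` assumption, with `a, b, c` from `fit_plane` above), the point-to-plane and point-to-point distances are:

```python
import numpy as np

def point_plane_dist(p, a, b, c):
    """Distance from point p = (x, y, z) to the plane z = a*x + b*y + c."""
    return abs(a * p[0] + b * p[1] - p[2] + c) / np.sqrt(a**2 + b**2 + 1)

def point_point_dist(p, q):
    """Euclidean distance dist_{i,j} between two feature points."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def distance_features(pts):
    """The 10 point-to-point distance features listed above."""
    pairs = [(29, 10), (29, 14), (29, 18), (29, 22), (29, 26),
             (29, 30), (29, 32), (31, 26), (31, 30), (31, 32)]
    return [point_point_dist(pts[i], pts[j]) for i, j in pairs]
```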
Then the sines of the angles between plane β and ten lines through the feature points are computed:
let L1 be the line determined by the 29th and 26th feature points,
L2 the line determined by the 29th and 30th feature points,
L3 the line determined by the 29th and 32nd feature points,
L4 the line determined by the 28th and 26th feature points,
L5 the line determined by the 28th and 30th feature points,
L6 the line determined by the 28th and 32nd feature points,
L7 the line determined by the 31st and 26th feature points,
L8 the line determined by the 31st and 30th feature points,
L9 the line determined by the 31st and 32nd feature points.
The sines of the angles between L1, ..., L9 and plane β are denoted sin_L1, ..., sin_L9 respectively; for a line with direction vector v and plane normal n = (a, b, -1), the sine of the angle is |v·n| / (|v||n|). The sine of the angle between plane β and the line determined by the 29th and 28th feature points is computed in the same way and denoted sin_L10.
This yields another 10 feature values: [sin_L1, sin_L2, ..., sin_L10].
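A sketch of these 10 sine features under the same assumptions; the (i, j) pairs follow the lines L1 to L10 as defined above.

```python
import numpy as np

def line_plane_sine(pts, i, j, a, b, c):
    """sin of the angle between line Point_i-Point_j and the plane z = a*x + b*y + c."""
    v = pts[i] - pts[j]          # direction vector of the line
    n = np.array([a, b, -1.0])   # normal vector of the plane
    return abs(v @ n) / (np.linalg.norm(v) * np.linalg.norm(n))

def sine_features(pts, a, b, c):
    lines = [(29, 26), (29, 30), (29, 32),   # L1, L2, L3
             (28, 26), (28, 30), (28, 32),   # L4, L5, L6
             (31, 26), (31, 30), (31, 32),   # L7, L8, L9
             (29, 28)]                       # L10
    return [line_plane_sine(pts, i, j, a, b, c) for i, j in lines]
```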
Next the angle θ between the line formed by feature points Point31 and Point76 and the line formed by Point31 and Point77 (the two edges of the triangle constructed in step 2)) is computed from their direction vectors:
(x_31 - x_76, y_31 - y_76, z_31 - z_76) and
(x_77 - x_31, y_77 - y_31, z_77 - z_31).
The angle θ distinguishes live faces from non-live faces well, particularly against faking with a bent printed photo. Because of the rigid structure of a real face, the angle θ of a live face remains essentially unchanged over a certain range of poses, whereas the angle θ of a bent photo changes considerably as the degree of bending increases.
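A sketch of θ as the apex angle of the Point31-Point76-Point77 triangle, using the direction vectors above; returning degrees is an assumption, as the patent does not state the unit here.

```python
import numpy as np

def apex_angle(pts):
    """Angle between lines Point31-Point76 and Point31-Point77, in degrees."""
    v1 = pts[31] - pts[76]
    v2 = pts[77] - pts[31]
    cos_t = (v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
```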
Finally, the values computed above are assembled into one feature vector, giving a 10 + 10 + 1 = 21-dimensional face feature.
Step 4): merge the face feature information and pose feature information of steps 2) and 3) into a 24-dimensional joint feature.
The pose information (yaw, pitch, roll) of the current three-dimensional face image obtained in step 2) is combined with the 21-dimensional face feature computed in step 3) to obtain a 24-dimensional joint feature, i.e., [sign_d_1, sign_d_2, sign_d_3, distance_1, ..., θ, yaw, pitch, roll], which represents one face.
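One way to assemble the 24 dimensions is sketched below; the patent's own listing names sign_d and distance components it does not fully spell out, so the ordering here (10 distances, 10 sines, θ, pose) is an assumption that merely matches the 10 + 10 + 1 + 3 = 24 count.

```python
import numpy as np

def joint_feature(dists, sines, theta, yaw, pitch, roll):
    """Stack the step-3) features with the pose triple: 10 + 10 + 1 + 3 = 24 dims."""
    return np.concatenate([dists, sines, [theta, yaw, pitch, roll]])
```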
Step 5): use the 24-dimensional joint feature representing the face obtained in step 4) to determine whether the three-dimensional face image comes from a living body.
The 24-dimensional joint feature is fed into a trained SVM classifier, and the output determines liveness: an output of +1 means a living body, and an output of -1 means a non-living body.
In the field of machine learning, an SVM (support vector machine) is a supervised learning model commonly used for pattern recognition, classification, and regression analysis; it is most often applied to two-class problems.
Feature data of nearly 30,000 live and non-live faces were collected and computed, and the classifier was trained with Matlab's SVM training function svmtrain. Among these data, 16,000 were training samples (6,000 live, 10,000 non-live) and 12,500 were test samples (4,000 live, 8,500 non-live); a real face is labeled +1 and a fake face -1. The best parameters were chosen during training: in the parameters of Matlab's svmtrain, a Gaussian kernel function was selected, with sigma = 4.
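For illustration only, here is a minimal Python sketch of this step using scikit-learn's SVC in place of Matlab's svmtrain; the toy data and the mapping of the kernel width sigma = 4 to gamma = 1/(2·sigma²) are assumptions, not part of the patent.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in data: in the patent, X holds 24-dim joint features and
# y is +1 for a live face, -1 for a non-live face.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 24))
y_train = np.where(rng.random(100) < 0.5, 1, -1)

sigma = 4.0  # Gaussian kernel width reported in the patent
clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma**2))
clf.fit(X_train, y_train)

def is_live(feature_24d):
    """+1 from the classifier means live, -1 means non-live."""
    return int(clf.predict(np.asarray(feature_24d).reshape(1, -1))[0]) == 1
```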
The face feature information computed from the three-dimensional coordinate information of the facial feature points may be a point-to-point distance, a point-to-plane distance, a line-to-plane angle, or a line-to-line angle, where a point is a feature point, a plane is a plane formed by feature points, and a line is a straight line through a pair of feature points; one or more of these may serve as the face feature information. Embodiment 1 uses point-to-point distances, point-to-plane distances, line-to-plane angles, and a line-to-line angle together as the face feature information, which makes the face feature information richer, characterizes the face better, and improves detection accuracy.
The face liveness detection method of Embodiment 1 computes the face feature information from 15 selected feature points of the face image regions. This is only an example: the feature points are not limited to the 15 above and may be more or fewer, nor is the selection limited to the method described above.
Embodiment 2:
Step 1): turn on the three-dimensional face image acquisition device and acquire a three-dimensional face image.
Step 2): extract the three-dimensional coordinate information of the facial feature points of the three-dimensional face image, and the pose feature information of the image. The facial feature points are 8 feature points of the eye and nose regions, as shown in Fig. 5, and the pose feature information is the yaw, pitch, and roll angles in the three-dimensional coordinate space.
Because the eyes and the nose are respectively the most recessed and the most protruding regions of the face, they are very stable and reflect the three-dimensional character of a three-dimensional face image well. The 6 eye-region feature points Point10, Point14, Point18, Point22, Point76, Point77 and the 2 nose-region feature points Point26, Point29 are therefore preferred; their three-dimensional coordinates are denoted in turn
(x_10, y_10, z_10), (x_14, y_14, z_14), ..., (x_77, y_77, z_77).
The pose feature information of the three-dimensional face image may be supplied directly by the acquisition device or computed from the image; it is denoted (yaw, pitch, roll), with values in degrees (°).
Step 3): use the three-dimensional coordinates of the 6 eye-region and 2 nose-region feature points from step 2) to compute a 7-dimensional face feature representing the current face.
First, take the depth coordinate from the three-dimensional coordinates of each of the 8 feature points Point10, Point14, Point18, Point26, Point22, Point76, Point77, Point29, i.e., the z value of each point:
Point10 → z_10, Point14 → z_14, Point18 → z_18, Point26 → z_26,
Point22 → z_22, Point76 → z_76, Point77 → z_77, Point29 → z_29.
Then compute the differences between the depth coordinate of Point29 and the depth coordinates of the other 7 feature points:
dist_{29,10} = z_29 - z_10, dist_{29,14} = z_29 - z_14,
dist_{29,18} = z_29 - z_18, dist_{29,26} = z_29 - z_26,
dist_{29,22} = z_29 - z_22, dist_{29,76} = z_29 - z_76, dist_{29,77} = z_29 - z_77.
This yields 7 depth-difference feature values: [dist_{29,10}, dist_{29,14}, dist_{29,18}, dist_{29,26}, dist_{29,22}, dist_{29,76}, dist_{29,77}].
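A sketch of these depth-difference features, under the same (78, 3) `pts` array assumption as in Embodiment 1:

```python
import numpy as np

def depth_diff_features(pts):
    """The 7 depth differences z_29 - z_i listed above."""
    ref = pts[29, 2]  # depth (z) of the nose-region reference point Point29
    others = [10, 14, 18, 26, 22, 76, 77]
    return np.array([ref - pts[i, 2] for i in others])
```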
Step 4): merge the face feature information and pose feature information of steps 2) and 3) into a 10-dimensional joint feature.
The pose information (yaw, pitch, roll) of the current three-dimensional face image obtained in step 2) is combined with the 7-dimensional face feature computed in step 3) to obtain a 10-dimensional joint feature, i.e., [dist_{29,10}, dist_{29,14}, dist_{29,18}, dist_{29,26}, dist_{29,22}, dist_{29,76}, dist_{29,77}, yaw, pitch, roll], which represents one face.
Step 5): use the 10-dimensional joint feature representing the face obtained in step 4) to determine whether the three-dimensional face image comes from a living body.
The 10-dimensional joint feature is fed into a trained SVM classifier, and the output determines liveness: an output of +1 means a living body, and an output of -1 means a non-living body.
The face liveness detection method of Embodiment 2 computes the face feature information from 8 selected feature points of the face image regions. This is only an example: the feature points are not limited to the 8 above and may be more or fewer, nor is the selection limited to the method described above.
Embodiment 3:
Step 1): turn on the three-dimensional face image acquisition device and acquire a three-dimensional face image.
Step 2): extract the three-dimensional coordinate information of the facial feature points of the three-dimensional face image, and the pose feature information of the image. The facial feature points are 2 feature points of the eye region, namely the pupil feature points, as shown in Fig. 6, and the pose feature information is the yaw, pitch, and roll angles in the three-dimensional coordinate space.
Even while the head pose remains unchanged, the pupils of a face generally still move, for example as the gaze shifts; pupil feature points therefore have good dynamic variation characteristics and can reflect liveness. The 2 pupil feature points of the eye region, Point76 and Point77, are therefore preferred; their three-dimensional coordinates are denoted in turn
(x_76, y_76, z_76), (x_77, y_77, z_77).
The pose feature information of the three-dimensional face image may be supplied directly by the acquisition device or computed from the image; it is denoted (yaw, pitch, roll), with values in degrees (°).
Step 3): use the three-dimensional coordinates of the 2 eye-region feature points from step 2) to compute a three-dimensional feature representing the eye gaze trace information.
Since the gaze direction differs as the pupil position changes, the change in the position of the pupil centers can be used to judge the gaze direction. First, assuming the head is still, the pupil positions in a captured frontal face image are taken as the reference data for the pupil positions; when the head rotates, a coordinate compensation mechanism must also be established.
The three-dimensional coordinates of the 2 eye-region feature points Point76 and Point77 are:
Point76 → (x_76, y_76, z_76), Point77 → (x_77, y_77, z_77).
The eye gaze trace information is computed from the three-dimensional coordinates of feature points Point76 and Point77 and expressed as (GazeX, GazeY, Gazeθ), where GazeX is the horizontal coordinate of the gaze position mapped into the three-dimensional face image by a coordinate transform, GazeY is the corresponding vertical coordinate, and Gazeθ is the corresponding gaze angle. The coordinate transform maps the three-dimensional coordinate information of the three-dimensional coordinate space (distance information relative to the origin) onto the two-dimensional face image, whose coordinate system takes the top-left corner as the origin; for a 2-megapixel image sensor, the horizontal coordinate is at most 1920 and the vertical coordinate at most 1080.
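The patent does not spell this transform out further; the sketch below is one plausible instantiation under an assumed pinhole camera model, with hypothetical intrinsics (FX, FY, CX, CY) for a 1920x1080 sensor, and is not the patent's own mapping.

```python
import numpy as np

FX, FY, CX, CY = 1400.0, 1400.0, 960.0, 540.0  # assumed camera intrinsics

def project(p):
    """Map a 3-D point (x, y, z) in camera space to image pixels (u, v)."""
    x, y, z = p
    return FX * x / z + CX, FY * y / z + CY

def gaze_info(pts):
    """(GazeX, GazeY, GazeTheta) from the two pupil points; one possible reading."""
    u76, v76 = project(pts[76])
    u77, v77 = project(pts[77])
    gaze_x, gaze_y = (u76 + u77) / 2.0, (v76 + v77) / 2.0
    gaze_theta = float(np.degrees(np.arctan2(v77 - v76, u77 - u76)))
    return gaze_x, gaze_y, gaze_theta
```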
Step 4): use the pose feature information of the three-dimensional face image from step 2) to determine whether the three-dimensional face image comes from a living body; if so, proceed to the next step.
The pose feature information (yaw, pitch, roll) of the three-dimensional face image is first used for a preliminary liveness judgment; here the pose feature information is the pose feature change between two frames of the face image. While the three-dimensional face image of a live face is being acquired, the face cannot stay completely motionless; there is always slight movement, so the pose feature information of the image changes as well. A non-live face such as a printed photo shows essentially no pose change, so its pose feature information does not change either. The pose feature information therefore supports a preliminary liveness judgment: if the image fails it, the final non-live detection result is obtained directly; if it passes, the process moves to step 5) for further judgment. This scheme improves detection efficiency.
The pose feature information (yaw, pitch, roll) of the three-dimensional face image is fed into a trained SVM classifier, and the output determines liveness. An output of -1 means a non-living body; an output of +1 means a living body, in which case the change differences of the pose feature information, namely the yaw difference, pitch difference, and roll difference, are also output.
Step 5): use the three-dimensional feature of the eye gaze trace information obtained in step 3) to determine whether the three-dimensional face image comes from a living body.
The eye gaze trace information is the change information of the eye gaze position, i.e., the change difference of the gaze trace information between two frames of the three-dimensional face image. Extensive experiments show that, because a living body is flexible, when the pose changes little, i.e., (yaw, pitch, roll) varies only slightly, the gaze trace information can change greatly, i.e., (GazeX, GazeY, Gazeθ) varies by a relatively large margin; and when the pose changes a lot, i.e., (yaw, pitch, roll) varies considerably, the gaze trace information can remain almost unchanged. A non-live face such as a printed photo behaves differently: when (yaw, pitch, roll) varies only slightly, its gaze trace information does not change or changes only slightly, and when (yaw, pitch, roll) varies considerably, its gaze trace information is bound to change greatly, as illustrated in Fig. 7. The pose of the live face in Fig. 7a is (yaw, pitch, roll) = (2, -9, 0) with gaze trace information (GazeX, GazeY, Gazeθ) = (805, 585, -16); its gaze falls in the lower-right corner of the image, as shown by the rectangle. The pose of the live face in Fig. 7b is (2, -10, 0) with gaze trace information (450, 567, 45); its gaze falls in the lower-left region of the image, as shown by the rectangle. Comparing the two, the pose of the live face changes only slightly (a small change in the pitch angle), while its gaze trace information changes greatly, moving from the lower right in Fig. 7a to the lower left in Fig. 7b. The pose of the non-live face in Fig. 7c is (4, -12, -1) with gaze trace position (1068, 564, 27), as shown by the rectangle; the pose of the non-live face in Fig. 7d is (-6, -12, 10) with gaze trace position (1093, 178, 72), as shown by the rectangle. Comparing the two, the pose of the non-live face changes greatly (the yaw, pitch, and roll angles all change considerably), and its gaze trace information changes greatly as well between Fig. 7c and Fig. 7d.
Based on the above, the classifier is trained for two cases. In the first case the pose (yaw, pitch, roll) changes only slightly, i.e., the yaw, pitch, and roll differences between the two frames of the three-dimensional face image stay within a certain range. A threshold K1 is set (K1 is a parameter obtained when training classifier C1): if the gaze trace difference between the two frames, (GazeX1 - GazeX2, GazeY1 - GazeY2, Gazeθ1 - Gazeθ2), is greater than K1, the face is live, otherwise non-live, and classifier C1 is trained accordingly on positive and negative samples. In the second case the pose (yaw, pitch, roll) changes greatly, i.e., the yaw, pitch, and roll differences between the two frames exceed a certain range. A threshold K2 is set (K2 is a parameter obtained when training classifier C2): if the gaze trace difference between the two frames is less than K2, the face is live, otherwise non-live, and classifier C2 is trained accordingly on positive and negative samples. The two classifiers are combined by logical combination into a combined classifier.
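The following sketch illustrates only the two-case decision logic, not the trained classifiers themselves; the pose-change bound and the thresholds K1 and K2 are hypothetical stand-ins for the trained parameters.

```python
import numpy as np

POSE_RANGE = 3.0      # assumed bound between "small" and "large" pose change, degrees
K1, K2 = 50.0, 120.0  # hypothetical stand-ins for the trained thresholds

def is_live_gaze(pose1, pose2, gaze1, gaze2):
    pose_diff = np.abs(np.subtract(pose1, pose2))          # |d yaw|, |d pitch|, |d roll|
    gaze_diff = np.linalg.norm(np.subtract(gaze1, gaze2))  # magnitude of gaze change
    if np.all(pose_diff <= POSE_RANGE):
        return bool(gaze_diff > K1)  # case 1 (C1): small pose change, gaze must move a lot
    return bool(gaze_diff < K2)      # case 2 (C2): large pose change, gaze must move little
```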
The three-dimensional feature of the eye gaze trace information of the three-dimensional face image is fed into the combined classifier, and the output determines liveness: an output of -1 means a non-living body, and an output of +1 means a living body.
Using the three-dimensional feature of the current eye gaze trace information for this further liveness judgment effectively improves the efficiency of liveness detection and raises the detection speed and accuracy.
In another aspect, an embodiment of the present invention provides an apparatus for face liveness detection which, as shown in Fig. 8, comprises:
an acquisition module 11 for acquiring a three-dimensional face image;
an extraction module 12 for extracting the three-dimensional coordinate information of facial feature points in the three-dimensional face image, and the pose feature information of the image;
a computation module 13 for computing, from the three-dimensional coordinates of the facial feature points, face feature information that characterizes the face;
a judgment module 14 for determining, from the face feature information and the pose feature information of the three-dimensional face image, whether the three-dimensional face image comes from a living body.
The face liveness detection apparatus of this embodiment can determine whether a three-dimensional face image comes from a living body; it is convenient and efficient, fast, accurate, and highly adaptable.
As one illustration of the above embodiment, as shown in Fig. 9, the judgment module 14 further comprises:
a merging unit 141 for merging the face feature information and the pose feature information of the three-dimensional face image into one joint feature;
a first judgment unit 142 for using the joint feature to determine whether the three-dimensional face image comes from a living body.
In this apparatus the merging unit merges the face feature information and the pose feature information into one joint feature, and the first judgment unit then uses the joint feature to decide whether the three-dimensional face image comes from a living body; the feature information is comprehensive, the adaptability strong, and the accuracy high.
As another illustration of the above embodiment, as shown in Fig. 10, the judgment module 14 further comprises:
a second judgment unit 141' for using the pose feature information of the three-dimensional face image to determine whether the three-dimensional face image comes from a living body and, if so, passing control to the third judgment unit;
a third judgment unit 142' for using the face feature information characterizing the face to determine whether the three-dimensional face image comes from a living body.
In this apparatus the second judgment unit first uses the pose feature information for a liveness judgment: if the image fails, the final detection result is obtained directly; if it passes, the third judgment unit then uses the face feature information for further detection. The second judgment unit thus makes a preliminary judgment that filters out images obviously not from a living body, after which the third judgment unit makes a finer judgment; this effectively improves the efficiency of liveness detection and raises the detection speed and accuracy. This embodiment is only an example: the second judgment unit may instead make the preliminary judgment with the face feature information and the third judgment unit the further judgment with the pose feature information; the arrangement is not limited to that described above.
The above are preferred embodiments of the present invention. It should be pointed out that those skilled in the art can make several improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A method of face liveness detection, characterized by comprising:
acquiring a three-dimensional face image;
extracting, from the three-dimensional face image, the three-dimensional coordinate information of facial feature points and the pose feature information of the three-dimensional face image;
computing, from the three-dimensional coordinate information of the facial feature points, face feature information that characterizes the face;
using the face feature information and the pose feature information of the three-dimensional face image to determine whether the three-dimensional face image comes from a living body.
2. The method of face liveness detection according to claim 1, characterized in that the step of using the face feature information and the pose feature information of the three-dimensional face image to determine whether the three-dimensional face image comes from a living body comprises:
merging the face feature information and the pose feature information of the three-dimensional face image into one joint feature;
using the joint feature to determine whether the three-dimensional face image comes from a living body.
3. The method of face liveness detection according to claim 1, characterized in that the step of using the face feature information and the pose feature information of the three-dimensional face image to determine whether the three-dimensional face image comes from a living body comprises:
using the pose feature information of the three-dimensional face image to determine whether the three-dimensional face image comes from a living body, and if so proceeding to the next step;
using the face feature information characterizing the face to determine whether the three-dimensional face image comes from a living body.
4. The method of face liveness detection according to any one of claims 1-3, characterized in that the pose feature information of the three-dimensional face image comprises the yaw, pitch, and roll angles of the three-dimensional face image.
5. The method of face liveness detection according to any one of claims 1-3, characterized in that the facial feature points comprise multiple feature points in one or more of the eye, nose, and mouth regions of the face.
6. The method of face liveness detection according to any one of claims 1-3, characterized in that the face feature information computed from the three-dimensional coordinate information of the facial feature points is one or several of a point-to-point distance, a point-to-plane distance, a line-to-plane angle, and a line-to-line angle, wherein said point is a feature point, said plane is a plane formed by feature points, and said line is a straight line through a pair of feature points.
7. The method of face liveness detection according to any one of claims 1-3, characterized in that the face feature information computed from the three-dimensional coordinate information of the facial feature points is the depth differences between feature points.
8. The method of face liveness detection according to any one of claims 1-3, characterized in that the face feature information computed from the three-dimensional coordinate information of the facial feature points is eye gaze trace information.
9. A face liveness detection device, comprising:
an acquisition module, configured to acquire a three-dimensional face image;
an extraction module, configured to extract, from the three-dimensional face image, the three-dimensional coordinate information of facial feature points and the pose feature information of the three-dimensional face image;
a computation module, configured to calculate, using the three-dimensional coordinate information of the facial feature points, facial feature information representing facial features;
a judgment module, configured to determine, using the facial feature information representing facial features and the pose feature information of the three-dimensional face image, whether the three-dimensional face image comes from a living body.
10. The face liveness detection device according to claim 9, wherein the judgment module comprises:
a merging unit, configured to merge the facial feature information representing facial features and the pose feature information of the three-dimensional face image into joint feature information;
a first judging unit, configured to determine, using the joint feature information, whether the three-dimensional face image comes from a living body.
11. The face liveness detection device according to claim 9, wherein the judgment module comprises:
a second judging unit, configured to determine, using the pose feature information of the three-dimensional face image, whether the three-dimensional face image comes from a living body, and if so, to proceed to the third judging unit;
a third judging unit, configured to determine, using the facial feature information representing facial features, whether the three-dimensional face image comes from a living body.
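
The claims above lend themselves to short illustrations. First, the feature fusion of claims 2 and 10: a minimal sketch, assuming the geometric face features and the (yaw, pitch, roll) pose features are already available as numeric vectors. The SVM classifier, scikit-learn, and every name below are illustrative assumptions, not part of the patent.

    import numpy as np
    from sklearn.svm import SVC

    def fuse_features(face_features, pose_features):
        # Claims 2/10: concatenate the geometric face features and the
        # (yaw, pitch, roll) pose features into one joint feature vector.
        return np.concatenate([np.asarray(face_features, dtype=float),
                               np.asarray(pose_features, dtype=float)])

    # Hypothetical training data: each row is a joint vector; 1 = live, 0 = spoof.
    X = np.stack([fuse_features([12.1, 8.3], [2.0, -1.5, 0.4]),
                  fuse_features([0.6, 0.2], [1.8, -0.9, 0.2])])
    y = np.array([1, 0])

    clf = SVC(kernel="rbf").fit(X, y)
    is_live = clf.predict([fuse_features([11.7, 7.9], [2.2, -1.1, 0.3])])[0] == 1

Any binary classifier could stand in for the SVM; what the claim fixes is only that the two feature groups are judged jointly rather than one after the other.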
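
Claims 3 and 11 instead describe a two-stage cascade. A minimal sketch, assuming a pose screen runs first and the geometric-feature test runs only if it passes; the 30-degree threshold and the predicate passed in are hypothetical.

    def passes_pose_check(yaw, pitch, roll, limit_deg=30.0):
        # Stage 1: accept only poses within an assumed +/- limit_deg window.
        return all(abs(angle) <= limit_deg for angle in (yaw, pitch, roll))

    def cascaded_liveness(pose, face_features, features_indicate_live):
        # Stage 2 runs only when stage 1 passes, mirroring the claim's
        # "if so, proceeding to the next step".
        if not passes_pose_check(*pose):
            return False
        return features_indicate_live(face_features)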
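
The patent does not say how the three angles of claim 4 are obtained. One common route, shown purely as an assumption, is to decompose a 3x3 head rotation matrix R (ZYX Euler convention) estimated during feature-point extraction:

    import numpy as np

    def yaw_pitch_roll(R):
        # ZYX Euler decomposition of a rotation matrix, returned in degrees.
        pitch = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))  # clip guards numeric drift
        yaw = np.arctan2(R[1, 0], R[0, 0])
        roll = np.arctan2(R[2, 1], R[2, 2])
        return np.degrees([yaw, pitch, roll])

    print(yaw_pitch_roll(np.eye(3)))  # identity rotation -> all three angles are 0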
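
The four geometric quantities named in claim 6 follow directly from 3D feature-point coordinates. A sketch with numpy, taking each point as a 3-element array; which feature points to plug in is left open by the claim:

    import numpy as np

    def point_point_distance(p, q):
        return float(np.linalg.norm(p - q))

    def point_plane_distance(p, a, b, c):
        # Distance from point p to the plane through feature points a, b, c.
        n = np.cross(b - a, c - a)
        return float(abs(np.dot(p - a, n)) / np.linalg.norm(n))

    def line_plane_angle(p1, p2, a, b, c):
        # Angle (degrees) between line p1-p2 and the plane through a, b, c.
        d, n = p2 - p1, np.cross(b - a, c - a)
        s = abs(np.dot(d, n)) / (np.linalg.norm(d) * np.linalg.norm(n))
        return float(np.degrees(np.arcsin(np.clip(s, 0.0, 1.0))))

    def line_line_angle(p1, p2, q1, q2):
        # Angle (degrees) between lines p1-p2 and q1-q2.
        u, v = p2 - p1, q2 - q1
        c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))

These quantities are discriminative because a flat photograph has no relief: the nose tip of a printed face lies almost in the plane spanned by the eye and mouth corners, so its point-to-plane distance collapses toward zero, while a live face yields a clearly nonzero value.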
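
Claim 7's depth difference reduces to a comparison of z coordinates, assuming the z axis runs along the camera's optical axis:

    def depth_difference(p, q):
        # Depth gap between two 3D feature points; near zero for every point
        # pair on a flat printed or on-screen face, clearly positive on a
        # live face (e.g. nose tip vs. eye corner).
        return abs(p[2] - q[2])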
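
Finally, the module decomposition of device claims 9-11 can be made concrete with a structural sketch; the four callables stand for the acquisition, extraction, computation and judgment modules, and all names are illustrative rather than the patent's own API:

    class FaceLivenessDetector:
        def __init__(self, acquire, extract, compute, judge):
            self.acquire = acquire   # acquisition module (claim 9)
            self.extract = extract   # extraction module
            self.compute = compute   # computation module
            self.judge = judge       # judgment module (claims 10/11 refine this)

        def run(self):
            image = self.acquire()               # obtain the 3D face image
            points, pose = self.extract(image)   # 3D feature points + pose info
            features = self.compute(points)      # geometric face features
            return self.judge(features, pose)    # live-body decision
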
CN201610048002.7A 2016-01-25 2016-01-25 Method and device for detecting living human face Active CN105574518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610048002.7A CN105574518B (en) 2016-01-25 2016-01-25 Method and device for detecting living human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610048002.7A CN105574518B (en) 2016-01-25 2016-01-25 Method and device for detecting living human face

Publications (2)

Publication Number Publication Date
CN105574518A true CN105574518A (en) 2016-05-11
CN105574518B CN105574518B (en) 2020-02-21

Family

ID=55884626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610048002.7A Active CN105574518B (en) 2016-01-25 2016-01-25 Method and device for detecting living human face

Country Status (1)

Country Link
CN (1) CN105574518B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100141770A1 (en) * 2008-12-05 2010-06-10 Sony Corporation Imaging apparatus and imaging method
CN102054291A (en) * 2009-11-04 2011-05-11 厦门市美亚柏科信息股份有限公司 Method and device for reconstructing three-dimensional face based on single face image
CN105138967A (en) * 2015-08-05 2015-12-09 三峡大学 Living body detection method and apparatus based on active state of human eye region
CN105023010A (en) * 2015-08-17 2015-11-04 中国科学院半导体研究所 Face living body detection method and system
CN105224924A (en) * 2015-09-29 2016-01-06 小米科技有限责任公司 Living body faces recognition methods and device
CN105260726A (en) * 2015-11-11 2016-01-20 杭州海量信息技术有限公司 Interactive video in vivo detection method based on face attitude control and system thereof

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106066696B (en) * 2016-06-08 2019-05-14 华南理工大学 Sight tracing under natural light based on projection mapping correction and blinkpunkt compensation
CN106066696A (en) * 2016-06-08 2016-11-02 华南理工大学 The sight tracing compensated based on projection mapping correction and point of fixation under natural light
CN107590429A (en) * 2017-07-20 2018-01-16 阿里巴巴集团控股有限公司 The method and device verified based on eyeprint feature
CN107580268A (en) * 2017-08-04 2018-01-12 歌尔科技有限公司 A kind of head pose detection method, device and earphone
CN107454335A (en) * 2017-08-31 2017-12-08 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN108171158A (en) * 2017-12-27 2018-06-15 北京迈格威科技有限公司 Biopsy method, device, electronic equipment and storage medium
CN108171158B (en) * 2017-12-27 2022-05-17 北京迈格威科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN107958236A (en) * 2017-12-28 2018-04-24 深圳市金立通信设备有限公司 The generation method and terminal of recognition of face sample image
CN108171211A (en) * 2018-01-19 2018-06-15 百度在线网络技术(北京)有限公司 Biopsy method and device
CN108062544A (en) * 2018-01-19 2018-05-22 百度在线网络技术(北京)有限公司 For the method and apparatus of face In vivo detection
CN108446638A (en) * 2018-03-21 2018-08-24 广东欧珀移动通信有限公司 Auth method, device, storage medium and electronic equipment
CN108446638B (en) * 2018-03-21 2021-08-24 Oppo广东移动通信有限公司 Identity authentication method and device, storage medium and electronic equipment
WO2019200574A1 (en) * 2018-04-18 2019-10-24 深圳阜时科技有限公司 Identity authentication method, identity authentication device, and electronic apparatus
WO2019200572A1 (en) * 2018-04-18 2019-10-24 深圳阜时科技有限公司 Identity authentication method, identity authentication device, and electronic apparatus
WO2019200571A1 (en) * 2018-04-18 2019-10-24 深圳阜时科技有限公司 Identity authentication method, identity authentication device, and electronic apparatus
CN108960097A (en) * 2018-06-22 2018-12-07 维沃移动通信有限公司 A kind of method and device obtaining face depth information
US11475708B2 (en) 2018-08-10 2022-10-18 Zhejiang Uniview Technologies Co., Ltd. Face feature point detection method and device, equipment and storage medium
WO2020029572A1 (en) * 2018-08-10 2020-02-13 浙江宇视科技有限公司 Human face feature point detection method and device, equipment and storage medium
CN109190522A (en) * 2018-08-17 2019-01-11 浙江捷尚视觉科技股份有限公司 A kind of biopsy method based on infrared camera
CN109190528A (en) * 2018-08-21 2019-01-11 厦门美图之家科技有限公司 Biopsy method and device
CN109190528B (en) * 2018-08-21 2021-11-30 厦门美图之家科技有限公司 Living body detection method and device
CN111079470A (en) * 2018-10-18 2020-04-28 杭州海康威视数字技术股份有限公司 Method and device for detecting living human face
CN111079470B (en) * 2018-10-18 2023-08-22 杭州海康威视数字技术股份有限公司 Method and device for detecting human face living body
CN110598571A (en) * 2019-08-15 2019-12-20 中国平安人寿保险股份有限公司 Living body detection method, living body detection device and computer-readable storage medium
WO2021042375A1 (en) * 2019-09-06 2021-03-11 深圳市汇顶科技股份有限公司 Face spoofing detection method, chip, and electronic device
CN112997185A (en) * 2019-09-06 2021-06-18 深圳市汇顶科技股份有限公司 Face living body detection method, chip and electronic equipment
CN110991231A (en) * 2019-10-28 2020-04-10 支付宝(杭州)信息技术有限公司 Living body detection method and device, server and face recognition equipment
CN110991231B (en) * 2019-10-28 2022-06-14 支付宝(杭州)信息技术有限公司 Living body detection method and device, server and face recognition equipment
CN113033254A (en) * 2019-12-24 2021-06-25 深圳市万普拉斯科技有限公司 Body appearance recognition method, device, terminal and computer readable storage medium
CN111160251B (en) * 2019-12-30 2023-05-02 支付宝实验室(新加坡)有限公司 Living body identification method and device
CN111160251A (en) * 2019-12-30 2020-05-15 支付宝实验室(新加坡)有限公司 Living body identification method and device
CN111209870A (en) * 2020-01-09 2020-05-29 杭州涂鸦信息技术有限公司 Binocular living body camera rapid registration method, system and device thereof
CN111259739A (en) * 2020-01-09 2020-06-09 浙江工业大学 Human face pose estimation method based on 3D human face key points and geometric projection
CN111639582A (en) * 2020-05-26 2020-09-08 清华大学 Living body detection method and apparatus
CN111639582B (en) * 2020-05-26 2023-10-10 清华大学 Living body detection method and equipment
CN111723761A (en) * 2020-06-28 2020-09-29 杭州海康威视系统技术有限公司 Method and device for determining abnormal face image and storage medium
CN111723761B (en) * 2020-06-28 2023-08-11 杭州海康威视系统技术有限公司 Method, device and storage medium for determining abnormal face image
CN111768476A (en) * 2020-07-07 2020-10-13 北京中科深智科技有限公司 Expression animation redirection method and system based on grid deformation
TWI790459B (en) * 2020-07-09 2023-01-21 立普思股份有限公司 Infrared recognition method for human body
CN111898553B (en) * 2020-07-31 2022-08-09 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN111898553A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment

Also Published As

Publication number Publication date
CN105574518B (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN105574518A (en) Method and device for human face living detection
CN111460962B (en) Face recognition method and face recognition system for mask
Gu et al. Feature points extraction from faces
CN105138954B (en) A kind of image automatic screening inquiry identifying system
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
CN103440479B (en) A kind of method and system for detecting living body human face
CN108764058B (en) Double-camera face in-vivo detection method based on thermal imaging effect
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
CN105740780B (en) Method and device for detecting living human face
CN105740779B (en) Method and device for detecting living human face
CN105740781B (en) Three-dimensional human face living body detection method and device
CN106355138A (en) Face recognition method based on deep learning and key features extraction
CN103810491B (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN104504856A (en) Fatigue driving detection method based on Kinect and face recognition
CN102902986A (en) Automatic gender identification system and method
CN110008913A (en) The pedestrian's recognition methods again merged based on Attitude estimation with viewpoint mechanism
CN107480586B (en) Face characteristic point displacement-based biometric photo counterfeit attack detection method
CN103413119A (en) Single sample face recognition method based on face sparse descriptors
CN109460704A (en) A kind of fatigue detection method based on deep learning, system and computer equipment
CN105138967B (en) Biopsy method and device based on human eye area active state
CN107292907A (en) A kind of method to following target to be positioned and follow equipment
CN104680154B (en) A kind of personal identification method merged based on face characteristic and palm print characteristics
CN110796101A (en) Face recognition method and system of embedded platform
CN108537143A (en) A kind of face identification method and system based on key area aspect ratio pair
CN107103271A (en) A kind of method for detecting human face

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 802, Floor 8, Building 8, No. 1 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: Beijing Eye Intelligent Technology Co., Ltd.

Address before: Room 802, Floor 8, Building 8, No. 1 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: Beijing Techshino Technology Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220307

Address after: Beijing-Tianjin Talent Home (Xincheng Community), West District, Xiongxian Economic Development Zone, Baoding City, Hebei Province 071800

Patentee after: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Patentee after: Beijing Eye Intelligent Technology Co., Ltd.

Address before: Room 802, Floor 8, Building 8, No. 1 Shangdi 10th Street, Haidian District, Beijing 100085

Patentee before: Beijing Eye Intelligent Technology Co., Ltd.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method and device for detecting living human face

Effective date of registration: 20220614

Granted publication date: 20200221

Pledgee: China Construction Bank Corporation Xiongxian Sub-branch

Pledgor: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Registration number: Y2022990000332

PE01 Entry into force of the registration of the contract for pledge of patent right