CN109271950A - Face liveness detection method based on a mobile phone front-facing camera - Google Patents

Face liveness detection method based on a mobile phone front-facing camera

Info

Publication number
CN109271950A
CN109271950A
Authority
CN
China
Prior art keywords
key point
mobile phone
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811139901.3A
Other languages
Chinese (zh)
Other versions
CN109271950B (en)
Inventor
周曦
周牧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yuncong Artificial Intelligence Technology Co Ltd
Yuncong Technology Group Co Ltd
Original Assignee
Guangzhou Yuncong Artificial Intelligence Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yuncong Artificial Intelligence Technology Co Ltd
Priority to CN201811139901.3A priority Critical patent/CN109271950B/en
Publication of CN109271950A publication Critical patent/CN109271950A/en
Application granted granted Critical
Publication of CN109271950B publication Critical patent/CN109271950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

A face liveness detection method based on a mobile phone front-facing camera comprises the following steps. Step 1: capture a face image with the phone's camera and detect the face. Step 2: a recognition module extracts facial key points from the captured image; key points that exhibit corner-like features are chosen as calibration points, specifically six points: the four eye corners and the two mouth corners. Step 3: the calibration points are tracked by optical flow, and calibration points whose tracking fails are filtered out with a RANSAC algorithm. The invention performs unobtrusive face liveness detection without dedicated structured-light hardware, with fast detection and high accuracy.

Description

Face liveness detection method based on a mobile phone front-facing camera
Technical field
The present invention relates to the field of face recognition, and in particular to a face liveness detection method based on a mobile phone front-facing camera.
Background art
With the rise of artificial intelligence, face recognition has become widely used in everyday life as an important means of identity verification. But alongside its convenience and user-friendliness, face recognition systems carry the risk of identity spoofing: criminals can forge another person's identity using printed paper containing facial information, masks, or replayed electronic photos and videos. Liveness detection based on the face has therefore become increasingly important and is now an indispensable module of any face recognition system. Existing face liveness detection techniques can be divided into the following classes according to their detection process:
(1) Interactive liveness detection: the user is asked to complete simple actions such as blinking, opening the mouth, or turning the head, or to read a string of random digits, in order to distinguish a real face from a printed photo or a replayed video. In recent years, with advances in face synthesis, producing a video of a photo performing arbitrary actions has become relatively easy, which poses a serious threat to this kind of detection. Interactive detection also suffers from poor user experience and long verification times.
(2) Non-interactive liveness detection: the camera captures the user's face in real time, and machine learning techniques analyze cues such as image texture and background to distinguish a real face from paper, masks, or replayed video. The user experience is better than (1), but the approach is sensitive to the usage scenario and the camera hardware: under poor lighting or with a low-resolution camera, genuine users are often rejected. Moreover, as display technology advances, high-definition replayed video increasingly approaches the texture of a real face, which poses a major challenge to this class of techniques.
(3) Liveness detection based on structured light: a structured-light device projects a specific infrared pattern and reconstructs a depth image. Without requiring any user action, it captures near-infrared, depth, and visible-light images of the face region, then extracts and fuses features across the three modalities and compares them to perform liveness detection. It offers good user experience, high security, and strong robustness to the scene, but structured-light hardware is expensive and bulky and cannot be used on current mobile phones.
With the growing demand for mobile payment and other on-phone identity verification functions, face recognition and liveness detection algorithms are used on phones no less often than in other kinds of applications. The requirements for a face liveness detection system on a mobile phone are: no changes to existing hardware, a small program footprint, high runtime efficiency, ease of use, and resistance to cracking.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a face liveness detection method based on a mobile phone front-facing camera. The specific technical solution is as follows. A face liveness detection method based on a mobile phone front-facing camera, characterized in that:
the method comprises the following steps,
Step 1: capture a face image with the phone's camera and detect the face;
Step 2: a recognition module extracts facial key points from the captured image; key points that exhibit corner-like features are chosen as calibration points, specifically six points: the four eye corners and the two mouth corners;
Step 3: track the calibration points by optical flow, and filter out calibration points whose tracking failed using a RANSAC algorithm;
Step 4: a predefined motion instruction is shown on the phone; the user translates the phone according to the instruction while keeping the head still; at phone initialization, the user's pre-enrolled 3D structure parameters are stored in the phone database as reference data;
during the translation, a processing module captures n frames (n > 1) of the face from different viewpoints, extracts the key points of each frame, and computes the 3D structure of the face from the parallax using a structure-from-motion algorithm;
Step 5: the processing module compares the 3D structure computed from parallax with the reference data; if the data match, the subject is judged to be live; otherwise it is judged to be forged data.
Further, the overall RANSAC-constrained tracking of the key points in step 3 proceeds as follows:
Suppose the key points at time t0 are {P1, P2, …, Pn} and optical flow tracks them to positions {Q1, Q2, …, Qn} at time t1, with a reference time t set such that t1 − t0 < t;
the transformation from P to Q can then be described by an affine transform F·Pi + B = Qi, where F is 2×2, B is 2×1, i = 1, 2, …, n, and each point's equations carry a weight Wi, the weight of the i-th key point, initialized to 1;
there are 6 unknowns in total, so as long as the number of key points n exceeds 6, the minimum-mean-square-error solution of the system can be found by weighted least squares, from which the predicted positions {G1, G2, …, Gn} are then computed;
if optical flow fails on a few of the n key points, say the k-th, then Qk differs markedly from Gk; the weight Wk is reduced and the above process repeated until the weight of point k falls below a set value.
Further, the front camera intrinsics are calibrated and the 3D structure of the user's facial key points is determined, specifically:
a distance threshold is set; when the user first enrolls face information, the user moves the phone in front of the face through multiple positions and angles, and the structure-from-motion algorithm extracts the key-point coordinates of each frame; the distance between the same key point in any two frames is the key-point distance, and the sum of all key-point distances between two frames is their overall distance;
a key-frame set is maintained; if the overall distance between a candidate frame and every element of the key-frame set exceeds the threshold, the frame is added to the key-frame set; after the structure-from-motion algorithm recovers the key-point 3D structure from the key-frame set, the key-point 3D structure is regularized;
the key-point 3D structure is regularized as follows:
the 3D structure recovered by structure from motion is dimensionless, so its scale must be fixed: a scale parameter λ is adjusted so that the distance between the left eye's outer corner and the right eye's outer corner is 1 decimeter;
with the midpoint of the line between the two outer eye corners as the origin, the direction from the left outer eye corner to the right outer eye corner is the X axis and the direction from the midpoint of the two mouth corners toward the origin is the Y axis; the Z axis is then determined by the right-hand rule; an RT transform is constructed to align the key-point 3D positions into this right-handed coordinate system, which is then stored in the database, yielding the regularized key-point 3D structure.
The invention has the following benefits: it performs unobtrusive face liveness detection without dedicated structured-light hardware, with fast detection and high accuracy. It can be applied in a wide range of lighting conditions and in scenarios with strict real-time requirements. Compared with other liveness detection techniques, the invention improves the defense against all kinds of attack props and offers higher security; in particular, its detection rate for high-definition printed paper, masks, and high-definition screen replays is high, greatly reducing the risk to face recognition systems.
Brief description of the drawings
Fig. 1 is the workflow diagram of the invention.
Specific embodiment
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawing, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the scope of protection of the invention more clearly defined.
As shown in Fig. 1, a face liveness detection method based on a mobile phone front-facing camera, characterized in that the method comprises the following steps:
Step 1: capture a face image with the phone's camera and detect the face;
Step 2: a recognition module extracts facial key points from the captured image; key points that exhibit corner-like features are chosen as calibration points, specifically six points: the four eye corners and the two mouth corners;
Step 3: track the calibration points by optical flow, and filter out calibration points whose tracking failed using a RANSAC algorithm;
Step 4: a predefined motion instruction is shown on the phone; the user translates the phone according to the instruction while keeping the head still; at phone initialization, the user's pre-enrolled 3D structure parameters are stored in the phone database as reference data;
during the translation, a processing module captures n frames (n > 1) of the face from different viewpoints, extracts the key points of each frame, and computes the 3D structure of the face from the parallax using a structure-from-motion algorithm;
Step 5: the processing module compares the 3D structure computed from parallax with the reference data; if the data match, the subject is judged to be live; otherwise it is judged to be forged data.
Some facial key points, such as the eye corners, mouth corners, eyebrow tips, and nostrils, are located accurately. Others are not: certain points on the bridge of the nose, for example, have no distinctive gray-level features to rely on and are merely interpolation results, so such points are removed; only key points that genuinely exhibit corner-like features are kept. Points on the upper eyelid are also removed, to avoid interference caused by blinking. The typical choice is the six points formed by the four eye corners and the two mouth corners. If the user wears glasses, the eye corners may be optically distorted; the eye-corner points can then be replaced by points on the glasses, or by points on the eyebrows.
During the translation the algorithm captures many frames of the face from different viewpoints. Because the phone is relatively close to the face, only a small translation is needed to generate enough parallax to resolve the 3D structure of the key points. The key points collected across the frames are fed to a structure-from-motion algorithm to compute their 3D structure. When the user enrolls face information at software initialization, a robust, redundant algorithm obtains stable 3D structure parameters of the user's own face; at each subsequent verification, the newly recovered structure only needs to be compared with the enrolled data. This strategy effectively prevents forgery with a static photo or a mismatched video on a phone screen, because the 3D structure computed from parallax will not match.
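The parallax principle above can be illustrated with a toy one-dimensional pinhole model (our own sketch; `project` and `depth_from_disparity` are illustrative helpers, not part of the patent):

```python
# Toy 1D pinhole model (our illustration, not the patent's algorithm):
# a laterally translating camera produces a disparity d from which depth
# is recovered as Z = f * b / d. A flat printed photo would only yield
# disparities consistent with a plane, not with a real 3D face.
def project(f, X, Z, cam_x):
    # Image coordinate of a point at lateral position X and depth Z,
    # seen by a camera at lateral position cam_x with focal length f.
    return f * (X - cam_x) / Z

def depth_from_disparity(f, baseline, disparity):
    # Invert the projection pair: Z = f * b / d.
    return f * baseline / disparity
```

For example, with f = 500, a point at depth 2.0 seen from camera positions 0 and 0.05 produces a disparity of f·b/Z = 12.5, and the formula recovers the depth exactly.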
The scheme keeps the head still and moves the phone, rather than keeping the phone still and turning the head, because the latter is easy to forge: an attacker could pre-record a video of the user turning the head, perhaps obtained from the user's social feed, and simply hold the phone at a suitable position facing the screen to produce a false but plausible 3D effect. The head-still scheme instead exploits the facts that the phone can move freely and that it has a built-in six-axis gyroscope. The gyroscope records the phone's acceleration and angular acceleration; by tracking the facial key points, the algorithm infers the face's apparent motion state and from it the phone's motion, and this state must match the state reported by the gyroscope, otherwise a forged video is suspected. A genuine user only has to keep the head still to pass at a rate close to 100%, whereas a forged motion can hardly stay fully consistent with a pre-recorded video, especially when the phone moves quickly or follows a more complex translation pattern such as a circle, a figure eight, or a cross. The invention can additionally recognize a motion pattern, randomly assigned by the system or chosen by the user, to further increase system security.
By exploiting free movement together with the built-in gyroscope, a face liveness verification scheme tailored to mobile phones is obtained. Translating the phone yields two linked effects: the translation produces the parallax used to compute the 3D structure of the facial key points, and the translation itself must match the phone's gyroscope motion parameters. These counter target forgery and motion forgery respectively. It is very hard for an ordinary attacker to combine both forgeries seamlessly, so the scheme has high resistance to cracking.
Overall RANSAC-constrained tracking of the key points:
Suppose the key points at time t0 are {P1, P2, …, Pn} and optical flow tracks them to positions {Q1, Q2, …, Qn} at time t1. Provided the time difference is not too large, the transformation from P to Q can be described by an affine transform F·Pi + B = Qi, where F is 2×2, B is 2×1, i = 1, 2, …, n, and each point's equations carry a weight Wi initialized to 1. There are 6 unknowns, so as long as the number of key points n exceeds 6, the minimum-mean-square-error solution can be found by weighted least squares, from which the predicted positions {G1, G2, …, Gn} are computed. If optical flow fails on a few of the n key points, say the k-th, then Qk differs markedly from Gk; the weight Wk is reduced and the process repeated. Over several iterations the weight of the failed point k becomes smaller and smaller, its influence on the whole shrinks, and its predicted position Gk becomes more accurate; detecting the key point in a neighborhood of Gk narrows the search range and speeds up detection. Furthermore, if consecutive frames are visibly discontinuous, optical flow becomes chaotic and matching fails, which gives a preliminary check on the continuity and authenticity of the target.
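The iteration just described can be sketched in pure Python. The residual-based weight update `W = 1/(1 + err)` is our own stand-in for the patent's unspecified down-weighting rule, and the 6-unknown system is solved as two decoupled 3-unknown weighted least-squares problems:

```python
# Sketch of the reweighted affine fit (our stand-in details, see lead-in).
def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine(P, Q, W):
    # Weighted least squares; the x and y rows of (F, B) decouple into
    # two 3-unknown problems of the form q = f1*px + f2*py + b.
    def fit_axis(axis):
        A = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for (px, py), q, w in zip(P, (pt[axis] for pt in Q), W):
            row = (px, py, 1.0)
            for i in range(3):
                for j in range(3):
                    A[i][j] += w * row[i] * row[j]
                b[i] += w * row[i] * q
        return solve3(A, b)
    return fit_axis(0), fit_axis(1)   # (f11, f12, bx), (f21, f22, by)

def reweighted_track(P, Q, iters=5):
    n = len(P)
    W = [1.0] * n                      # all weights start at 1
    G = list(Q)
    for _ in range(iters):
        fx, fy = fit_affine(P, Q, W)
        G = [(fx[0] * x + fx[1] * y + fx[2],
              fy[0] * x + fy[1] * y + fy[2]) for x, y in P]
        for k in range(n):
            err = ((G[k][0] - Q[k][0]) ** 2 + (G[k][1] - Q[k][1]) ** 2) ** 0.5
            W[k] = 1.0 / (1.0 + err)   # large residual -> small weight
    return W, G
```

With seven clean correspondences and one deliberately corrupted one, the corrupted point ends up with the smallest weight while the inliers' predictions converge toward their tracked positions.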
Regarding the structure-from-motion algorithm:
Calibration of the front camera intrinsics. This step can be configured when the phone leaves the factory, or done by the structure-from-motion algorithm at software initialization. It is not the core technique of this patent, and most current phone front cameras are distortion-free, so the intrinsics are fairly simple.
Determining the 3D structure of the user's facial key points:
This step is also the core of the structure-from-motion algorithm. It is generally done when the user enrolls the face at software initialization, to improve speed during on-site liveness verification. At first enrollment, the phone is moved in front of the face through multiple positions and angles. The algorithm automatically extracts the 2D image coordinates of the key points in each frame and then selects the high-quality frames whose key points are far apart overall: a newly selected frame must have an average point-to-point matching distance, against the key points of every previously selected frame, that exceeds a threshold. After a short while, enough high-quality key-point frames have accumulated.
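The key-frame accumulation rule can be sketched as follows (assuming `frames` is a list of per-frame key-point coordinate lists; the helper names are ours):

```python
# Key-frame selection sketch: a frame is kept only when its total key-point
# distance to every frame already in the set exceeds a threshold, so the
# retained frames span genuinely different viewpoints.
def total_distance(fa, fb):
    # Sum of Euclidean distances between corresponding key points.
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(fa, fb))

def select_keyframes(frames, threshold):
    keyframes = []
    for f in frames:
        if all(total_distance(f, k) > threshold for k in keyframes):
            keyframes.append(f)
    return keyframes
```

With six key points per frame and frames shifted laterally by 0, 0.1, 0.5, 1.0, and 1.05 units, a threshold of 2.0 keeps only the three well-separated frames (shifts 0, 0.5, and 1.0).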
Regularization of the key-point 3D structure. The 3D structure recovered by structure from motion is dimensionless, so its scale must be fixed: a scale parameter λ is adjusted so that the distance between the two outer eye corners is 1 decimeter. Then, with the midpoint of the line between the two outer eye corners as the origin, the direction from the left outer eye corner to the right outer eye corner is the X axis, the direction from the midpoint of the two mouth corners toward the origin is the Y axis, and the Z axis follows from the standard right-hand rule, pointing toward the back of the head. A suitable RT transform is constructed to align the key-point 3D positions into this coordinate system, and the result is stored. This coordinate system is called the regularized face coordinate system.
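A minimal sketch of the regularization, under the assumption that the recovered 3D key points are plain (x, y, z) tuples and taking the eye-corner distance as one unit (1 decimeter in the text):

```python
# Regularized face frame sketch: scale so the outer eye corners are one
# unit apart, origin at their midpoint, X toward the right eye corner,
# Y from the mouth-corner midpoint toward the origin, Z by the right-hand
# rule. Helper names are ours.
def sub(a, b):  return tuple(x - y for x, y in zip(a, b))
def dot(a, b):  return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def norm(v):
    n = dot(v, v) ** 0.5
    return tuple(x / n for x in v)

def regularize(points, l_eye, r_eye, l_mouth, r_mouth):
    # Fix the scale: outer eye corners one unit (1 dm) apart.
    s = 1.0 / dot(sub(r_eye, l_eye), sub(r_eye, l_eye)) ** 0.5
    origin = tuple((a + b) / 2 for a, b in zip(l_eye, r_eye))
    mouth_mid = tuple((a + b) / 2 for a, b in zip(l_mouth, r_mouth))
    x_axis = norm(sub(r_eye, l_eye))        # left -> right eye corner
    y_raw = sub(origin, mouth_mid)          # mouth midpoint -> origin
    z_axis = norm(cross(x_axis, y_raw))     # right-hand rule
    y_axis = cross(z_axis, x_axis)          # re-orthogonalized Y
    R = (x_axis, y_axis, z_axis)
    # The "RT transform": rotate and scale about the origin.
    return [tuple(s * dot(axis, sub(p, origin)) for axis in R) for p in points]
```

In the regularized frame the two outer eye corners land at (±0.5, 0, 0) by construction, which makes enrolled and live structures directly comparable.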
On-site key-point 3D structure alignment. At the liveness detection site, the 3D structure of the face is determined with the same method as above, except that fewer key-point frames and fewer algorithm iterations are used, and the 3D structure may be randomly recovered several times. The recovered structures undergo the same regularization transform and are then compared with the enrolled regularized 3D structure in the database; a suitable threshold decides acceptance or rejection.
The gyroscope motion-parameter matching method is as follows:
The camera intrinsics and the facial key-point 3D structure were already determined at face enrollment. At on-site liveness detection, the standard key-point 3D structure, the 2D key-point positions in the image, and the camera intrinsics are taken as input to OpenCV's solvePnP() function, which recovers the pose of the camera in each frame relative to the regularized face coordinate system, expressed as R (rotation matrix) and T (translation). A relatively simple motion-parameter alignment method is described below:
Assume the head is kept as upright as possible during on-site detection and the phone is held as upright as possible with constant orientation while translating. The phone's angular velocity component can then be ignored, R varies little, and only the variation of T is considered. From the change of T over every three consecutive frames, the phone's 3D acceleration A at each frame is computed; the sequence formed by A is then Kalman-filtered appropriately to reduce observation noise.
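A sketch of the acceleration estimate, with a centered moving average standing in for the Kalman filter of the text (the frame interval is taken as 1; helper names are ours):

```python
# Per-frame acceleration from the camera translation sequence T, as second
# differences over three consecutive frames; a moving average stands in
# for the Kalman filter mentioned in the text.
def accelerations(T):
    # A[i] ~ T[i+1] - 2*T[i] + T[i-1]  (frame interval taken as 1)
    return [tuple(n - 2 * c + p for n, c, p in zip(T[i + 1], T[i], T[i - 1]))
            for i in range(1, len(T) - 1)]

def smooth(seq, k=3):
    # Centered moving average over k samples (k odd), per component.
    h = k // 2
    out = []
    for i in range(len(seq)):
        window = seq[max(0, i - h):i + h + 1]
        out.append(tuple(sum(c) / len(window) for c in zip(*window)))
    return out
```

For a translation that accelerates uniformly along X (T[i] = 0.5·i²), the second differences recover the constant unit acceleration exactly, and smoothing leaves it unchanged.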
The phone's gyroscope simultaneously reports the phone's 3D acceleration B. Since the phone's orientation stays roughly constant during the translation, the gravity direction can be estimated from the low-pass-filtered component over the whole time period. An appropriate coordinate transform aligns the gravity direction with the negative Y axis of the regularized face coordinate system; this transform is applied to B and the gravitational acceleration component is subtracted, giving B′. B′ and A are then roughly aligned along the Y axis, which lowers the matching difficulty, so their Y components are compared first: the mean and standard deviation of the Y components of B′ and A are computed, each sequence has its mean subtracted and is divided by its standard deviation (normalized alignment), and the normalized one-dimensional sequences are matched against a suitable threshold to accept or reject.
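The normalize-and-compare step for the Y components might look like this (the mean-absolute-difference score and the threshold value are our own illustrative choices):

```python
# Normalize-and-compare sketch for the two Y-axis acceleration sequences
# (vision-derived A vs. gyroscope-derived B'): subtract the mean, divide
# by the standard deviation, then score by mean absolute difference.
def normalize(seq):
    m = sum(seq) / len(seq)
    var = sum((x - m) ** 2 for x in seq) / len(seq)
    sd = var ** 0.5 or 1.0          # guard against a constant sequence
    return [(x - m) / sd for x in seq]

def match_score(a, b):
    na, nb = normalize(a), normalize(b)
    return sum(abs(x - y) for x, y in zip(na, nb)) / len(na)

def accept(a, b, threshold=0.5):
    return match_score(a, b) < threshold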
If the previous step passes, the acceleration sequences along the X axis are compared. The Y components of B′ and A are removed, the covariance of the remaining two dimensions is computed, and principal component analysis of this covariance determines the principal and secondary directions. The two PCA components are mutually orthogonal; since translation of the phone concentrates in the X and Y directions with very little motion in Z, the principal component lies essentially along the X axis and the secondary component along Z. The principal-component sequences are taken, normalized and aligned in the same way, flipped if necessary, and compared against a suitable threshold to accept or reject.
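The principal direction of the residual two-dimensional (X, Z) accelerations can be computed in closed form from the 2×2 covariance (our sketch; the eigenvector formula uses the dominant eigenvalue of a symmetric 2×2 matrix):

```python
# Principal-component sketch for the residual (X, Z) acceleration samples:
# build the 2x2 covariance and take its dominant eigenvector.
def principal_direction(xs, zs):
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    czz = sum((z - mz) ** 2 for z in zs) / n
    cxz = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / n
    # Dominant eigenvalue of [[cxx, cxz], [cxz, czz]].
    tr, det = cxx + czz, cxx * czz - cxz * cxz
    lam = tr / 2 + ((tr / 2) ** 2 - det) ** 0.5
    # Corresponding eigenvector (handle the axis-aligned case separately).
    if abs(cxz) > 1e-12:
        v = (lam - czz, cxz)
    else:
        v = (1.0, 0.0) if cxx >= czz else (0.0, 1.0)
    length = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return (v[0] / length, v[1] / length)
```

For samples lying along the line z = x/2, the recovered direction is proportional to (2, 1), i.e. the dominant motion axis.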
During sequence matching, the weight of each frame is made proportional to the larger of the two absolute acceleration values, increasing the weight of high-acceleration instants and improving matching accuracy.
Meanwhile the limitation constant to R can be decontroled, firstly, as T, R sequence is done Kalman filtering appropriate or Smothing filtering.Secondly, the acceleration information that gyroscope detects, cannot be used directly to estimated weight direction, it is inverse to be first a R Transformation, B=RT* B calculates gravity direction, subtracts acceleration of gravity then again with the step of front.

Claims (3)

1. A face liveness detection method based on a mobile phone front-facing camera, characterized in that the method comprises the following steps:
Step 1: capturing a face image with the phone's camera and detecting the face;
Step 2: extracting, by a recognition module, facial key points from the captured image; choosing key points that exhibit corner-like features as calibration points, specifically six points: the four eye corners and the two mouth corners;
Step 3: tracking the calibration points by optical flow, and filtering out calibration points whose tracking failed using a RANSAC algorithm;
Step 4: displaying a predefined motion instruction on the phone; the user translates the phone according to the instruction while keeping the head still; at phone initialization, the user's pre-enrolled 3D structure parameters are stored in the phone database as reference data;
during the translation, a processing module captures n frames (n > 1) of the face from different viewpoints, extracts the key points of each frame, and computes the 3D structure of the face from the parallax using a structure-from-motion algorithm;
Step 5: comparing, by the processing module, the 3D structure computed from parallax with the reference data; if the data match, judging the subject to be live; otherwise, judging it to be forged data.
2. The face liveness detection method based on a mobile phone front-facing camera according to claim 1, characterized in that the overall RANSAC-constrained tracking of the key points in step 3 proceeds as follows:
suppose the key points at time t0 are {P1, P2, …, Pn} and optical flow tracks them to positions {Q1, Q2, …, Qn} at time t1, with a reference time t set such that t1 − t0 < t;
the transformation from P to Q can be described by an affine transform F·Pi + B = Qi, where F is 2×2, B is 2×1, i = 1, 2, …, n, and each point's equations carry a weight Wi, initialized to 1;
there are 6 unknowns, so as long as the number of key points n exceeds 6, the minimum-mean-square-error solution of the system can be found by weighted least squares, from which the predicted positions {G1, G2, …, Gn} are computed;
if optical flow fails on a few of the n key points, say the k-th, then Qk differs markedly from Gk; the weight Wk is reduced and the above process repeated until the weight of point k falls below a set value.
3. The face liveness detection method based on a mobile phone front-facing camera according to claim 1, characterized in that:
the front camera intrinsics are calibrated and the 3D structure of the user's facial key points is determined, specifically:
a distance threshold is set; when the user first enrolls face information, the user moves the phone in front of the face through multiple positions and angles, and a structure-from-motion algorithm extracts the key-point coordinates of each frame; the distance between the same key point in any two frames is the key-point distance, and the sum of all key-point distances between two frames is their overall distance;
a key-frame set is maintained; if the overall distance between a candidate frame and every element of the key-frame set exceeds the threshold, the frame is added to the key-frame set; after the structure-from-motion algorithm recovers the key-point 3D structure from the key-frame set, the key-point 3D structure is regularized;
the key-point 3D structure is regularized as follows:
the 3D structure recovered by structure from motion is dimensionless, so its scale must be fixed: a scale parameter λ is adjusted so that the distance between the left eye's outer corner and the right eye's outer corner is 1 decimeter;
with the midpoint of the line between the two outer eye corners as the origin, the direction from the left outer eye corner to the right outer eye corner is the X axis and the direction from the midpoint of the two mouth corners toward the origin is the Y axis; the Z axis is determined by the right-hand rule; an RT transform is constructed to align the key-point 3D positions into this right-handed coordinate system, which is then stored in the database, yielding the regularized key-point 3D structure.
CN201811139901.3A 2018-09-28 2018-09-28 Face living body detection method based on mobile phone forward-looking camera Active CN109271950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811139901.3A CN109271950B (en) 2018-09-28 2018-09-28 Face living body detection method based on mobile phone forward-looking camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811139901.3A CN109271950B (en) 2018-09-28 2018-09-28 Face living body detection method based on mobile phone forward-looking camera

Publications (2)

Publication Number Publication Date
CN109271950A true CN109271950A (en) 2019-01-25
CN109271950B CN109271950B (en) 2021-02-05

Family

ID=65198805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811139901.3A Active CN109271950B (en) 2018-09-28 2018-09-28 Face living body detection method based on mobile phone forward-looking camera

Country Status (1)

Country Link
CN (1) CN109271950B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934187A (en) * 2019-03-19 2019-06-25 西安电子科技大学 Based on face Activity determination-eye sight line random challenge response method
CN109993863A (en) * 2019-02-20 2019-07-09 南通大学 A kind of access control system and its control method based on recognition of face
CN110188728A (en) * 2019-06-06 2019-08-30 四川长虹电器股份有限公司 A kind of method and system of head pose estimation
CN110276313A (en) * 2019-06-25 2019-09-24 网易(杭州)网络有限公司 Identity identifying method, identification authentication system, medium and calculating equipment
CN110688946A (en) * 2019-09-26 2020-01-14 上海依图信息技术有限公司 Public cloud silence in-vivo detection device and method based on picture identification
CN111563490A (en) * 2020-07-14 2020-08-21 北京搜狐新媒体信息技术有限公司 Face key point tracking method and device and electronic equipment
CN112016437A (en) * 2020-08-26 2020-12-01 中国科学院重庆绿色智能技术研究院 Living body detection method based on face video key frame
CN113536844A (en) * 2020-04-16 2021-10-22 中移(成都)信息通信科技有限公司 Face comparison method, device, equipment and medium
CN114724257A (en) * 2022-04-20 2022-07-08 北京快联科技有限公司 Living body detection method and device
CN114743253A (en) * 2022-06-13 2022-07-12 四川迪晟新达类脑智能技术有限公司 Living body detection method and system based on distance characteristics of key points of adjacent faces
CN115937958A (en) * 2022-12-01 2023-04-07 北京惠朗时代科技有限公司 Blink detection method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1920886A (en) * 2006-09-14 2007-02-28 浙江大学 Video flow based three-dimensional dynamic human face expression model construction method
CN102254154A (en) * 2011-07-05 2011-11-23 南京大学 Method for authenticating human-face identity based on three-dimensional model reconstruction
CN105427385A (en) * 2015-12-07 2016-03-23 华中科技大学 High-fidelity face three-dimensional reconstruction method based on multilevel deformation model
CN105718863A (en) * 2016-01-15 2016-06-29 北京海鑫科金高科技股份有限公司 Living-person face detection method, device and system
US20180046871A1 (en) * 2016-08-09 2018-02-15 Mircea Ionita Methods and systems for enhancing user liveness detection


Similar Documents

Publication Publication Date Title
CN109271950A (en) A kind of human face in-vivo detection method based on mobile phone forward sight camera
CN107609383B (en) 3D face identity authentication method and device
CN107748869B (en) 3D face identity authentication method and device
CN107633165B (en) 3D face identity authentication method and device
CN105205455B (en) The in-vivo detection method and system of recognition of face on a kind of mobile platform
CN105023010B (en) A kind of human face in-vivo detection method and system
WO2018040307A1 (en) Vivo detection method and device based on infrared visible binocular image
CN104966070B (en) Biopsy method and device based on recognition of face
Li et al. Seeing your face is not enough: An inertial sensor-based liveness detection for face authentication
Medioni et al. Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models
CN105426827A (en) Living body verification method, device and system
CN105138967B (en) Biopsy method and device based on human eye area active state
CN109271923A (en) Human face posture detection method, system, electric terminal and storage medium
CN109255319A (en) For the recognition of face payment information method for anti-counterfeit of still photo
US20210256244A1 (en) Method for authentication or identification of an individual
Farrukh et al. FaceRevelio: a face liveness detection system for smartphones with a single front camera
CN112633217A (en) Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model
CN115035546A (en) Three-dimensional human body posture detection method and device and electronic equipment
Wu et al. The value of multiple viewpoints in gesture-based user authentication
KR20110024178A (en) Device and method for face recognition using 3 dimensional shape information
CN108009532A (en) Personal identification method and terminal based on 3D imagings
JP7264308B2 (en) Systems and methods for adaptively constructing a three-dimensional face model based on two or more inputs of two-dimensional face images
CN108446595A (en) A kind of space-location method, device, system and storage medium
Tong et al. Cross-view gait identification with embedded learning
Kornilova et al. Smartportraits: Depth powered handheld smartphone dataset of human portraits for state estimation, reconstruction and synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant