CN105426827A - Living body verification method, device and system - Google Patents

Living body verification method, device and system

Info

Publication number
CN105426827A
Authority
CN
China
Prior art keywords
human face
image
sight line
image information
line vector
Prior art date
Legal status
Granted
Application number
CN201510756011.7A
Other languages
Chinese (zh)
Other versions
CN105426827B (en)
Inventor
郭亨凯
彭义刚
李�诚
吴立威
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201510756011.7A priority Critical patent/CN105426827B/en
Publication of CN105426827A publication Critical patent/CN105426827A/en
Application granted granted Critical
Publication of CN105426827B publication Critical patent/CN105426827B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a living body verification method, device and system. The method comprises the following steps: generating a vision center point that moves along a preset trajectory, and collecting multiple frames of face images of the test subject while the vision center point moves, the subject's line of sight following the vision center point throughout the collection process; extracting image information from each collected frame of face image; estimating a line-of-sight vector from the extracted image information; obtaining an estimated projected trajectory from the estimated line-of-sight vectors; and comparing the estimated projected trajectory with the preset trajectory of the vision center point, judging the test subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the test subject not to be a living body when the similarity is below the threshold.

Description

Living body verification method, device and system
Technical field
The present invention relates to the field of computer vision, and in particular to a living body verification method, device and system.
Background art
Face recognition technology has advanced significantly in recent years. In many application scenarios, however, such as face-recognition-based mobile payment or remote video account opening, verifying a face image also requires judging whether it comes from a living person or from a photograph or a pre-recorded video.
Commonly used face liveness verification methods at present include the following:
1) Collecting depth information of the face image and matching a reconstructed model against a three-dimensional template. This approach is constrained by environmental conditions: complete depth information may be hard to obtain, and the accuracy of three-dimensional modeling still leaves much room for improvement.
2) Comparing certain feature points or statistical features of the face image against a real-face template based on facial texture details. However, when the image under test has low resolution or is incomplete, so that texture details cannot be obtained accurately, this method is inapplicable.
3) Chinese patent application CN201210331141.2 discloses a liveness detection method that collects multiple frames of face images, locates face key points/blocks in each frame, and decides liveness by checking whether the average difference value exceeds a preset threshold. Chinese patent application CN201510243778.X likewise collects multiple frames of face images and decides liveness by checking whether the change pattern of key-point attribute values matches that of a real face. However, such methods detect global face motion, which is a single, hard-to-vary biometric cue; when combined with face-based identity authentication, their reliability drops if an attacker has maliciously collected a large number of moving face images of the legitimate user for the detection.
4) The method disclosed in Chinese patent application CN201310363154.2 decides liveness by detecting facial motions such as blinking in the image. But an action as simple as blinking is relatively easy to forge, which reduces the anti-spoofing reliability of such methods.
Summary of the invention
The technical problem to be solved by the present invention is that existing liveness verification schemes require complex equipment and offer limited accuracy and reliability.
To this end, an embodiment of the present invention provides a living body verification method, comprising: generating a vision center point that moves along a preset trajectory, and collecting multiple frames of face images of the test subject while the vision center point moves, the subject's line of sight following the vision center point throughout the collection process; extracting image information from each collected frame of face image; estimating a line-of-sight vector from the extracted image information; obtaining an estimated projected trajectory from the estimated line-of-sight vectors; and comparing the estimated projected trajectory with the preset trajectory of the vision center point, judging the test subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the test subject not to be a living body when the similarity is below the threshold.
Preferably, estimating a line-of-sight vector from the extracted image information consists in inputting the image information into a neural network model to obtain the estimated line-of-sight vector, the neural network model being obtained by the following steps: collecting a massive number of face images of different people under different gaze directions; extracting image information and line-of-sight vectors from the collected face images; and obtaining the neural network model from the extracted image information and line-of-sight vectors.
Preferably, the image information comprises eye images, and the step of extracting image information from the collected face images comprises: performing face detection on each collected face image to obtain the face region; annotating facial feature points in the obtained face region; and, from the annotated feature points, locating the eyes, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
Preferably, the step of obtaining the neural network model from the extracted image information and line-of-sight vectors comprises: taking the obtained eye images as input, building a multilayer deep convolutional neural network in which convolutional, downsampling and nonlinear layers are connected in sequence, the last layer being a fully connected layer of dimension f, with the obtained line-of-sight vector as the output layer; and training the constructed deep convolutional neural network with the obtained eye images and line-of-sight vectors, the training being based on the back-propagation algorithm and updating the model parameters on the training data by stochastic gradient descent.
Preferably, the image information comprises face pose features and eye images, and the step of extracting image information from the collected face images comprises: performing face detection on each collected face image to obtain the face region; annotating facial feature points in the obtained face region; normalizing the annotated feature points and taking the normalized feature points as the face pose features; and, from the annotated feature points, locating the eyes, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
Preferably, the step of obtaining the neural network model from the extracted image information and line-of-sight vectors comprises: taking the obtained eye images as input, building a multilayer deep convolutional neural network in which convolutional, downsampling and nonlinear layers are connected in sequence, the last layer being a fully connected layer of dimension f; concatenating the obtained face pose features with this f-dimensional fully connected layer to form an expanded fully connected layer, with the obtained line-of-sight vector as the output layer; and training the constructed deep convolutional neural network with the obtained face pose features, eye images and line-of-sight vectors, the training being based on the back-propagation algorithm and updating the model parameters on the training data by stochastic gradient descent.
Preferably, the step of extracting line-of-sight vectors from the collected face images comprises: obtaining a three-dimensional head model; aligning the annotated facial feature points to the three-dimensional head model; and computing the line-of-sight vector from the alignment result of the facial feature points to the head model and the position of the vision center point.
Preferably, the image information comprises eye images, and the step of extracting image information from each collected frame of face image comprises: performing face detection on each frame to obtain the face region; annotating facial feature points in the obtained face region; and, from the annotated feature points, locating the eyes, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
Preferably, the image information comprises face pose features and eye images, and extracting image information from each collected frame of face image comprises: performing face detection on each frame to obtain the face region; annotating facial feature points in the obtained face region; normalizing the annotated feature points and taking the normalized feature points as the face pose features; and, from the annotated feature points, locating the eyes, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
An embodiment of the present invention further provides a living body verification device, comprising: a trajectory generation and image collection unit, for generating a vision center point that moves along a preset trajectory and collecting multiple frames of face images of the test subject while the vision center point moves, the subject's line of sight following the vision center point throughout the collection process; an image information extraction unit, for extracting image information from each collected frame of face image; a line-of-sight vector estimation unit, for estimating a line-of-sight vector from the extracted image information; a projected trajectory generation unit, for obtaining an estimated projected trajectory from the estimated line-of-sight vectors; and a comparison unit, for comparing the estimated projected trajectory with the preset trajectory of the vision center point, judging the test subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the test subject not to be a living body when the similarity is below the threshold.
Preferably, the line-of-sight vector estimation unit inputs the image information into a neural network model to obtain the estimated line-of-sight vector, the neural network model being obtained by the following subunits: a collection subunit, for collecting a massive number of face images of different people under different gaze directions; an extraction subunit, for extracting image information and line-of-sight vectors from the collected face images; and a neural network model generation subunit, for obtaining the neural network model from the extracted image information and line-of-sight vectors.
Preferably, the image information comprises eye images, and the image information extraction unit comprises: a face detection subunit, for performing face detection on each frame of face image to obtain the face region; a feature point annotation subunit, for annotating facial feature points in the obtained face region; and an eye image cropping subunit, for locating the eyes from the annotated feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
Preferably, the image information comprises face pose features and eye images, and the image information extraction unit comprises: a face detection subunit, for performing face detection on each frame of face image to obtain the face region; a feature point annotation subunit, for annotating facial feature points in the obtained face region; a feature point normalization subunit, for normalizing the annotated feature points and taking the normalized feature points as the face pose features; and an eye image cropping subunit, for locating the eyes from the annotated feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
An embodiment of the present invention further provides a living body verification system, comprising: a display device, for displaying the vision center point moving along the preset trajectory; an image collection device, for collecting multiple frames of face images of the test subject while the vision center point moves, the subject's line of sight following the vision center point throughout the collection process; and a processor, for generating the vision center point moving along the preset trajectory, extracting image information from each collected frame of face image, estimating a line-of-sight vector from the extracted image information, obtaining an estimated projected trajectory from the estimated line-of-sight vectors, and comparing the estimated projected trajectory with the preset trajectory of the vision center point, judging the test subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the test subject not to be a living body when the similarity is below the threshold.
According to the living body verification method, device and system of the embodiments of the present invention, face images of the test subject are collected in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory against the actual motion trajectory of the vision center point. The judgment can be completed with nothing more than a device having a camera and a screen, without complex peripheral equipment. Because gaze tracking requires the subject's line of sight to follow a randomly generated preset trajectory, the verification is hard to forge, which greatly improves its accuracy and reliability.
According to the living body verification method, device and system of the embodiments of the present invention, the obtained image information is taken as input to build a multilayer deep convolutional neural network in which convolutional, downsampling and nonlinear layers are connected in sequence, the last layer being a fully connected layer of dimension f and the obtained line-of-sight vector serving as the output layer; the network is trained on the obtained image information and line-of-sight vectors to yield the neural network model. The line-of-sight vector can thus be estimated quickly and accurately from the extracted image information, improving the accuracy of the liveness judgment.
According to the living body verification method, device and system of the embodiments of the present invention, using eye images as the image information simplifies computation and allows fast detection; in a further preferred embodiment, using both face pose features and eye images additionally accounts for head movement of the test subject, so that liveness detection remains accurate even when the subject's head moves.
Brief description of the drawings
The features and advantages of the present invention can be understood more clearly with reference to the accompanying drawings, which are schematic and should not be construed as limiting the invention in any way. In the drawings:
Fig. 1 shows a flowchart of the living body verification method according to an embodiment of the present invention;
Fig. 2 shows a flowchart of the step of extracting image information from each collected frame of face image according to an embodiment of the present invention;
Fig. 3 shows a schematic diagram of 21 facial feature points;
Fig. 4 shows a flowchart of the method of obtaining the neural network model according to an embodiment of the present invention;
Fig. 5 shows a schematic diagram of the neural network model according to an embodiment of the present invention;
Fig. 6 shows a flowchart of the living body verification method according to another embodiment of the present invention;
Fig. 7 shows a flowchart of the step of extracting image information from each collected frame of face image according to another embodiment of the present invention;
Fig. 8 shows a schematic diagram of the living body verification device according to an embodiment of the present invention;
Fig. 9 shows a schematic diagram of the living body verification system according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
As shown in Fig. 1, the living body verification method provided by this embodiment requires only a device with a camera and a screen, and comprises the following steps:
S11. Generate a vision center point that moves along a preset trajectory, and collect multiple frames of face images while the vision center point moves. Throughout the collection process, the test subject's line of sight must follow and fixate on the vision center point, which moves on the screen according to the preset trajectory and motion mode. The trajectory may be, for example, a left-to-right straight line, a top-to-bottom straight line or a circular path on the screen, but is not limited to these examples. To make forgery harder, the trajectory may also be a random trajectory generated anew for each verification. The vision center point may move at a constant or variable speed. Face images may be collected in real time during the test, either continuously or at intervals, and the intervals may be equal or unequal.
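The patent leaves the trajectory generator open; purely as an illustration, a random per-verification trajectory in normalized screen coordinates could be produced as follows (the function name and parameters are hypothetical, not part of the disclosure):

```python
import numpy as np

def random_trajectory(n_points=60, n_knots=5, seed=None):
    """Generate a random smooth 2D trajectory in normalized screen
    coordinates [0, 1] x [0, 1] by interpolating between random waypoints."""
    rng = np.random.default_rng(seed)
    knots = rng.uniform(0.1, 0.9, size=(n_knots, 2))   # random waypoints
    t_knots = np.linspace(0.0, 1.0, n_knots)
    t = np.linspace(0.0, 1.0, n_points)
    x = np.interp(t, t_knots, knots[:, 0])             # piecewise-linear x(t)
    y = np.interp(t, t_knots, knots[:, 1])             # piecewise-linear y(t)
    return np.stack([x, y], axis=1)                    # shape (n_points, 2)

trajectory = random_trajectory()   # a fresh trajectory for each verification
```

Sampling the point's screen position at the camera's frame timestamps gives the preset trajectory against which the estimated projection is later compared.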
S12. Extract image information from each collected frame of face image. In this embodiment, eye images are used as the image information, which simplifies computation and allows fast detection. Of course, more information can also be extracted from the collected face images to improve verification accuracy.
S13. Estimate a line-of-sight vector from the extracted eye images. Typically this step can be implemented with a neural network model: the extracted eye images are input into a trained neural network model associating eye images with line-of-sight vectors, which yields the estimated line-of-sight vector quickly and accurately.
S14. Obtain an estimated projected trajectory from the estimated line-of-sight vectors. Typically, the motion trajectory of the line of sight is first obtained from the vector estimated for each frame; projecting this trajectory onto the plane of the screen then yields the projected trajectory of the estimated line of sight on the screen.
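The patent does not spell out the projection step; one common realization, assuming the screen lies in the plane z = 0 of the camera coordinate system and the 3D eye position is known from calibration, is a simple ray-plane intersection (all names are illustrative):

```python
import numpy as np

def project_gaze_to_screen(eye_pos, gaze_dir):
    """Intersect the gaze ray eye_pos + t * gaze_dir with the screen
    plane z = 0 (camera coordinates) and return the 2D hit point."""
    eye_pos = np.asarray(eye_pos, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    if abs(gaze_dir[2]) < 1e-9:
        raise ValueError("gaze ray is parallel to the screen plane")
    t = -eye_pos[2] / gaze_dir[2]         # ray parameter where z = 0
    return (eye_pos + t * gaze_dir)[:2]

eye = [0.0, 0.0, 0.5]                     # eye 50 cm in front of the screen
gaze = [0.1, -0.05, -1.0]                 # one estimated line-of-sight vector
point = project_gaze_to_screen(eye, gaze) # one trajectory point per frame
```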
S15. Compare the estimated projected trajectory with the preset trajectory. When the similarity between the two is greater than or equal to a preset threshold, the test subject can be judged to be a living body; when the similarity is below the threshold, the test subject can be judged not to be a living body.
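The similarity measure is likewise left open by the patent; one simple illustrative choice truncates the two trajectories to equal length and turns the mean point-wise distance into a score in (0, 1]:

```python
import numpy as np

def trajectory_similarity(est, ref):
    """Similarity in (0, 1] between two trajectories of shape (n, 2),
    computed as 1 / (1 + mean point-wise Euclidean distance)."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    n = min(len(est), len(ref))
    mean_dist = np.linalg.norm(est[:n] - ref[:n], axis=1).mean()
    return 1.0 / (1.0 + mean_dist)

THRESHOLD = 0.8                           # illustrative preset threshold
ref = np.stack([np.linspace(0, 1, 50), np.full(50, 0.5)], axis=1)
est = ref + np.random.normal(scale=0.02, size=ref.shape)
is_living_body = trajectory_similarity(est, ref) >= THRESHOLD
```

In practice a measure robust to timing offsets, such as dynamic time warping or normalized correlation, may be preferable; the patent only requires some similarity score compared against a preset threshold.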
The living body verification method of this embodiment realizes liveness verification through gaze tracking. Existing gaze tracking schemes, however, usually require complex and expensive peripherals such as eye trackers together with cooperating light sources and platforms, and cannot be applied to liveness detection, let alone liveness detection on mobile terminals; existing gaze tracking schemes that do without such peripherals, on the other hand, are usually not accurate enough to be applied to liveness detection.
In the living body verification method disclosed in this embodiment of the present invention, face images of the test subject are collected in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory against the actual motion trajectory of the vision center point. The judgment can be completed with nothing more than a device having a camera and a screen, without complex peripheral equipment. Because the subject's line of sight must follow a randomly generated preset trajectory, the verification is hard to forge, which greatly improves its accuracy and reliability. Moreover, in the verification process disclosed here no voice prompts are needed between the verification device and the user, and the user does not need to speak.
Preferably, as shown in Fig. 2, the above step S12 may comprise:
S121. Perform face detection on each frame of face image to obtain the face region.
S122. Annotate facial feature points in the obtained face region. For example, a facial feature point localization method may be used; see Supervised Descent Method and its Applications to Face Alignment, Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference, pp. 532-539. Fig. 3 shows a schematic diagram of 21 facial feature points, namely 6 feature points for each eye, 4 for the nose and 5 for the mouth; those skilled in the art will appreciate that using more or fewer facial feature points is also feasible.
S123. From the annotated facial feature points, locate the eyes and crop out the left and right eye images.
S124. Normalize the cropped eye images to a uniform resolution of m × n pixels.
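As an illustration of S123-S124, and assuming the per-eye feature points are already available from the face alignment step, the cropping and normalization might look like this in Python with OpenCV (the margin and the 36 × 60 target size are arbitrary stand-ins for m × n):

```python
import cv2
import numpy as np

def crop_eye(image, eye_landmarks, m=36, n=60, margin=0.4):
    """Crop one eye region given its feature points (k, 2) and
    normalize it to m x n pixels (height x width)."""
    pts = np.asarray(eye_landmarks, dtype=float)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    dx, dy = (x1 - x0) * margin, (y1 - y0) * margin   # widen the tight box
    x0, y0 = int(max(x0 - dx, 0)), int(max(y0 - dy, 0))
    x1 = int(min(x1 + dx, image.shape[1]))
    y1 = int(min(y1 + dy, image.shape[0]))
    return cv2.resize(image[y0:y1, x0:x1], (n, m))    # unified m x n size

frame = np.zeros((480, 640, 3), dtype=np.uint8)       # placeholder frame
left_eye_pts = np.array([[200, 200], [230, 195], [260, 200],
                         [255, 215], [230, 220], [205, 215]])
left_eye = crop_eye(frame, left_eye_pts)              # 36 x 60 eye patch
```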
Through the above steps, eye images can be extracted from face images easily and quickly. The method of obtaining the neural network model is described in detail below; as shown in Fig. 4, it may comprise the following steps:
S21. Collect a massive number of face images of different people under different gaze directions, which may specifically comprise the following steps:
S21a) Use a mature camera calibration method to obtain the intrinsic parameters of the camera and, at the same time, use a mirror-based calibration method to estimate the three-dimensional position of the screen.
S21b) Using a device with a camera and a screen, randomly generate vision center points on the screen one by one.
S21c) Ask the person being collected to fixate on the vision center point on the screen, and capture a face image once the person confirms that their gaze has settled on the point.
S21d) Repeat steps S21b and S21c to collect a massive number of face images of different people under different gaze directions. The collection devices include cameras and screens of different models, such as notebook computers, tablet computers and smartphones; the collected population is large, and the collection environments are varied.
S22. Extract eye images and line-of-sight vectors from the collected face images, which may specifically comprise the following steps:
S22a) Extract eye images from the collected face images using the same method as in step S12, which is not repeated here.
S22b) Obtain a three-dimensional head model. This may be a head model of the specific person obtained by prior modeling, or a predefined average head model used for all people.
S22c) Using an existing algorithm such as EPnP, align the annotated facial feature points to the three-dimensional head model, and optimize iteratively to obtain the optimal alignment of the feature points to the head model.
S22d) From the alignment of the facial feature points to the head model and the position of the vision center point, compute the line-of-sight vector, i.e., the vector from the eye (with the midpoint of the eye feature points taken as the eye position) to the vision center point, and normalize it to unit length.
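EPnP is available in OpenCV as the SOLVEPNP_EPNP flag of cv2.solvePnP, so steps S22c-S22d can be sketched as below; the argument names are illustrative, the eye center is assumed to be a known point on the head model, and the vision center point position comes from the screen calibration of S21a:

```python
import cv2
import numpy as np

def ground_truth_gaze(model_pts_3d, image_pts_2d, eye_center_3d,
                      target_3d, camera_matrix, dist_coeffs=None):
    """Align facial feature points to the 3D head model with EPnP,
    then return the unit line-of-sight vector from eye to target."""
    ok, rvec, tvec = cv2.solvePnP(model_pts_3d, image_pts_2d,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("feature point alignment failed")
    R, _ = cv2.Rodrigues(rvec)                          # rotation matrix
    eye_cam = R @ np.asarray(eye_center_3d, float) + tvec.ravel()
    gaze = np.asarray(target_3d, float) - eye_cam       # eye -> vision center point
    return gaze / np.linalg.norm(gaze)                  # normalize to unit length
```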
S23. Obtain the neural network model from the extracted eye images and line-of-sight vectors, which may specifically comprise the following steps:
S23a) Build the deep neural network model: taking the obtained eye images of m × n resolution as input, build a multilayer deep convolutional neural network in which convolutional, downsampling and nonlinear layers are connected in sequence, the last layer being a fully connected layer of dimension f, with the obtained line-of-sight vector v as the output layer, as shown in Fig. 5.
S23b) Train the constructed deep convolutional neural network with the obtained eye image and line-of-sight vector data to obtain the gaze-tracking deep neural network model. The training is based on the back-propagation algorithm and updates the model parameters on the training data by stochastic gradient descent.
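The patent fixes the layer types but not their number or sizes; a minimal PyTorch sketch of such a network, with illustrative layer sizes and a single back-propagation/SGD step, might be:

```python
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Eye image (1 x m x n) -> f-dimensional FC layer -> gaze vector v."""
    def __init__(self, m=36, n=60, f=128):
        super().__init__()
        self.features = nn.Sequential(   # conv -> downsampling -> nonlinearity, twice
            nn.Conv2d(1, 16, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * (m // 4) * (n // 4), f)  # f-dimensional FC layer
        self.out = nn.Linear(f, 3)                        # line-of-sight vector v

    def forward(self, x):
        return self.out(torch.relu(self.fc(self.features(x).flatten(1))))

model = GazeNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent
loss_fn = nn.MSELoss()
eyes, gazes = torch.randn(8, 1, 36, 60), torch.randn(8, 3)  # dummy training batch
opt.zero_grad()
loss = loss_fn(model(eyes), gazes)
loss.backward()                                     # back-propagation
opt.step()                                          # parameter update
```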
By using this deep neural network model to estimate the line-of-sight vector, estimation can be performed quickly and accurately without complex and expensive peripherals such as eye trackers, thereby improving the accuracy of the liveness judgment.
Embodiment 2
Movement of the test subject's head can affect the accuracy of line-of-sight vector estimation and, in turn, the accuracy of the liveness judgment. Therefore, unlike Embodiment 1, liveness detection here also takes images reflecting the subject's head movement into account. To this end, as shown in Fig. 6, the living body verification method provided by this embodiment likewise requires only a device with a camera and a screen, and comprises the following steps:
S31. Generate a vision center point that moves along a preset trajectory, and collect multiple frames of face images while the vision center point moves.
S32. Extract image information from each frame of face image. To eliminate the effect that possible head movement of the test subject may have on verification accuracy, in this embodiment both face pose features and eye images are used as the image information.
S33. Estimate a line-of-sight vector from the extracted face pose features and eye images. Typically, the extracted face pose features and eye images can be input into a trained neural network model associating face pose features and eye images with line-of-sight vectors, which yields the estimated line-of-sight vector.
S34. Obtain an estimated projected trajectory from the estimated line-of-sight vectors.
S35. Compare the projected trajectory with the preset trajectory. When the similarity between the two is greater than or equal to a preset threshold, the test subject can be judged to be a living body; when the similarity is below the threshold, the test subject can be judged not to be a living body.
According to the living body verification method of this embodiment, face images of the test subject are collected in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory against the actual motion trajectory of the vision center point. The judgment can be completed with nothing more than a device having a camera and a screen, without complex peripheral equipment. Gaze tracking requires the subject's line of sight to follow a randomly generated preset trajectory, which is hard to forge and greatly improves verification accuracy and reliability; moreover, by additionally considering head movement of the test subject, liveness detection remains accurate even when the subject's head moves.
Preferably, as shown in Fig. 7, the above step S32 may comprise:
S321. Perform face detection on each frame of face image to obtain the face region.
S322. Annotate facial feature points in the obtained face region.
S323. Normalize the annotated facial feature points so that their coordinates lie within the range [0, 1] × [0, 1], and take the normalized feature points as the face pose features.
S324. From the annotated facial feature points, locate the eyes and crop out the left and right eye images.
S325. Normalize the cropped eye images to a uniform resolution of m × n pixels.
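The patent does not say how the normalization of S323 is carried out; one plausible reading, mapping the landmark bounding box onto [0, 1] × [0, 1] (names illustrative), is:

```python
import numpy as np

def pose_features(landmarks):
    """Normalize facial feature points (k, 2) into [0, 1] x [0, 1]
    and flatten them into a face pose feature vector."""
    pts = np.asarray(landmarks, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    normalized = (pts - mins) / np.maximum(maxs - mins, 1e-9)
    return normalized.ravel()             # e.g. 21 points -> 42-dim feature

features = pose_features(np.random.rand(21, 2) * 480)  # dummy 21 landmarks
```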
Through the above steps, the face pose features and eye images can be extracted from face images easily and quickly. The method of obtaining the neural network model is described in detail below and may comprise the following steps:
S41. Collect a massive number of face images of different people under different gaze directions, which may specifically comprise the following steps:
S41a) Use a mature camera calibration method to obtain the intrinsic parameters of the camera and, at the same time, use a mirror-based calibration method to estimate the three-dimensional position of the screen.
S41b) Using a device with a camera and a screen, randomly generate vision center points on the screen one by one.
S41c) Ask the person being collected to fixate on the vision center point on the screen, and capture a face image once the person confirms that their gaze has settled on the point.
S41d) Repeat steps S41b and S41c to collect a massive number of face images of different people under different gaze directions. Similarly, the collection devices include cameras and screens of different models, such as notebook computers, tablet computers and smartphones; the collected population is large, and the collection environments are varied.
S42. Extract face pose features, eye images and line-of-sight vectors from the collected face images, which may specifically comprise the following steps:
S42a) Perform face detection on the collected face images to obtain the face regions.
S42b) Annotate facial feature points in the obtained face regions.
S42c) Normalize the annotated facial feature points so that their coordinates lie within the range [0, 1] × [0, 1], and take the normalized feature points as the face pose features.
S42d) From the annotated facial feature points, locate the eyes, crop out the left and right eye images, and normalize them to a uniform resolution of m × n pixels.
S42e) Obtain a three-dimensional head model.
S42f) Align the obtained facial feature points to the three-dimensional head model.
S42g) From the alignment of the facial feature points to the head model and the position of the vision center point, compute the line-of-sight vector, i.e., the vector from the eye (with the midpoint of the eye feature points taken as the eye position) to the vision center point, and normalize it to unit length.
S43. Obtain the neural network model from the extracted face pose features, eye images and line-of-sight vectors, which may specifically comprise the following steps:
S43a) Build the deep neural network model: taking the obtained eye images of m × n resolution as input, build a multilayer deep convolutional neural network in which convolutional, downsampling and nonlinear layers are connected in sequence, the last layer being a fully connected layer of dimension f; concatenate the obtained face pose features with this f-dimensional fully connected layer to form an expanded fully connected layer, with the obtained line-of-sight vector v as the output layer, likewise as shown in Fig. 5.
S43b) Train the constructed deep neural network with the obtained face pose feature, eye image and line-of-sight vector data to obtain the gaze-tracking deep neural network model. The training is based on the back-propagation algorithm and updates the model parameters on the training data by stochastic gradient descent.
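Extending the Embodiment 1 sketch, splicing the pose features onto the f-dimensional fully connected layer is a simple concatenation; the layer sizes and the 42-dimensional pose input (21 points × 2 coordinates) remain illustrative assumptions:

```python
import torch
import torch.nn as nn

class PoseGazeNet(nn.Module):
    """Eye image + face pose feature -> expanded FC layer -> gaze vector v."""
    def __init__(self, m=36, n=60, f=128, pose_dim=42):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * (m // 4) * (n // 4), f)
        self.out = nn.Linear(f + pose_dim, 3)      # expanded fully connected layer

    def forward(self, eye, pose):
        h = torch.relu(self.fc(self.features(eye).flatten(1)))
        h = torch.cat([h, pose], dim=1)            # splice pose features onto f-dim layer
        return self.out(h)

model = PoseGazeNet()
v = model(torch.randn(4, 1, 36, 60), torch.randn(4, 42))  # dummy batch
```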
By using this deep neural network model to estimate the line-of-sight vector, estimation can be performed quickly and accurately without complex and expensive peripherals such as eye trackers, thereby improving the accuracy of the liveness judgment.
Embodiment 3
This embodiment discloses a living body verification device which, as shown in Fig. 8, comprises:
a trajectory generation and image collection unit 11, for generating a vision center point that moves along a preset trajectory and collecting multiple frames of face images while the vision center point moves, the test subject's line of sight following the vision center point throughout the collection process;
an image information extraction unit 12, for extracting image information from each collected frame of face image;
a line-of-sight vector estimation unit 13, for estimating a line-of-sight vector from the extracted image information;
a projected trajectory generation unit 14, for obtaining an estimated projected trajectory from the estimated line-of-sight vectors;
a comparison unit 15, for comparing the estimated projected trajectory with the preset trajectory of the vision center point, judging the test subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the test subject not to be a living body when the similarity is below the threshold.
According to the living body verification device of this embodiment, face images of the test subject are collected in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory against the actual motion trajectory of the vision center point. The judgment can be completed with nothing more than a device having a camera and a screen; compared with prior-art liveness verification methods, no complex platform or light-source cooperation is needed, and the judgment is highly accurate and reliable.
When the image information is eye images, the image information extraction unit 12 may comprise:
a face detection subunit, for performing face detection on each frame of face image to obtain the face region;
a feature point annotation subunit, for annotating facial feature points in the obtained face region;
an eye image cropping subunit, for locating the eyes from the annotated feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
As a preferred implementation, when the image information is face pose features and eye images, the image information extraction unit may comprise:
a face detection subunit, for performing face detection on each frame of face image to obtain the face region;
a feature point annotation subunit, for annotating facial feature points in the obtained face region;
a feature point normalization subunit, for normalizing the annotated facial feature points and taking the normalized feature points as the face pose features;
an eye image cropping subunit, for locating the eyes from the annotated feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
In this way, head movement of the test subject is also accounted for, so that liveness detection remains accurate even when the subject's head moves.
Preferably, the line-of-sight vector estimation unit 13 inputs the image information into a neural network model to obtain the estimated line-of-sight vector, and the neural network model may be obtained by the following subunits:
a collection subunit, for collecting a massive number of face images of different people under different gaze directions;
an extraction subunit, for extracting image information and line-of-sight vectors from the collected face images;
a neural network model generation subunit, for obtaining the neural network model from the extracted image information and line-of-sight vectors.
The neural network model is generated in the same way as in Embodiments 1 and 2, which is not repeated here. By using this deep neural network model to estimate the line-of-sight vector, estimation can be performed quickly and accurately without complex and expensive peripherals such as eye trackers, thereby improving the accuracy of the liveness judgment.
Embodiment 4
This embodiment discloses a living body verification system, which can be applied to mobile phones, tablet computers, notebook computers, PCs and any other device with a camera and a screen. As shown in Fig. 9, the system comprises:
a display device 21, for displaying the vision center point 24 moving along the preset trajectory 25; the display device 21 may be, for example, a display screen;
an image collection device 22, for collecting multiple frames of face images of the test subject while the vision center point 24 moves, the subject's line of sight following the vision center point 24 throughout the collection process; the image collection device 22 may be, for example, a camera;
a processor 23, for generating the vision center point 24 moving along the preset trajectory 25; extracting image information from each collected frame of face image; estimating a line-of-sight vector from the extracted image information; obtaining an estimated projected trajectory 26 from the estimated line-of-sight vectors; and comparing the estimated projected trajectory 26 with the preset trajectory 25 of the vision center point, judging the test subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the test subject not to be a living body when the similarity is below the threshold.
According to the living body verification system of this embodiment, face images of the test subject are collected in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory against the actual motion trajectory of the vision center point. The judgment can be completed with nothing more than a device having a camera and a screen, without complex peripheral equipment; because the subject's line of sight must follow a randomly generated preset trajectory, the verification is hard to forge, which greatly improves its accuracy and reliability.
When the image information is eye images, the step of extracting image information from each collected frame of face image may comprise:
performing face detection on each frame of face image to obtain the face region;
annotating facial feature points in the obtained face region;
locating the eyes from the annotated facial feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
As a preferred implementation, when the image information is face pose features and eye images, the step of extracting image information from each collected frame of face image may comprise:
performing face detection on each frame of face image to obtain the face region;
annotating facial feature points in the obtained face region;
normalizing the annotated facial feature points and taking the normalized feature points as the face pose features;
locating the eyes from the annotated facial feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
In this way, head movement of the test subject is also accounted for, so that liveness detection remains accurate even when the subject's head moves.
As a preferred implementation, the extracted eye images, or both the face pose features and eye images, are input into the trained neural network model to obtain the estimated line-of-sight vector; the neural network model is generated in the same way as in Embodiments 1 and 2, which is not repeated here. By using this deep neural network model to estimate the line-of-sight vector, estimation can be performed quickly and accurately without complex and expensive peripherals such as eye trackers, thereby improving the accuracy of the liveness judgment.
Those skilled in the art should understand that embodiments of the invention may be provided as a method, a system or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific way, such that the instructions stored in the computer-readable memory produce a manufacture comprising an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a sequence of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations all fall within the scope defined by the appended claims.

Claims (14)

1. A living body verification method, characterized by comprising:
generating a vision center point that moves along a preset trajectory, and collecting multiple frames of face images of the test subject while the vision center point moves, the subject's line of sight following the vision center point throughout the collection process;
extracting image information from each collected frame of face image;
estimating a line-of-sight vector from the extracted image information;
obtaining an estimated projected trajectory from the estimated line-of-sight vectors;
comparing the estimated projected trajectory with the preset trajectory of the vision center point, judging the test subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the test subject not to be a living body when the similarity is below the threshold.
2. The method according to claim 1, characterized in that estimating a line-of-sight vector from the extracted image information consists in inputting the image information into a neural network model to obtain the estimated line-of-sight vector, the neural network model being obtained by the following steps:
collecting a massive number of face images of different people under different gaze directions;
extracting image information and line-of-sight vectors from the collected face images;
obtaining the neural network model from the extracted image information and line-of-sight vectors.
3. The method according to claim 2, characterized in that the image information comprises eye images, and the step of extracting image information from the collected face images comprises:
performing face detection on each collected face image to obtain the face region;
annotating facial feature points in the obtained face region;
locating the eyes from the annotated facial feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
4. The method according to claim 3, characterized in that the step of obtaining the neural network model from the extracted image information and line-of-sight vectors comprises:
taking the obtained eye images as input, building a multilayer deep convolutional neural network in which convolutional, downsampling and nonlinear layers are connected in sequence, the last layer being a fully connected layer of dimension f, with the obtained line-of-sight vector as the output layer;
training the constructed deep convolutional neural network with the obtained eye images and line-of-sight vectors, the training being based on the back-propagation algorithm and updating the model parameters on the training data by stochastic gradient descent.
5. The method according to claim 2, characterized in that the image information comprises face pose features and eye images, and the step of extracting image information from the collected face images comprises:
performing face detection on each collected face image to obtain the face region;
annotating facial feature points in the obtained face region;
normalizing the annotated facial feature points and taking the normalized feature points as the face pose features;
locating the eyes from the annotated facial feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
6. The method according to claim 5, characterized in that the step of obtaining the neural network model from the extracted image information and line-of-sight vectors comprises:
taking the obtained eye images as input, building a multilayer deep convolutional neural network in which convolutional, downsampling and nonlinear layers are connected in sequence, the last layer being a fully connected layer of dimension f, and concatenating the obtained face pose features with this f-dimensional fully connected layer to form an expanded fully connected layer, with the obtained line-of-sight vector as the output layer;
training the constructed deep convolutional neural network with the obtained face pose features, eye images and line-of-sight vectors, the training being based on the back-propagation algorithm and updating the model parameters on the training data by stochastic gradient descent.
7. The method according to any one of claims 2-6, characterized in that the step of extracting line-of-sight vectors from the collected face images comprises:
obtaining a three-dimensional head model;
aligning the annotated facial feature points to the three-dimensional head model;
computing the line-of-sight vector from the alignment result of the facial feature points to the head model and the position of the vision center point.
8. The method according to any one of claims 1-7, characterized in that the image information comprises eye images, and the step of extracting image information from each collected frame of face image comprises:
performing face detection on each frame of face image to obtain the face region;
annotating facial feature points in the obtained face region;
locating the eyes from the annotated facial feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
9. The method according to any one of claims 1-7, characterized in that the image information comprises face pose features and eye images, and extracting image information from each collected frame of face image comprises:
performing face detection on each frame of face image to obtain the face region;
annotating facial feature points in the obtained face region;
normalizing the annotated facial feature points and taking the normalized feature points as the face pose features;
locating the eyes from the annotated facial feature points, cropping out the left and right eye images, and normalizing the cropped eye images to a common pixel size.
10. A living body verification device, characterized by comprising:
a trajectory generation and image collection unit, for generating a vision center point that moves along a preset trajectory and collecting multiple frames of face images of the test subject while the vision center point moves, the subject's line of sight following the vision center point throughout the collection process;
an image information extraction unit, for extracting image information from each collected frame of face image;
a line-of-sight vector estimation unit, for estimating a line-of-sight vector from the extracted image information;
a projected trajectory generation unit, for obtaining an estimated projected trajectory from the estimated line-of-sight vectors;
a comparison unit, for comparing the estimated projected trajectory with the preset trajectory of the vision center point, judging the test subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the test subject not to be a living body when the similarity is below the threshold.
11. The device according to claim 10, characterized in that the line-of-sight vector estimation unit inputs the image information into a neural network model to obtain the estimated line-of-sight vector, the neural network model being obtained by the following subunits:
a collection subunit, for collecting a massive number of face images of different people under different gaze directions;
an extraction subunit, for extracting image information and line-of-sight vectors from the collected face images;
a neural network model generation subunit, for obtaining the neural network model from the extracted image information and line-of-sight vectors.
12. The device according to claim 10 or 11, characterized in that the image information comprises eye images, and the image information extraction unit comprises:
A face detection subunit, configured to perform face detection on each frame of face image to obtain a face region;
A facial feature point labeling subunit, configured to label facial feature points in the obtained face region;
An eye image cropping subunit, configured to locate the eye positions according to the labeled facial feature points, crop out the left and right eye images, and scale the cropped eye images to the same pixel size.
13. The device according to claim 10 or 11, characterized in that the image information comprises face pose features and eye images, and the image information extraction unit comprises:
A face detection subunit, configured to perform face detection on each frame of face image to obtain a face region;
A facial feature point labeling subunit, configured to label facial feature points in the obtained face region;
A facial feature point normalization subunit, configured to normalize the labeled facial feature points and use the normalized facial feature points as the face pose features;
An eye image cropping subunit, configured to locate the eye positions according to the labeled facial feature points, crop out the left and right eye images, and scale the cropped eye images to the same pixel size.
14. A living body verification system, characterized by comprising:
A display device, configured to display a visual center point moving along a predetermined trajectory;
An image acquisition device, configured to collect multiple frames of face images of the subject while the visual center point is moving, the subject's sight line following the motion of the visual center point throughout the collection process;
A processor, configured to generate the visual center point moving along the predetermined trajectory; extract image information from each collected frame of face image; estimate sight line vectors from the extracted image information; obtain an estimated projected trajectory from the estimated sight line vectors; and compare the estimated projected trajectory with the predetermined trajectory of the visual center point, judging the subject to be a living body when the similarity between the two is greater than or equal to a predetermined threshold, and judging the subject not to be a living body when the similarity between the two is less than the predetermined threshold.
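Composing the pieces, a sketch of how the system's processor could run end to end. The pinhole projection of the sight line onto the screen plane and the per-frame estimator interface are assumptions, and is_live refers to the comparison-unit sketch above.

import numpy as np

def project_to_screen(eye_cam, gaze_vec, screen_z=0.0):
    """Intersect the sight line with the screen plane z = screen_z in
    camera coordinates (a simplifying geometric assumption; gaze_vec
    is assumed not to be parallel to the screen)."""
    t = (screen_z - eye_cam[2]) / gaze_vec[2]
    return eye_cam[:2] + t * gaze_vec[:2]

def verify(frames, target_points, estimate):
    """estimate(frame) -> (eye_cam, gaze_vec), e.g. the solvePnP alignment
    for the eye center plus the trained network for the gaze direction."""
    projected = np.array([project_to_screen(*estimate(f)) for f in frames])
    return is_live(projected, np.asarray(target_points))  # comparison-unit sketch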
CN201510756011.7A 2015-11-09 2015-11-09 Living body verification method, device and system Active CN105426827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510756011.7A CN105426827B (en) 2015-11-09 2015-11-09 Living body verification method, device and system

Publications (2)

Publication Number Publication Date
CN105426827A true CN105426827A (en) 2016-03-23
CN105426827B CN105426827B (en) 2019-03-08

Family

ID=55505027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510756011.7A Active CN105426827B (en) 2015-11-09 2015-11-09 Living body verification method, device and system

Country Status (1)

Country Link
CN (1) CN105426827B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110007949A1 (en) * 2005-11-11 2011-01-13 Global Rainmakers, Inc. Methods for performing biometric recognition of a human eye and corroboration of same
CN103400122A * 2013-08-20 2013-11-20 江苏慧视软件科技有限公司 Method for rapidly recognizing living body faces
CN103440479A (en) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Method and system for detecting living body human face
CN103593598A (en) * 2013-11-25 2014-02-19 上海骏聿数码科技有限公司 User online authentication method and system based on living body detection and face recognition
CN104966070A (en) * 2015-06-30 2015-10-07 北京汉王智远科技有限公司 Face recognition based living body detection method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN LIN: "Research on Liveness Detection Technology in Face Recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956572A (en) * 2016-05-15 2016-09-21 北京工业大学 Living body face detection method based on convolutional neural network
CN106599883A (en) * 2017-03-08 2017-04-26 王华锋 Face recognition method capable of extracting multi-level image semantics based on CNN (convolutional neural network)
CN106599883B (en) * 2017-03-08 2020-03-17 王华锋 CNN-based multilayer image semantic face recognition method
CN108734057A (en) * 2017-04-18 2018-11-02 北京旷视科技有限公司 Living body detection method, apparatus and computer storage medium
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 Identity verification method and device
CN108229284B (en) * 2017-05-26 2021-04-09 北京市商汤科技开发有限公司 Sight tracking and training method and device, system, electronic equipment and storage medium
CN108229284A (en) * 2017-05-26 2018-06-29 北京市商汤科技开发有限公司 Sight tracking and training method and device, system, electronic equipment and storage medium
CN108875469A (en) * 2017-06-14 2018-11-23 北京旷视科技有限公司 Living body detection and identity authentication method, device and computer storage medium
CN107590429A (en) * 2017-07-20 2018-01-16 阿里巴巴集团控股有限公司 Method and device for verification based on eyeprint features
CN107545248B (en) * 2017-08-24 2021-04-02 北京小米移动软件有限公司 Biological characteristic living body detection method, device, equipment and storage medium
CN107545248A (en) * 2017-08-24 2018-01-05 北京小米移动软件有限公司 Biological characteristic living body detection method, device, equipment and storage medium
CN109726613A (en) * 2017-10-27 2019-05-07 虹软科技股份有限公司 Method and apparatus for detection
US11017557B2 (en) 2017-10-27 2021-05-25 Arcsoft Corporation Limited Detection method and device thereof
CN109726613B (en) * 2017-10-27 2021-09-10 虹软科技股份有限公司 Method and device for detection
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Living body detection method, computer device and computer-readable storage medium
CN107992842B (en) * 2017-12-13 2020-08-11 深圳励飞科技有限公司 Living body detection method, computer device, and computer-readable storage medium
CN108829247B (en) * 2018-06-01 2022-11-15 北京市商汤科技开发有限公司 Interaction method and device based on sight tracking and computer equipment
CN108829247A (en) * 2018-06-01 2018-11-16 北京市商汤科技开发有限公司 Interaction method and device based on sight tracking and computer equipment
CN108921209A (en) * 2018-06-21 2018-11-30 杭州骑轻尘信息技术有限公司 Image identification method, device and electronic equipment
CN110853073A (en) * 2018-07-25 2020-02-28 北京三星通信技术研究有限公司 Method, device, equipment and system for determining attention point and information processing method
CN109376595A (en) * 2018-09-14 2019-02-22 杭州宇泛智能科技有限公司 Monocular RGB camera living body detection method and system based on human eye attention
WO2020063000A1 (en) * 2018-09-29 2020-04-02 北京市商汤科技开发有限公司 Neural network training and line of sight detection methods and apparatuses, and electronic device
CN110969061A (en) * 2018-09-29 2020-04-07 北京市商汤科技开发有限公司 Neural network training method, neural network training device, visual line detection method, visual line detection device and electronic equipment
CN109635554A (en) * 2018-11-30 2019-04-16 努比亚技术有限公司 Red packet verification method, terminal and computer storage medium
CN111291607B (en) * 2018-12-06 2021-01-22 广州汽车集团股份有限公司 Driver distraction detection method, driver distraction detection device, computer equipment and storage medium
CN111291607A (en) * 2018-12-06 2020-06-16 广州汽车集团股份有限公司 Driver distraction detection method, driver distraction detection device, computer equipment and storage medium
CN109711309B (en) * 2018-12-20 2020-11-27 北京邮电大学 Method for automatically identifying whether the eyes in a portrait picture are closed
CN109711309A (en) * 2018-12-20 2019-05-03 北京邮电大学 Method for automatically identifying whether the eyes in a portrait picture are closed
CN109886080A (en) * 2018-12-29 2019-06-14 深圳云天励飞技术有限公司 Face living body detection method, device, electronic equipment and readable storage medium
WO2020164284A1 (en) * 2019-02-12 2020-08-20 平安科技(深圳)有限公司 Method and apparatus for recognising living body based on planar detection, terminal, and storage medium
CN111967293A (en) * 2020-06-22 2020-11-20 云知声智能科技股份有限公司 Face authentication method and system combining voiceprint recognition and attention detection
CN111881431A (en) * 2020-06-28 2020-11-03 百度在线网络技术(北京)有限公司 Man-machine verification method, device, equipment and storage medium
CN111881431B (en) * 2020-06-28 2023-08-22 百度在线网络技术(北京)有限公司 Man-machine verification method, device, equipment and storage medium
US11989272B2 (en) 2020-06-28 2024-05-21 Baidu Online Network Technology (Beijing) Co., Ltd. Human-machine verification method, device and storage medium
CN112287909A (en) * 2020-12-24 2021-01-29 四川新网银行股份有限公司 Double-random living body detection method for randomly generating detection points and interactive elements
CN112633217A (en) * 2020-12-30 2021-04-09 苏州金瑞阳信息科技有限责任公司 Face recognition living body detection method for calculating gaze direction based on a three-dimensional eyeball model
CN113505756A (en) * 2021-08-23 2021-10-15 支付宝(杭州)信息技术有限公司 Face living body detection method and device

Also Published As

Publication number Publication date
CN105426827B (en) 2019-03-08

Similar Documents

Publication Publication Date Title
CN105426827A (en) Living body verification method, device and system
EP3373202B1 (en) Verification method and system
Dikovski et al. Evaluation of different feature sets for gait recognition using skeletal data from Kinect
KR102106135B1 (en) Apparatus and method for providing application service by using action recognition
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN108154075A (en) Population analysis method via single learning
CN108369785A (en) Activity determination
US20100208038A1 (en) Method and system for gesture recognition
CN105809144A (en) Gesture recognition system and method adopting action segmentation
CN109298785A (en) Human-machine joint control system and method for monitoring equipment
CN106033601A (en) Method and apparatus for detecting abnormal situation
CN110458895A (en) Image coordinate system conversion method, device, equipment and storage medium
CN106960473B (en) Behavior perception system and method
CN110211222B (en) AR immersive tour guide method and device, storage medium and terminal equipment
CN110276239A (en) Eyeball tracking method, electronic device and non-transient computer-readable recording medium
KR20200056602A (en) Apparatus and method for recognizing movement of object
CN107480586A (en) Biometric photo spoofing attack detection method based on facial feature point displacement
CN113378649A (en) Identity, position and action recognition method, system, electronic equipment and storage medium
CN108875506A (en) Face shape point-tracking method, device and system and storage medium
CN111241926A (en) Attendance checking and learning condition analysis method, system, equipment and readable storage medium
CN112700568B (en) Identity authentication method, equipment and computer readable storage medium
RU2005100267A (en) METHOD AND SYSTEM OF AUTOMATIC VERIFICATION OF THE PRESENCE OF A LIVING FACE OF A HUMAN IN BIOMETRIC SECURITY SYSTEMS
CN112149517A (en) Face attendance checking method and system, computer equipment and storage medium
CN114639168B (en) Method and system for recognizing running gesture
CN104318228A (en) Method for acquiring optimal visual field through head-mounted video recording device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant