CN105426827B - Living body verification method, device and system - Google Patents


Info

Publication number
CN105426827B
CN105426827B
Authority
CN
China
Prior art keywords
image
human face
image information
sight line vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510756011.7A
Other languages
Chinese (zh)
Other versions
CN105426827A (en)
Inventor
郭亨凯
彭义刚
李�诚
吴立威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201510756011.7A priority Critical patent/CN105426827B/en
Publication of CN105426827A publication Critical patent/CN105426827A/en
Application granted granted Critical
Publication of CN105426827B publication Critical patent/CN105426827B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Abstract

The invention discloses a living body verification method, device and system. The method comprises the following steps: generating a visual center point that moves along a preset trajectory, and capturing multiple frames of face images of a subject while the visual center point moves, the subject's gaze following the visual center point throughout the capture; extracting image information from each captured face image; estimating gaze vectors from the extracted image information; deriving an estimated projection trajectory from the estimated gaze vectors; and comparing the estimated projection trajectory with the preset trajectory of the visual center point, judging the subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the subject not to be a living body when the similarity is less than the preset threshold.

Description

Living body verification method, device and system
Technical field
The present invention relates to the field of computer vision, and in particular to a living body verification method, device and system.
Background art
In recent years, face recognition technology has made significant progress. However, in many applications, such as mobile face-recognition payment and remote video account opening, verifying a face image also requires judging whether the image comes from a living person or from a photograph or a recorded video.
Currently used face liveness verification methods mainly fall into the following categories:
1) Acquiring depth information of the face image, reconstructing a three-dimensional model and matching it against a three-dimensional template. The drawback of this method is that it is constrained by environmental conditions: complete depth information may be hard to obtain, and the accuracy of three-dimensional modeling still needs improvement.
2) Comparing certain feature points or statistical features of the face image against a genuine-face template based on facial texture detail. However, when the resolution of the image under test is low or the image is incomplete, detailed texture information cannot be obtained accurately and this method is not applicable.
3) Chinese patent application CN201210331141.2 discloses a liveness detection method that captures multiple frames of face images, locates face key points/blocks in each frame, and judges liveness by whether the average difference value exceeds a preset threshold. Chinese patent application CN201510243778.X also discloses a liveness detection method that captures multiple frames of face images and judges liveness by whether the changes in key-point attribute values follow the change pattern of a real face. However, such methods detect the global motion of the face, a single biometric trait that does not change easily; when they are combined with face-based identity authentication, if an attacker has maliciously collected a large number of face motion videos of the person and replays them for detection, the reliability of such methods drops.
4) The method disclosed in Chinese patent application CN201310363154.2 determines liveness by detecting facial actions such as blinking in the image. However, simple actions such as blinking are easy to forge, which reduces the anti-spoofing reliability of such methods.
Summary of the invention
The technical problem to be solved by the present invention is that existing liveness verification schemes require complicated equipment and their accuracy and reliability are not high.
To this end, an embodiment of the present invention proposes a living body verification method, comprising: generating a visual center point that moves along a preset trajectory, and capturing multiple frames of face images of a subject while the visual center point moves, the subject's gaze following the visual center point throughout the capture; extracting image information from each captured face image, comprising: performing face detection on each face image to obtain a face region; determining face feature points in the obtained face region; obtaining the eye positions from the face feature points and cropping out the left and right eye images, wherein the image information comprises the eye images; estimating gaze vectors from the extracted image information; deriving an estimated projection trajectory from the estimated gaze vectors; and comparing the estimated projection trajectory with the preset trajectory of the visual center point, judging the subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the subject not to be a living body when the similarity is less than the preset threshold.
Preferably, estimating gaze vectors from the extracted image information comprises inputting the image information into a neural network model to obtain the estimated gaze vectors, the neural network model being obtained by the following steps: collecting a large number of face images of different people under different gaze directions; extracting image information and gaze vectors from the collected face images; and obtaining the neural network model from the extracted image information and gaze vectors.
Preferably, in extracting image information and gaze vectors from the collected face images, the step of extracting image information comprises: performing face detection on each collected face image to obtain a face region; determining face feature points in the obtained face region; and obtaining the eye positions from the face feature points and cropping out the left and right eye images.
Preferably, the step of extracting image information from the collected face images further comprises: normalizing the cropped eye images to the same pixel size.
Preferably, the step of obtaining the neural network model from the extracted image information and gaze vectors comprises: building a multi-layer deep convolutional neural network with the extracted eye images as input, the network consisting of convolutional layers, down-sampling layers and non-linear layers connected in sequence, with an f-dimensional fully connected layer as the last layer and the extracted gaze vectors as the output layer; and training the built deep convolutional neural network with the extracted eye images and gaze vectors, the training being based on the back-propagation algorithm and updating the model parameters on the training data by stochastic gradient descent.
Preferably, in extracting image information and gaze vectors from the collected face images, the step of extracting image information comprises: performing face detection on each collected face image to obtain a face region; determining face feature points in the obtained face region; normalizing the face feature points; and obtaining the eye positions from the normalized face feature points and cropping out the left and right eye images.
Preferably, the step of extracting image information from the collected face images further comprises: normalizing the cropped eye images to the same pixel size.
Preferably, the step of obtaining the neural network model from the extracted image information and gaze vectors comprises: building a multi-layer deep convolutional neural network with the extracted eye images as input, the network consisting of convolutional layers, down-sampling layers and non-linear layers connected in sequence, with an f-dimensional fully connected layer as the last layer; concatenating the normalized face feature points, as a face pose feature, with the f-dimensional fully connected layer to form an extended fully connected layer, with the extracted gaze vectors as the output layer; and training the built deep convolutional neural network with the extracted face pose features, eye images and gaze vectors, the training being based on the back-propagation algorithm and updating the model parameters on the training data by stochastic gradient descent.
Preferably, in extracting image information and gaze vectors from the collected face images, the step of extracting gaze vectors comprises: obtaining a three-dimensional head model; aligning the face feature points to the three-dimensional head model; and computing the gaze vector from the alignment result of the face feature points and the three-dimensional head model together with the position of the visual center point.
Preferably, the step of extracting image information from each captured face image further comprises: normalizing the cropped eye images to the same pixel size.
Preferably, the image information further comprises a face pose feature, and extracting image information from each captured face image further comprises: normalizing the face feature points.
Preferably, extracting image information from each captured face image further comprises: normalizing the cropped eye images to the same pixel size.
An embodiment of the present invention also provides a living body verification device, comprising: a trajectory generation and image capture unit for generating a visual center point that moves along a preset trajectory and capturing multiple frames of face images of a subject while the visual center point moves, the subject's gaze following the visual center point throughout the capture; an image information extraction unit for extracting image information from each captured face image, comprising: a face detection sub-unit for performing face detection on each face image to obtain a face region; a face feature point labeling sub-unit for determining face feature points in the obtained face region; and an eye image cropping sub-unit for obtaining the eye positions from the face feature points and cropping out the left and right eye images, wherein the image information comprises the eye images; a gaze vector estimation unit for estimating gaze vectors from the extracted image information; a projection trajectory generation unit for deriving an estimated projection trajectory from the estimated gaze vectors; and a comparison unit for comparing the estimated projection trajectory with the preset trajectory of the visual center point, judging the subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the subject not to be a living body when the similarity is less than the preset threshold.
Preferably, the gaze vector estimation unit inputs the image information into a neural network model to obtain the estimated gaze vectors, the neural network model being obtained by the following sub-units: a collection sub-unit for collecting a large number of face images of different people under different gaze directions; an extraction sub-unit for extracting image information and gaze vectors from the collected face images; and a neural network model generation sub-unit for obtaining the neural network model from the extracted image information and gaze vectors.
Preferably, the eye image cropping sub-unit is further used to normalize the cropped eye images to the same pixel size.
Preferably, the image information further comprises a face pose feature, and the image information extraction unit further comprises: a face feature point normalization sub-unit for normalizing the face feature points.
Preferably, the eye image cropping sub-unit is further used to normalize the cropped eye images to the same pixel size.
An embodiment of the present invention further provides a living body verification system, comprising: a display device for displaying the visual center point that moves along a preset trajectory; an image capture device for capturing multiple frames of face images of a subject while the visual center point moves, the subject's gaze following the visual center point throughout the capture; and a processor for generating the visual center point that moves along the preset trajectory, extracting image information from each captured face image (comprising: performing face detection on each face image to obtain a face region; determining face feature points in the obtained face region; and obtaining the eye positions from the face feature points and cropping out the left and right eye images, wherein the image information comprises the eye images), estimating gaze vectors from the extracted image information, deriving an estimated projection trajectory from the estimated gaze vectors, and comparing the estimated projection trajectory with the preset trajectory of the visual center point, judging the subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the subject not to be a living body when the similarity is less than the preset threshold.
According to the living body verification method, device and system of the embodiments of the present invention, face images of the subject are captured in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory with the actual motion trajectory of the visual center point. The judgment can be completed with nothing more than a device having a camera and a screen, without complicated peripheral equipment. Because gaze tracking makes the subject's gaze follow a randomly generated preset trajectory, the scheme is difficult to forge, greatly improving the accuracy and reliability of liveness verification.
According to the living body verification method, device and system of the embodiments of the present invention, a multi-layer deep convolutional neural network is built with the extracted image information as input, consisting of convolutional layers, down-sampling layers and non-linear layers connected in sequence, with an f-dimensional fully connected layer as the last layer and the extracted gaze vectors as the output layer; the network is trained with the extracted image information and gaze vectors to obtain the neural network model, so that gaze vectors can be estimated quickly and accurately from the extracted image information, improving the accuracy of the liveness judgment.
According to the living body verification method, device and system of the embodiments of the present invention, choosing eye images as the image information simplifies computation and allows detection to be completed quickly. In a further preferred embodiment, both the face pose feature and the eye images are chosen as the image information, which additionally accounts for movement of the subject's head: liveness detection remains accurate even when the subject's head moves.
Brief description of the drawings
The features and advantages of the present invention will be understood more clearly with reference to the accompanying drawings, which are schematic and should not be construed as limiting the present invention in any way. In the drawings:
Fig. 1 shows a flow chart of a living body verification method according to an embodiment of the present invention;
Fig. 2 shows a flow chart of the step of extracting image information from each captured face image according to an embodiment of the present invention;
Fig. 3 shows a schematic diagram of 21 face feature points;
Fig. 4 shows a flow chart of a method of obtaining a neural network model according to an embodiment of the present invention;
Fig. 5 shows a schematic diagram of a neural network model according to an embodiment of the present invention;
Fig. 6 shows a flow chart of a living body verification method according to another embodiment of the present invention;
Fig. 7 shows a flow chart of the step of extracting image information from each captured face image according to another embodiment of the present invention;
Fig. 8 shows a schematic diagram of a living body verification device according to an embodiment of the present invention;
Fig. 9 shows a schematic diagram of a living body verification system according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
As shown in Fig. 1, the living body verification method provided in this embodiment requires only a device with a camera and a screen, and comprises the following steps:
S11. Generate a visual center point that moves along a preset trajectory, and capture multiple frames of face images while the visual center point moves. During capture, the subject's gaze must keep following the visual center point and stay fixed on it. The point moves on the screen according to a preset trajectory and motion mode; the trajectory may be a left-to-right straight line across the screen, a top-to-bottom straight line, a circular path, and so on, but is not limited to these. To make forgery harder, the trajectory may also be a random trajectory generated anew for each verification. The point may move at a constant or a variable speed. During the test, face images are captured in real time; the capture may be continuous or at intervals, and the intervals may be equal or unequal. A sketch of one way to generate such a trajectory is given below.
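As a non-limiting illustration of step S11, the following sketch generates a preset trajectory as a sequence of on-screen sample positions. The trajectory families (line, circle, random waypoints), the point count and the screen size are assumptions for illustration only; the embodiment itself does not fix them.

```python
import numpy as np

def generate_trajectory(kind: str, n_points: int, width: int, height: int,
                        rng: np.random.Generator) -> np.ndarray:
    """Return an (n_points, 2) array of pixel positions for the visual center point."""
    t = np.linspace(0.0, 1.0, n_points)
    if kind == "line_lr":                      # straight line, left to right
        y = rng.uniform(0.2, 0.8) * height
        pts = np.stack([t * width, np.full_like(t, y)], axis=1)
    elif kind == "circle":                     # circular path around the screen center
        r = 0.3 * min(width, height)
        ang = rng.uniform(0, 2 * np.pi) + 2 * np.pi * t
        pts = np.stack([width / 2 + r * np.cos(ang),
                        height / 2 + r * np.sin(ang)], axis=1)
    else:                                      # random path through a few waypoints
        way = rng.uniform([0, 0], [width, height], size=(4, 2))
        xp = np.linspace(0, 1, len(way))
        pts = np.stack([np.interp(t, xp, way[:, 0]),
                        np.interp(t, xp, way[:, 1])], axis=1)
    return pts

rng = np.random.default_rng()
trajectory = generate_trajectory(rng.choice(["line_lr", "circle", "random"]),
                                 n_points=120, width=1920, height=1080, rng=rng)
```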
S12. Extract image information from each captured face image. In this embodiment, eye images are chosen as the image information, which simplifies computation and allows detection to be completed quickly. Of course, more information may be extracted from the captured face images to improve verification accuracy.
S13. Estimate gaze vectors from the extracted eye images. In general, this step can be implemented with a neural network model: the extracted eye images are input into a trained neural network model that associates eye images with gaze vectors, which yields the estimated gaze vectors quickly and accurately.
S14. Derive an estimated projection trajectory from the estimated gaze vectors. In general, the motion trajectory of the gaze is first obtained from the gaze vector estimated for each frame; projecting that trajectory onto the plane of the screen then yields the estimated projection trajectory of the gaze on the screen. One possible realization is sketched below.
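As a non-limiting illustration of step S14, one possible realization of the projection is a simple ray-plane intersection: each estimated gaze vector is treated as a ray from the three-dimensional eye position and intersected with the screen plane known from calibration. The camera-frame coordinate convention assumed below is an illustration choice, not prescribed by the embodiment.

```python
import numpy as np

def project_gaze_to_screen(eye_pos: np.ndarray, gaze_dir: np.ndarray,
                           plane_point: np.ndarray, plane_normal: np.ndarray):
    """Intersect the gaze ray (eye_pos + t * gaze_dir) with the screen plane.

    All quantities are 3-D vectors in the camera coordinate frame; plane_point
    and plane_normal come from the screen calibration. Returns the 3-D
    intersection, or None when the gaze is (nearly) parallel to the screen.
    """
    d = gaze_dir / np.linalg.norm(gaze_dir)
    denom = plane_normal @ d
    if abs(denom) < 1e-6:
        return None
    t = plane_normal @ (plane_point - eye_pos) / denom
    if t <= 0:                       # screen is behind the eye: invalid sample
        return None
    return eye_pos + t * d

# The per-frame intersections, converted to screen pixel coordinates, form the
# estimated projection trajectory that step S15 compares against the preset one.
```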
S15. Compare the estimated projection trajectory with the preset trajectory. When the similarity between the two is greater than or equal to a preset threshold, the subject can be judged to be a living body; when the similarity is less than the preset threshold, the subject can be judged not to be a living body. A sketch of one possible similarity measure follows.
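The embodiment does not prescribe a particular similarity measure. A minimal sketch, assuming both trajectories are resampled to the same number of points and compared by a distance-based score normalized by the screen diagonal:

```python
import numpy as np

def trajectory_similarity(estimated: np.ndarray, preset: np.ndarray,
                          screen_diag: float) -> float:
    """Similarity in [0, 1] between two (n, 2) trajectories of equal length.

    Mean point-wise distance, normalized by the screen diagonal and mapped so
    that identical trajectories score 1.0. This metric is an assumption for
    illustration; correlation- or DTW-based scores would fit the scheme too.
    """
    dist = np.linalg.norm(estimated - preset, axis=1).mean()
    return max(0.0, 1.0 - dist / screen_diag)

# Liveness decision as in S15, with a hypothetical threshold:
# is_live = trajectory_similarity(est, preset, screen_diag=2203.0) >= 0.8
```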
The living body verification method of this embodiment realizes liveness verification based on gaze tracking. Existing gaze tracking schemes, however, generally require the cooperation of complicated and expensive peripheral equipment such as eye trackers, light sources and platforms, which makes them impractical for liveness detection, let alone liveness detection on mobile devices; and existing gaze tracking schemes that do not use peripherals such as eye trackers are usually not accurate enough to be applied to liveness detection.
In the living body verification method disclosed in the embodiment of the present invention, face images of the subject are captured in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory with the actual motion trajectory of the visual center point. The judgment can be completed with nothing more than a device having a camera and a screen, without complicated peripheral equipment; gaze tracking makes the subject's gaze follow a randomly generated preset trajectory, which is difficult to forge, greatly improving the accuracy and reliability of liveness verification. Moreover, in the liveness verification process disclosed in the embodiment of the present invention, no voice prompt is needed between the verification device and the user, and the user does not have to speak.
Preferably, as shown in Fig. 2, the above step S12 may comprise:
S121. Perform face detection on each face image to obtain a face region.
S122. Label face feature points in the obtained face region. Face feature points can be labeled, for example, using a facial landmark alignment method; see Supervised Descent Method and its Applications to Face Alignment, Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference, pp. 532-539. Fig. 3 shows a schematic diagram of 21 face feature points: 6 for each eye, 4 for the nose and 5 for the mouth. Those skilled in the art will appreciate that using more or fewer face feature points is also feasible.
S123. Obtain the eye positions from the labeled face feature points and crop out the left and right eye images.
S124. Normalize the cropped eye images to a unified resolution of m × n pixels. A sketch of this extraction pipeline follows.
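As a non-limiting sketch of steps S121 to S124, the code below uses an OpenCV Haar cascade in place of the face detector and leaves the 21-point landmark model abstract; the per-eye landmark indices, the 4-pixel margin and the m × n = 60 × 36 crop size are assumptions for illustration.

```python
import cv2
import numpy as np

EYE_W, EYE_H = 60, 36   # hypothetical m x n output size

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_eyes(gray: np.ndarray, detect_landmarks):
    """S121-S124: detect the face, locate landmarks, crop and resize both eyes.

    detect_landmarks is a placeholder for any 21-point landmark model (e.g. one
    trained with the Supervised Descent Method cited above); it is assumed to
    return a (21, 2) array with points 0-5 on one eye and 6-11 on the other.
    """
    faces = face_detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                               # S121: face region
    pts = detect_landmarks(gray[y:y + h, x:x + w])      # S122: feature points
    eyes = []
    for idx in (slice(0, 6), slice(6, 12)):             # S123: per-eye landmarks
        x0, y0 = np.maximum(pts[idx].min(axis=0).astype(int) - 4, 0)
        x1, y1 = pts[idx].max(axis=0).astype(int) + 4   # 4 px margin (assumed)
        patch = gray[y + y0:y + y1, x + x0:x + x1]
        eyes.append(cv2.resize(patch, (EYE_W, EYE_H)))  # S124: unify to m x n
    return eyes
```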
Through the above steps, eye images can be extracted from face images easily and quickly. The method of obtaining the neural network model is described in detail below; as shown in Fig. 4, it may comprise the following steps:
S21. Collect a large number of face images of different people under different gaze directions, which may specifically comprise the following steps:
S21a) Obtain the internal parameters of the camera using a mature camera calibration method, and at the same time estimate the three-dimensional position of the screen using a mirror-based calibration method (a calibration sketch follows this list).
S21b) Using a device with a camera and a screen, randomly generate visual center points on the screen one at a time.
S21c) Ask the person being collected to fix their gaze on the visual center point on the screen; when the person confirms that their gaze is fixed on the point, capture the face image at that moment.
S21d) Repeat steps S21b and S21c to collect a large number of face images of different people under different gaze directions. The collection devices include cameras and screens of different models, such as laptops, tablets and smartphones; many different people are collected, in varied environments.
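For step S21a, the camera's internal parameters can be obtained with a standard chessboard calibration; the sketch below uses OpenCV's calibrateCamera, with the board size and image paths as placeholders. The mirror-based calibration of the screen's three-dimensional position is not shown.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)                 # inner chessboard corners (hypothetical board)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):          # placeholder calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, BOARD)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# K is the 3x3 intrinsic matrix, dist the lens-distortion coefficients
_, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
```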
S22. Extract eye images and gaze vectors from the collected face images, which may specifically comprise the following steps:
S22a) Extract eye images from the collected face images using the same method as in step S12, which is not repeated here.
S22b) Obtain a three-dimensional head model, which may be a head model of the specific person built in advance, or a predefined average head model used for all people.
S22c) Align the labeled face feature points to the three-dimensional head model using an existing algorithm such as EPnP, and optimize successively to obtain the alignment result of the face feature points on the head model.
S22d) From the alignment result of the face feature points and the three-dimensional head model together with the position of the visual center point, compute the gaze vector, i.e. the vector from the eye (taking the middle eye feature point as the eye position) to the visual center point, and normalize this vector to unit length.
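A sketch of steps S22c and S22d under stated assumptions: EPnP (here via OpenCV's solvePnP) aligns the two-dimensional feature points to corresponding points on the three-dimensional head model, the eye-center model point is transformed into camera coordinates, and the unit vector toward the known three-dimensional position of the visual center point is taken as the ground-truth gaze vector. The head-model point coordinates are placeholders supplied by the caller.

```python
import cv2
import numpy as np

def ground_truth_gaze(landmarks_2d: np.ndarray, model_pts_3d: np.ndarray,
                      eye_center_3d: np.ndarray, target_3d: np.ndarray,
                      K: np.ndarray) -> np.ndarray:
    """S22c-S22d: align landmarks to the 3-D head model, then compute the unit
    vector from the eye to the visual center point (camera coordinates).

    model_pts_3d are head-model points corresponding one-to-one to
    landmarks_2d; eye_center_3d is the model point between the eye landmarks;
    target_3d is the visual center point's 3-D position from the screen
    calibration; K is the intrinsic matrix from the camera calibration.
    """
    ok, rvec, tvec = cv2.solvePnP(model_pts_3d, landmarks_2d, K, None,
                                  flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("EPnP alignment failed")
    R, _ = cv2.Rodrigues(rvec)                      # head pose in camera frame
    eye_cam = R @ eye_center_3d + tvec.ravel()      # eye position, camera frame
    v = target_3d - eye_cam
    return v / np.linalg.norm(v)                    # normalize to unit length
```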
S23. Obtain the neural network model from the extracted eye images and gaze vectors, which may specifically comprise the following steps:
S23a) Build a deep neural network model: with the extracted eye images of m × n resolution as input, build a multi-layer deep convolutional neural network consisting of convolutional layers, down-sampling layers and non-linear layers connected in sequence, with an f-dimensional fully connected layer as the last layer and the extracted gaze vector v as the output layer, as shown in Fig. 5.
S23b) Train the built deep convolutional neural network with the extracted eye image and gaze vector data to obtain the deep neural network model for gaze tracking. The training is based on the back-propagation algorithm and updates the model parameters on the training data by stochastic gradient descent.
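A minimal PyTorch sketch of the network of S23a and the training of S23b. The embodiment fixes only the layer types (convolution, down-sampling, non-linearity, an f-dimensional fully connected layer, a gaze-vector output); the channel counts, kernel sizes, f = 128, the 36 × 60 input size and the MSE loss are assumptions for illustration.

```python
import torch
import torch.nn as nn

F_DIM = 128                                    # hypothetical f

class GazeNet(nn.Module):
    """Conv / down-sample / non-linear blocks, an f-dim FC layer, 3-D gaze output."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(32, 64, 3), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc_f = nn.Linear(64 * 7 * 13, F_DIM)  # f-dimensional FC layer
        self.out = nn.Linear(F_DIM, 3)             # gaze vector v

    def forward(self, eyes):                       # eyes: (B, 1, 36, 60)
        x = self.features(eyes).flatten(1)
        return self.out(torch.relu(self.fc_f(x)))

def train(model, loader, epochs=10):
    """S23b: back-propagation with stochastic gradient descent."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.MSELoss()                     # assumed regression loss
    for _ in range(epochs):
        for eyes, gaze in loader:
            opt.zero_grad()
            loss_fn(model(eyes), gaze).backward()
            opt.step()
```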
By estimating gaze vectors with this deep neural network model, gaze vector estimation can be carried out quickly and accurately without complicated and expensive peripheral equipment such as eye trackers, improving the accuracy of the liveness judgment.
Embodiment 2
When the subject's head moves, the accuracy of gaze vector estimation is affected, which in turn affects the accuracy of the liveness judgment. Therefore, unlike Embodiment 1, liveness detection here must also account for images in which the subject's head moves. To this end, as shown in Fig. 6, the living body verification method provided in this embodiment likewise requires only a device with a camera and a screen, and comprises the following steps:
S31. Generate a visual center point that moves along a preset trajectory, and capture multiple frames of face images while the visual center point moves.
S32. Extract image information from each face image. To eliminate the effect that possible head movement of the subject would have on liveness verification accuracy, in this embodiment both the face pose feature and the eye images are chosen as the image information.
S33. Estimate gaze vectors from the extracted face pose feature and eye images. In general, the extracted face pose feature and eye images can be input into a trained neural network model that associates face pose features and eye images with gaze vectors, which yields the estimated gaze vectors.
S34. Derive an estimated projection trajectory from the estimated gaze vectors.
S35. Compare the projection trajectory with the preset trajectory. When the similarity between the two is greater than or equal to a preset threshold, the subject can be judged to be a living body; when the similarity is less than the preset threshold, the subject can be judged not to be a living body.
According to the living body verification method of this embodiment, face images of the subject are captured in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory with the actual motion trajectory of the visual center point. The judgment can be completed with nothing more than a device having a camera and a screen, without complicated peripheral equipment; gaze tracking makes the subject's gaze follow a randomly generated preset trajectory, which is difficult to forge, greatly improving the accuracy and reliability of liveness verification. In addition, movement of the subject's head is taken into account, so liveness detection remains accurate even when the subject's head moves.
Preferably, as shown in Fig. 7, the above step S32 may comprise:
S321. Perform face detection on each face image to obtain a face region.
S322. Label face feature points in the obtained face region.
S323. Normalize the labeled face feature points so that their coordinates lie in the range [0,1] × [0,1], and take the normalized face feature points as the face pose feature (a sketch follows this list).
S324. Obtain the eye positions from the labeled face feature points and crop out the left and right eye images.
S325. Normalize the cropped eye images to a unified resolution of m × n pixels.
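A small sketch of step S323, under the assumption that the normalization is performed relative to the feature points' own bounding box; any fixed face-region frame that maps the coordinates into [0,1] × [0,1] would serve equally:

```python
import numpy as np

def normalize_landmarks(pts: np.ndarray) -> np.ndarray:
    """Map (21, 2) feature-point coordinates into [0,1] x [0,1].

    Normalizing by the points' own bounding box is an assumption for
    illustration. The result is used as the face pose feature.
    """
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return (pts - lo) / np.maximum(hi - lo, 1e-6)
```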
Through the above steps, the face pose feature and eye images can be extracted from face images easily and quickly. The method of obtaining the neural network model is described in detail below and may comprise the following steps:
S41. Collect a large number of face images of different people under different gaze directions, which may specifically comprise the following steps:
S41a) Obtain the internal parameters of the camera using a mature camera calibration method, and at the same time estimate the three-dimensional position of the screen using a mirror-based calibration method.
S41b) Using a device with a camera and a screen, randomly generate visual center points on the screen one at a time.
S41c) Ask the person being collected to fix their gaze on the visual center point on the screen; when the person confirms that their gaze is fixed on the point, capture the face image at that moment.
S41d) Repeat steps S41b and S41c to collect a large number of face images of different people under different gaze directions. Likewise, the collection devices include cameras and screens of different models, such as laptops, tablets and smartphones; many different people are collected, in varied environments.
S42. Extract the face pose feature, eye images and gaze vectors from the collected face images, which may specifically comprise the following steps:
S42a) Perform face detection on the collected face images to obtain face regions.
S42b) Label face feature points in each obtained face region.
S42c) Normalize the labeled face feature points so that their coordinates lie in the range [0,1] × [0,1], and take the normalized face feature points as the face pose feature.
S42d) Obtain the eye positions from the labeled face feature points, crop out the left and right eye images, and normalize them to a unified resolution of m × n pixels.
S42e) Obtain a three-dimensional head model.
S42f) Align the labeled face feature points to the three-dimensional head model.
S42g) From the alignment result of the face feature points and the three-dimensional head model together with the position of the visual center point, compute the gaze vector, i.e. the vector from the eye (taking the middle eye feature point as the eye position) to the visual center point, and normalize this vector to unit length.
S43. Obtain the neural network model from the extracted face pose features, eye images and gaze vectors, which may specifically comprise the following steps:
S43a) Build a deep neural network model: with the extracted eye images of m × n resolution as input, build a multi-layer deep convolutional neural network consisting of convolutional layers, down-sampling layers and non-linear layers connected in sequence, with an f-dimensional fully connected layer as the last layer; concatenate the extracted face pose feature with this f-dimensional fully connected layer to form an extended fully connected layer, with the extracted gaze vector v as the output layer, as also shown in Fig. 5.
S43b) Train the built deep neural network with the extracted face pose feature, eye image and gaze vector data to obtain the deep neural network model for gaze tracking. The training is based on the back-propagation algorithm and updates the model parameters on the training data by stochastic gradient descent. A sketch of this pose-extended network follows.
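A sketch of the pose-extended network of S43a and S43b, reusing the assumptions of the Embodiment 1 sketch; the 42-dimensional pose feature (21 normalized feature points, two coordinates each) concatenated with the f-dimensional fully connected layer forms the extended fully connected layer:

```python
import torch
import torch.nn as nn

F_DIM, POSE_DIM = 128, 42                      # hypothetical f; 21 points x 2

class PoseGazeNet(nn.Module):
    """Eye-image CNN whose f-dim FC layer is extended with the face pose feature."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(32, 64, 3), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc_f = nn.Linear(64 * 7 * 13, F_DIM)
        self.out = nn.Linear(F_DIM + POSE_DIM, 3)   # extended FC -> gaze vector

    def forward(self, eyes, pose):             # eyes: (B,1,36,60); pose: (B,42)
        x = torch.relu(self.fc_f(self.features(eyes).flatten(1)))
        return self.out(torch.cat([x, pose], dim=1))

# Training mirrors S23b/S43b: a regression loss on gaze vectors, optimized by
# stochastic gradient descent with back-propagation.
```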
By estimating gaze vectors with this deep neural network model, gaze vector estimation can be carried out quickly and accurately without complicated and expensive peripheral equipment such as eye trackers, improving the accuracy of the liveness judgment.
Embodiment 3
This embodiment discloses a living body verification device, as shown in Fig. 8, comprising:
a trajectory generation and image capture unit 11, for generating a visual center point that moves along a preset trajectory and capturing multiple frames of face images while the visual center point moves, the subject's gaze following the visual center point throughout the capture;
an image information extraction unit 12, for extracting image information from each captured face image;
a gaze vector estimation unit 13, for estimating gaze vectors from the extracted image information;
a projection trajectory generation unit 14, for deriving an estimated projection trajectory from the estimated gaze vectors;
a comparison unit 15, for comparing the estimated projection trajectory with the preset trajectory of the visual center point, judging the subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the subject not to be a living body when the similarity is less than the preset threshold.
According to the living body verification device of this embodiment, face images of the subject are captured in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory with the actual motion trajectory of the visual center point; the judgment can be completed with a device having a camera and a screen. Compared with prior-art liveness verification methods, no complicated platform or light source cooperation is needed, and the judgment is highly accurate and reliable.
When the image information is eye images, the image information extraction unit may comprise:
a face detection sub-unit, for performing face detection on each face image to obtain a face region;
a face feature point labeling sub-unit, for labeling face feature points in the obtained face region;
an eye image cropping sub-unit, for obtaining the eye positions from the labeled face feature points, cropping out the left and right eye images, and normalizing the cropped eye images to the same pixel size.
In a preferred implementation, when the image information is the face pose feature and eye images, the image information extraction unit may comprise:
a face detection sub-unit, for performing face detection on each face image to obtain a face region;
a face feature point labeling sub-unit, for labeling face feature points in the obtained face region;
a face feature point normalization sub-unit, for normalizing the labeled face feature points and taking the normalized face feature points as the face pose feature;
an eye image cropping sub-unit, for obtaining the eye positions from the labeled face feature points, cropping out the left and right eye images, and normalizing the cropped eye images to the same pixel size.
Movement of the subject's head is thus also taken into account, and liveness detection remains accurate even when the subject's head moves.
Preferably, the gaze vector estimation unit 13 inputs the image information into a neural network model to obtain the estimated gaze vectors; the neural network model can be obtained by the following sub-units:
a collection sub-unit, for collecting a large number of face images of different people under different gaze directions;
an extraction sub-unit, for extracting image information and gaze vectors from the collected face images;
a neural network model generation sub-unit, for obtaining the neural network model from the extracted image information and gaze vectors.
The method of generating the neural network model is the same as in Embodiment 1 and Embodiment 2 and is not repeated here. By estimating gaze vectors with this deep neural network model, gaze vector estimation can be carried out quickly and accurately without complicated and expensive peripheral equipment such as eye trackers, improving the accuracy of the liveness judgment.
Embodiment 4
This embodiment discloses a living body verification system. The system can be applied to mobile phones, tablets, laptops, PCs and any other device with a camera and a screen. As shown in Fig. 9, the system comprises:
a display device 21, for displaying a visual center point 24 that moves along a preset trajectory 25; the display device may be, for example, a display screen;
an image capture device 22, for capturing multiple frames of face images of the subject while the visual center point 24 moves, the subject's gaze following the visual center point 24 throughout the capture; the image capture device may be, for example, a camera;
a processor 23, for generating the visual center point 24 that moves along the preset trajectory 25; extracting image information from each captured face image; estimating gaze vectors from the extracted image information; deriving an estimated projection trajectory 26 from the estimated gaze vectors; and comparing the estimated projection trajectory 26 with the preset trajectory 25 of the visual center point, judging the subject to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the subject not to be a living body when the similarity is less than the preset threshold.
According to the living body verification system of this embodiment, face images of the subject are captured in real time during verification, the subject's gaze trajectory is estimated from the face images, and liveness is judged by comparing the estimated gaze trajectory with the actual motion trajectory of the visual center point. The judgment can be completed with nothing more than a device having a camera and a screen, without complicated peripheral equipment; gaze tracking makes the subject's gaze follow a randomly generated preset trajectory, which is difficult to forge, greatly improving the accuracy and reliability of liveness verification.
When the image information is eye images, the step of extracting image information from each captured face image may comprise:
performing face detection on each face image to obtain a face region;
labeling face feature points in the obtained face region;
obtaining the eye positions from the labeled face feature points, cropping out the left and right eye images, and normalizing the cropped eye images to the same pixel size.
In a preferred implementation, when the image information is the face pose feature and eye images, the step of extracting image information from each captured face image may comprise:
performing face detection on each face image to obtain a face region;
labeling face feature points in the obtained face region;
normalizing the labeled face feature points and taking the normalized face feature points as the face pose feature;
obtaining the eye positions from the labeled face feature points, cropping out the left and right eye images, and normalizing the cropped eye images to the same pixel size.
Movement of the subject's head is thus also taken into account, and liveness detection remains accurate even when the subject's head moves.
In a preferred implementation, the extracted eye images, or both the face pose feature and the eye images, are input into a trained neural network model to obtain the estimated gaze vectors. The method of generating the neural network model is the same as in Embodiment 1 and Embodiment 2 and is not repeated here. By estimating gaze vectors with this deep neural network model, gaze vector estimation can be carried out quickly and accurately without complicated and expensive peripheral equipment such as eye trackers, improving the accuracy of the liveness judgment.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the embodiments of the invention have been described with reference to the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and all such modifications and variations fall within the scope defined by the appended claims.

Claims (18)

1. A living body verification method, characterized by comprising:
generating a visual center point that moves along a preset trajectory, and capturing multiple frames of face images of a subject while the visual center point moves, wherein during capture the subject's gaze follows the visual center point throughout;
extracting image information from each captured face image, comprising: performing face detection on each face image to obtain a face region; determining face feature points in the obtained face region; obtaining the eye positions from the face feature points and cropping out the left and right eye images; wherein the image information comprises the eye images;
estimating gaze vectors from the extracted image information;
deriving an estimated projection trajectory from the estimated gaze vectors;
comparing the estimated projection trajectory with the preset trajectory of the visual center point; when the similarity between the two is greater than or equal to a preset threshold, judging the subject to be a living body; when the similarity between the two is less than the preset threshold, judging the subject not to be a living body.
2. The method according to claim 1, characterized in that estimating gaze vectors from the extracted image information comprises inputting the image information into a neural network model to obtain the estimated gaze vectors, the neural network model being obtained by the following steps:
collecting a large number of face images of different people under different gaze directions;
extracting image information and gaze vectors from the collected face images;
obtaining the neural network model from the extracted image information and gaze vectors.
3. The method according to claim 2, characterized in that, in extracting image information and gaze vectors from the collected face images, the step of extracting image information from the collected face images comprises:
performing face detection on each collected face image to obtain a face region;
determining face feature points in the obtained face region;
obtaining the eye positions from the face feature points and cropping out the left and right eye images.
4. The method according to claim 3, characterized in that, in extracting image information and gaze vectors from the collected face images, the step of extracting image information from the collected face images further comprises:
normalizing the cropped eye images to the same pixel size.
5. The method according to claim 3, characterized in that the step of obtaining the neural network model from the extracted image information and gaze vectors comprises:
building a multi-layer deep convolutional neural network with the extracted eye images as input, the network consisting of convolutional layers, down-sampling layers and non-linear layers connected in sequence, with an f-dimensional fully connected layer as the last layer and the extracted gaze vectors as the output layer;
training the built deep convolutional neural network with the extracted eye images and gaze vectors, the training being based on the back-propagation algorithm and updating model parameters on the training data by stochastic gradient descent.
6. The method according to claim 2, characterized in that, in extracting image information and gaze vectors from the collected face images, the step of extracting image information from the collected face images comprises:
performing face detection on each collected face image to obtain a face region;
determining face feature points in the obtained face region;
normalizing the face feature points;
obtaining the eye positions from the normalized face feature points and cropping out the left and right eye images.
7. The method according to claim 6, characterized in that, in extracting image information and gaze vectors from the collected face images, the step of extracting image information from the collected face images further comprises:
normalizing the cropped eye images to the same pixel size.
8. The method according to claim 6, characterized in that the step of obtaining the neural network model from the extracted image information and gaze vectors comprises:
building a multi-layer deep convolutional neural network with the extracted eye images as input, the network consisting of convolutional layers, down-sampling layers and non-linear layers connected in sequence, with an f-dimensional fully connected layer as the last layer; concatenating the normalized face feature points, as a face pose feature, with the f-dimensional fully connected layer to form an extended fully connected layer, with the extracted gaze vectors as the output layer;
training the built deep convolutional neural network with the extracted face pose features, eye images and gaze vectors, the training being based on the back-propagation algorithm and updating model parameters on the training data by stochastic gradient descent.
9. The method according to claim 2, characterized in that, in extracting image information and gaze vectors from the collected face images, the step of extracting gaze vectors from the collected face images comprises:
obtaining a three-dimensional head model;
aligning the face feature points to the three-dimensional head model;
computing the gaze vector from the alignment result of the face feature points and the three-dimensional head model together with the position of the visual center point.
10. The method according to any one of claims 1 to 9, characterized in that the step of extracting image information from each captured face image further comprises:
normalizing the cropped eye images to the same pixel size.
11. The method according to any one of claims 1 to 9, characterized in that the image information further comprises a face pose feature, and extracting image information from each captured face image further comprises:
normalizing the face feature points.
12. The method according to claim 11, characterized in that extracting image information from each captured face image further comprises:
normalizing the cropped eye images to the same pixel size.
13. a kind of living body verifies device characterized by comprising
Track generates and image acquisition units, for generating the optic centre point for pressing desired guiding trajectory movement, and in the vision The multiframe facial image of measurand is acquired in heart point motion process, in collection process, the sight of measurand follows always The optic centre point movement;
Image information extraction unit, for extracting image information to every frame facial image collected comprising: Face datection Unit obtains human face region for carrying out Face datection to every frame facial image;Human face characteristic point mark subelement, for pair Obtained human face region, determines human face characteristic point;Eyes image cuts subelement, is used for according to the human face characteristic point, The position of eye is obtained, the eyes image of left and right two is cut out;Wherein described image information includes eyes image;
Sight line vector estimation unit, for estimating sight line vector according to extracted image information;
Projected footprint generation unit, the projected footprint estimated according to the sight line vector estimated;
Comparison unit compares the projected footprint of the estimation and the desired guiding trajectory of the optic centre point, when between the two Similarity when being greater than or equal to preset threshold, judge measurand for living body, when similarity between the two is less than preset threshold When, judge that measurand is not living body.
14. The device according to claim 13, characterized in that the sight line vector estimation unit inputs the image information into a neural network model to obtain the estimated sight line vector, the neural network model being obtained by the following subunits:
An acquisition subunit, configured to collect a massive number of facial images of different people under different sight directions;
An extraction subunit, configured to extract image information and sight line vectors from the collected facial images;
A neural network model generation subunit, configured to obtain the neural network model according to the obtained image information and sight line vectors.
15. The device according to claim 13 or 14, characterized in that the eye image cutting subunit is further configured to unify the cut-out eye images to the same pixel size.
16. The device according to claim 13 or 14, characterized in that the image information further includes a human face posture feature, and the image information extraction unit further comprises:
A human face characteristic point normalization subunit, configured to normalize the human face characteristic points.
17. The device according to claim 16, characterized in that the eye image cutting subunit is further configured to unify the cut-out eye images to the same pixel size.
18. A living body verification system, characterized by comprising:
A display device, configured to display the optic centre point moving along the desired guiding trajectory;
An image acquisition device, configured to collect multiple frames of facial images of the measured object during the movement of the optic centre point, the sight of the measured object always following the optic centre point during collection;
A processor, configured to generate the optic centre point moving along the desired guiding trajectory; extract image information from every frame of collected facial image, comprising: performing face detection on every frame of facial image to obtain a human face region; determining human face characteristic points in the obtained human face region; and obtaining the positions of the eyes according to the human face characteristic points and cutting out the left and right eye images, wherein the image information includes the eye images; estimate sight line vectors according to the extracted image information; generate an estimated projected footprint according to the estimated sight line vectors; and compare the estimated projected footprint with the desired guiding trajectory of the optic centre point, judging the measured object to be a living body when the similarity between the two is greater than or equal to a preset threshold, and judging the measured object not to be a living body when the similarity between the two is less than the preset threshold.
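As an illustration of the processor's per-frame extraction step, the sketch below uses dlib's frontal face detector and its standard 68-point shape predictor; the model file path, the 60x36 crop size and the eye landmark ranges (36-41 and 42-47 in the 68-point convention) are assumptions, not requirements of the claim.

```python
import numpy as np
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
# Hypothetical path to dlib's publicly available 68-point landmark model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_image_info(frame_gray):
    faces = detector(frame_gray)              # face detection -> face region
    if not faces:
        return None
    shape = predictor(frame_gray, faces[0])   # human face characteristic points
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.int32)
    eyes = []
    for lo, hi in ((36, 42), (42, 48)):       # left / right eye landmark ranges
        x, y, w, h = cv2.boundingRect(pts[lo:hi])
        eyes.append(cv2.resize(frame_gray[y:y + h, x:x + w], (60, 36)))
    return pts, eyes                          # landmarks + two eye images
```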
CN201510756011.7A 2015-11-09 2015-11-09 Living body verification method, device and system Active CN105426827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510756011.7A CN105426827B (en) 2015-11-09 2015-11-09 Living body verification method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510756011.7A CN105426827B (en) 2015-11-09 2015-11-09 Living body verification method, device and system

Publications (2)

Publication Number Publication Date
CN105426827A CN105426827A (en) 2016-03-23
CN105426827B (en) 2019-03-08

Family

ID=55505027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510756011.7A Active CN105426827B (en) 2015-11-09 2015-11-09 Living body verification method, device and system

Country Status (1)

Country Link
CN (1) CN105426827B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956572A (en) * 2016-05-15 2016-09-21 北京工业大学 Living body face detection method based on convolutional neural network
CN106599883B (en) * 2017-03-08 2020-03-17 王华锋 CNN-based multilayer image semantic face recognition method
CN108734057A (en) * 2017-04-18 2018-11-02 北京旷视科技有限公司 Method, apparatus and computer storage medium for living body detection
CN107066983B (en) * 2017-04-20 2022-08-09 腾讯科技(上海)有限公司 Identity verification method and device
CN108229284B (en) * 2017-05-26 2021-04-09 北京市商汤科技开发有限公司 Sight tracking and training method and device, system, electronic equipment and storage medium
CN108875469A (en) * 2017-06-14 2018-11-23 北京旷视科技有限公司 Living body detection and identity authentication method, device and computer storage medium
CN107590429A (en) * 2017-07-20 2018-01-16 阿里巴巴集团控股有限公司 Method and device for verification based on eyeprint features
CN107545248B (en) * 2017-08-24 2021-04-02 北京小米移动软件有限公司 Biological characteristic living body detection method, device, equipment and storage medium
CN109726613B (en) 2017-10-27 2021-09-10 虹软科技股份有限公司 Method and device for detection
CN107992842B (en) * 2017-12-13 2020-08-11 深圳励飞科技有限公司 Living body detection method, computer device, and computer-readable storage medium
CN108829247B (en) * 2018-06-01 2022-11-15 北京市商汤科技开发有限公司 Interaction method and device based on sight tracking and computer equipment
CN108921209A (en) * 2018-06-21 2018-11-30 杭州骑轻尘信息技术有限公司 Image identification method, device and electronic equipment
CN110853073A (en) * 2018-07-25 2020-02-28 北京三星通信技术研究有限公司 Method, device, equipment and system for determining attention point and information processing method
CN109376595B (en) * 2018-09-14 2023-06-23 杭州宇泛智能科技有限公司 Monocular RGB camera living body detection method and system based on human eye attention
CN110969061A (en) * 2018-09-29 2020-04-07 北京市商汤科技开发有限公司 Neural network training method, neural network training device, visual line detection method, visual line detection device and electronic equipment
CN109635554A (en) * 2018-11-30 2019-04-16 努比亚技术有限公司 A red packet verification method, terminal and computer storage medium
CN111291607B (en) * 2018-12-06 2021-01-22 广州汽车集团股份有限公司 Driver distraction detection method, driver distraction detection device, computer equipment and storage medium
CN109711309B (en) * 2018-12-20 2020-11-27 北京邮电大学 Method for automatically identifying whether the eyes are closed in a portrait picture
CN109886080A (en) * 2018-12-29 2019-06-14 深圳云天励飞技术有限公司 Human face living body detection method, device, electronic equipment and readable storage medium
CN109977764A (en) * 2019-02-12 2019-07-05 平安科技(深圳)有限公司 Living body identification method, device, terminal and storage medium based on plane detection
CN111967293A (en) * 2020-06-22 2020-11-20 云知声智能科技股份有限公司 Face authentication method and system combining voiceprint recognition and attention detection
CN111881431B (en) * 2020-06-28 2023-08-22 百度在线网络技术(北京)有限公司 Man-machine verification method, device, equipment and storage medium
CN112287909B (en) * 2020-12-24 2021-09-07 四川新网银行股份有限公司 Double-random in-vivo detection method for randomly generating detection points and interactive elements
CN112633217A (en) * 2020-12-30 2021-04-09 苏州金瑞阳信息科技有限责任公司 Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model
CN113505756A (en) * 2021-08-23 2021-10-15 支付宝(杭州)信息技术有限公司 Face living body detection method and device


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8260008B2 (en) * 2005-11-11 2012-09-04 Eyelock, Inc. Methods for performing biometric recognition of a human eye and corroboration of same

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400122A (en) * 2013-08-20 2013-11-20 江苏慧视软件科技有限公司 Method for rapidly recognizing living body faces
CN103440479A (en) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Method and system for detecting living body human face
CN103593598A (en) * 2013-11-25 2014-02-19 上海骏聿数码科技有限公司 User online authentication method and system based on living body detection and face recognition
CN104966070A (en) * 2015-06-30 2015-10-07 北京汉王智远科技有限公司 Face recognition based living body detection method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Liveness Detection Technology in Face Recognition; Sun Lin; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2011-08-15 (No. 8); I138-69

Also Published As

Publication number Publication date
CN105426827A (en) 2016-03-23

Similar Documents

Publication Publication Date Title
CN105426827B (en) Living body verification method, device and system
CN108256433B (en) Motion attitude assessment method and system
Dikovski et al. Evaluation of different feature sets for gait recognition using skeletal data from Kinect
CN112184705B (en) Human body acupuncture point identification, positioning and application system based on computer vision technology
CN104966070B Living body detection method and device based on face recognition
CN103177269B Apparatus and method for estimating object pose
CN108345869A (en) Driver's gesture recognition method based on depth image and virtual data
CN108369785A (en) Activity determination
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN104123543B An eye movement recognition method based on face recognition
Viraktamath et al. Face detection and tracking using OpenCV
CN107895160A Human face detection and tracking device and method
CN109472198A A pose-robust video smiling face recognition method
CN105740780A Method and device for human face living body detection
CN106650619A (en) Human action recognition method
CN110211222B AR immersive tour guide method and device, storage medium and terminal equipment
CN109598242A A novel living body detection method
KR20220028654A (en) Apparatus and method for providing taekwondo movement coaching service using mirror dispaly
Zhu Computer vision-driven evaluation system for assisted decision-making in sports training
CN110796101A (en) Face recognition method and system of embedded platform
CN107480586A Biometric photo spoofing attack detection method based on human face characteristic point displacement
CN108765014A An intelligent advertisement delivery method based on an access control system
CN109543629A A blink recognition method, device, equipment and readable storage medium
Yang et al. Research on face recognition sports intelligence training platform based on artificial intelligence
RU2005100267A (en) METHOD AND SYSTEM OF AUTOMATIC VERIFICATION OF THE PRESENCE OF A LIVING FACE OF A HUMAN IN BIOMETRIC SECURITY SYSTEMS

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant