CN109977846A - Living body detection method and system based on near-infrared monocular photography - Google Patents

Living body detection method and system based on near-infrared monocular photography Download PDF

Info

Publication number
CN109977846A
CN109977846A (application CN201910221151.2A)
Authority
CN
China
Prior art keywords
infrared
face
image
monocular
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910221151.2A
Other languages
Chinese (zh)
Other versions
CN109977846B (en)
Inventor
张宇
邵枭虎
蒋方玲
周祥东
石宇
刘鹏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Chinese Academy of Sciences
Chongqing Institute of Green and Intelligent Technology of CAS
Original Assignee
University of Chinese Academy of Sciences
Chongqing Institute of Green and Intelligent Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Chinese Academy of Sciences, Chongqing Institute of Green and Intelligent Technology of CAS filed Critical University of Chinese Academy of Sciences
Priority to CN201910221151.2A priority Critical patent/CN109977846B/en
Publication of CN109977846A publication Critical patent/CN109977846A/en
Application granted granted Critical
Publication of CN109977846B publication Critical patent/CN109977846B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/174 Facial expression recognition
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Abstract

The present invention proposes a living body detection method based on near-infrared monocular photography, comprising: acquiring near-infrared image information; detecting whether the near-infrared image contains a face, and if no face is detected, judging that the identified object is not a real person; if a face is detected, prompting the user to make a specified facial expression; extracting the optical-flow features of the facial expression and, at the same time, the face-image depth features of the near-infrared image; inputting the optical-flow features and the face-image depth features into a deep learning classifier; and obtaining the living body detection result. The present invention can effectively defend against video and 3D-mask attacks and improves the accuracy of living body detection.

Description

Living body detection method and system based on near-infrared monocular photography
Technical field
The present invention relates to the field of security and identity recognition, and in particular to a living body detection method and system based on near-infrared monocular photography.
Background technique
In recent years, face recognition technology has been widely commercialized. However, faces are easily impersonated with photos, videos, 3D masks, and similar means, so face living body detection is an important topic for the security of face recognition and authentication systems. In terms of the image acquisition devices used for living body detection, current approaches fall mainly into two classes: visible-light image acquisition and multispectral image acquisition, where multispectral acquisition covers near-infrared, far-infrared, thermal-infrared, and similar imaging devices. In terms of implementation, there are interactive and non-interactive living body detection methods. Interactive living body detection distinguishes real persons by user actions such as blinking or opening the mouth, while non-interactive living body detection requires no user cooperation.
The advantage of visible-light living body detection is its low cost: no additional hardware is needed for mobile users such as smartphone owners. The disadvantage is poor robustness; it is easily affected by lighting changes and cannot reject high-definition video attacks. Visible-light living body detection can therefore be used only in scenarios with low security requirements.
Methods based on multispectral image processing usually use a binocular camera consisting of one visible-light camera and one infrared camera. Their advantage is that the multispectral information can further improve the adaptability of the algorithm; their disadvantage is the added hardware cost and power consumption, and for some small portable devices it may be impossible to install two cameras at the same time.
Interactive living body detection usually acquires data through a visible-light camera and determines whether the subject is a real person by having the system issue action instructions to the user. Its disadvantage is that for common actions such as blinking and opening the mouth, it cannot tell whether a real person is performing the actions through the holes of a mask. It also cannot reject high-definition video replays of the actions.
Non-interactive living body detection mainly relies on the information in a single frame to determine whether the subject is a real person. Its advantage is fast response; its disadvantage is low accuracy and vulnerability to attacks.
Summary of the invention
In view of the above problems in the prior art, the present invention proposes a living body detection method and system based on near-infrared monocular photography, which mainly solve the problems of poor robustness and high cost in the prior art.
To achieve the above and other objects, the present invention adopts the following technical solution.
A living body detection method based on near-infrared monocular photography, comprising:
acquiring near-infrared image information;
detecting whether the near-infrared image contains a face; if no face is detected, judging that the identified object is not a real person; if a face is detected, prompting the user to make a specified facial expression;
extracting the optical-flow features of the facial expression and, at the same time, the face-image depth features of the near-infrared image;
inputting the optical-flow features and the face-image depth features into a deep learning classifier; and
obtaining the living body detection result.
Optionally, the near-infrared image is acquired with a near-infrared monocular camera. The near-infrared monocular camera includes an 850 nm near-infrared monocular camera; only near-infrared images are acquired and no visible-light camera is needed, which reduces equipment cost and power consumption.
Optionally, an infrared fill light for suppressing background-light interference is introduced when acquiring the near-infrared image. The effective range of the infrared fill light is short, so it filters out most ambient-light interference and improves detection accuracy.
Optionally, extracting the face-image depth features specifically includes:
inputting the near-infrared image information in which a face was detected into the convolutional layers of a deep learning neural network to obtain near-infrared image features; applying a first activation function after each convolutional layer to activate the face features in the near-infrared image features, finally obtaining the face-image depth features.
Optionally, the execution of the deep learning classifier includes:
combining the face-image depth features and the optical-flow features and feeding them into the fully connected layers of the deep learning classifier; applying a second activation function after each fully connected layer to activate the corresponding facial-expression features; and judging from the facial-expression features whether the identified object is a real face.
Optionally, the first activation function includes:
where x is the near-infrared image feature and y is the output of the activation function.
Optionally, the second activation function includes:
where x is the combined face-image depth feature and optical-flow feature, and y is the output of the activation function. The first activation function and the second activation function have strong expressive power and good convergence, which can effectively improve the accuracy of the deep neural network algorithm.
A living body detection system based on near-infrared monocular photography, comprising:
a near-infrared image acquisition module for acquiring near-infrared images;
a detection module for detecting whether the near-infrared image contains a face;
an action prompt module for prompting the user to make a specified facial expression;
a feature extraction module for extracting the optical-flow features of the facial expression; and
a deep learning identification module for identifying the face image and the optical-flow information of the facial expression.
The output of the near-infrared image acquisition module is connected to the detection module and to the input of the feature extraction module; the output of the detection module is connected to the inputs of the action prompt module and the deep learning identification module; and the output of the feature extraction module is connected to the deep learning identification module.
Optionally, the near-infrared image acquisition module includes a near-infrared monocular camera and an infrared fill light.
Optionally, the deep learning identification module includes a near-infrared face-image feature extraction unit and a depth recognition unit for identifying the optical-flow features and the near-infrared face-image features; the output of the near-infrared face-image feature extraction unit is connected to the input of the depth recognition unit.
As described above, the living body detection method and system based on near-infrared monocular photography of the present invention have the following beneficial effects:
the infrared fill light effectively filters out ambient-light interference; detecting face information in the near-infrared image effectively avoids video and photo attacks; monocular imaging reduces system cost and power consumption; and the optical-flow features of the facial expression defend against 3D-mask attacks.
Detailed description of the invention
Fig. 1 is a flowchart of the living body detection method based on near-infrared monocular photography of the present invention.
Fig. 2 is a structural block diagram of the living body detection system based on near-infrared monocular photography of the present invention.
Specific embodiment
The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention can also be implemented or applied in other different specific embodiments, and the details in this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments can be combined with each other.
It should be noted that the drawings provided in the following embodiments only schematically illustrate the basic idea of the present invention; the drawings show only the components related to the present invention rather than the actual number, shape, and size of the components. In actual implementation, the form, quantity, and proportion of each component may change arbitrarily, and the component layout may be more complex.
Referring to Fig. 1, in one embodiment the present invention provides a living body detection method based on near-infrared monocular photography, comprising:
acquiring near-infrared image information;
detecting whether the near-infrared image contains a face; if no face is detected, judging that the identified object is not a real person; if a face is detected, prompting the user to make a specified facial expression;
extracting the optical-flow features of the facial expression and, at the same time, the face-image depth features of the near-infrared image;
inputting the optical-flow features and the face-image depth features into a deep learning classifier; and
obtaining the living body detection result.
The near-infrared image information is acquired mainly by a near-infrared monocular camera together with an infrared fill light. In one embodiment, the near-infrared monocular camera is an 850 nm near-infrared monocular camera; other short-wave near-infrared cameras can also be used to achieve the same effect. The infrared fill light has a short effective range, so it effectively filters out ambient-light interference, enhances the quality of the acquired near-infrared images, and reduces the false-detection rate.
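As a concrete illustration (not part of the original disclosure), the following minimal sketch shows how frames from such a near-infrared monocular camera could be grabbed with OpenCV. The device index, frame size, and the assumption that the camera is exposed as an ordinary video device are all illustrative, and the 850 nm fill light is assumed to be driven by the camera hardware rather than by software.

```python
import cv2

def capture_nir_frame(device_index: int = 0, width: int = 640, height: int = 480):
    """Grab one frame from a near-infrared monocular camera (illustrative only)."""
    cap = cv2.VideoCapture(device_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to read a frame from the NIR camera")
    # NIR sensors effectively deliver a single channel; collapse to grayscale.
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```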
The near-infrared image is then examined to determine whether it contains a face. The purpose of this detection is to defend against video attacks and similar means, because a video attack must be presented on a screen, such as a phone, computer monitor, notebook, or television. These displays absorb near-infrared light almost completely, so when a screen is photographed under near-infrared light the captured picture is uniformly black and no face can be detected. Therefore, if no face image can be detected in the acquired near-infrared image, either no face is present in front of the camera or a screen-based video attack is being attempted. If face image information can be detected, what is in front of the camera is not a video or image presented on a screen.
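A minimal sketch of this screen-replay check follows, assuming a generic off-the-shelf face detector (OpenCV's Haar cascade) rather than the detector actually used in the patent: because a display screen absorbs near-infrared light, a replayed video yields a dark frame in which no face can be found.

```python
import cv2

# Stand-in detector for illustration; the patent does not specify which detector is used.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def nir_frame_contains_face(nir_gray_frame) -> bool:
    faces = _face_cascade.detectMultiScale(
        nir_gray_frame, scaleFactor=1.1, minNeighbors=5, minSize=(80, 80))
    # An empty result means either no subject or a screen replay (NIR-black frame).
    return len(faces) > 0
```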
When the acquired near-infrared image information is found to contain a face, the corresponding near-infrared image is sent to the deep learning neural network model, and the near-infrared image is input into the convolutional layers of the model to obtain near-infrared image features; a first activation function applied after each convolutional layer activates the face features in the near-infrared image features, finally producing the face-image depth features. The first activation function includes:
where x is the near-infrared image feature and y is the output of the activation function.
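For illustration only, the sketch below outlines a convolutional feature extractor of the kind described above, with an activation after each convolutional layer. The patent's first activation function is not reproduced in this text, so a standard ReLU stands in for it, and the channel counts and 128-dimensional output are assumptions.

```python
import torch
import torch.nn as nn

class NirFaceFeatureExtractor(nn.Module):
    """Illustrative sketch: conv layers + activation -> face-image depth feature."""
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),  # ReLU is a placeholder
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feature_dim)

    def forward(self, nir_face: torch.Tensor) -> torch.Tensor:
        # nir_face: (batch, 1, H, W) grayscale NIR face crop
        x = self.features(nir_face).flatten(1)
        return self.proj(x)
```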
When a face is detected in the acquired near-infrared image, a prompt is also issued to the user, asking the user to make a specified facial expression. The prompt includes text display, voice prompt, and the like. In one embodiment, the facial expressions are mainly smiling and frowning. These two expressions are chosen instead of the more common blinking and mouth opening because blinking and mouth opening can be faked by a real person wearing a mask with holes at the eyes and mouth and performing the actions through the holes. Frowning and smiling, by contrast, involve more facial muscles, and the movement of these muscles cannot be simulated by a 3D mask.
After the near-infrared monocular camera captures the user's facial expression, the sparse optical-flow features between two frames separated by an interval are computed. Sparse optical flow is a method that uses the temporal variation of pixels in an image sequence and the correlation between consecutive frames to find the correspondence between the previous frame and the current frame, and thereby computes the motion information of objects between consecutive frames.
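The following sketch, given purely for illustration, computes such sparse optical flow between two near-infrared frames using the Lucas-Kanade tracker in OpenCV; the corner-detection and tracking parameters are assumptions.

```python
import cv2
import numpy as np

def sparse_flow(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Return per-point displacement vectors between two frames (illustrative)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros((0, 2), dtype=np.float32)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.flatten() == 1
    good_old = pts[ok].reshape(-1, 2)
    good_new = nxt[ok].reshape(-1, 2)
    return good_new - good_old  # motion vectors of the tracked facial points
```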
The obtained face-image depth features and sparse optical-flow features are input into the deep learning classifier of the deep neural network model: the face-image depth features and the optical-flow features are combined and fed into the fully connected layers of the classifier, and a second activation function applied after each fully connected layer activates the corresponding facial-expression features. Based on the facial-expression features, it is judged whether the acquired near-infrared image shows a real person. In one embodiment, the deep neural network model is a deep learning model trained on a large number of near-infrared face images.
The second activation function includes:
where x is the combined face-image depth feature and optical-flow feature, and y is the output of the activation function.
The first and second activation functions have strong expressive power and good convergence, which can effectively improve the accuracy of the deep neural network algorithm.
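A minimal sketch of the fusion classifier follows, again for illustration only: the face-image depth feature is concatenated with a fixed-length optical-flow descriptor and passed through fully connected layers, each followed by an activation. As before, ReLU stands in for the patent's second activation function, the feature sizes are assumptions, and in practice the variable number of flow vectors would first have to be pooled (for example into a histogram of flow directions) to obtain the fixed-length flow descriptor.

```python
import torch
import torch.nn as nn

class LivenessClassifier(nn.Module):
    """Illustrative fusion head: [face depth feature ; flow descriptor] -> real / spoof."""
    def __init__(self, face_dim: int = 128, flow_dim: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(face_dim + flow_dim, 128), nn.ReLU(),  # ReLU is a placeholder
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),  # two classes: real person vs. spoof
        )

    def forward(self, face_feat: torch.Tensor, flow_feat: torch.Tensor) -> torch.Tensor:
        combined = torch.cat([face_feat, flow_feat], dim=1)
        return self.head(combined)
```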
Referring to Fig. 2, in another embodiment the living body detection system based on near-infrared monocular photography of the present invention comprises:
a near-infrared image acquisition module for acquiring near-infrared images;
a detection module for detecting whether the near-infrared image contains a face;
an action prompt module for prompting the user to make a specified facial expression;
a feature extraction module for extracting the optical-flow features of the facial expression; and
a deep learning identification module for identifying the face image and the optical-flow information of the facial expression.
The output of the near-infrared image acquisition module is connected to the detection module and to the input of the feature extraction module; the output of the detection module is connected to the inputs of the action prompt module and the deep learning identification module; and the output of the feature extraction module is connected to the deep learning identification module.
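To make the data flow between the five modules concrete, the sketch below wires hypothetical module objects together in the order described above; the module interfaces (capture, detect, ask_for_expression, extract, is_live) are invented for illustration and are not defined by the patent.

```python
def liveness_check(camera, detector, prompter, flow_extractor, recognizer) -> bool:
    frame = camera.capture()                    # near-infrared image acquisition module
    face = detector.detect(frame)               # detection module
    if face is None:
        return False                            # no NIR face: no subject or a screen replay
    prompter.ask_for_expression("smile")        # action prompt module
    later_frame = camera.capture()
    flow = flow_extractor.extract(frame, later_frame)  # feature extraction module
    return recognizer.is_live(face, flow)       # deep learning identification module
```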
The near-infrared image acquisition module includes a near-infrared monocular camera and an infrared fill light. The near-infrared monocular camera includes an 850 nm near-infrared monocular camera; other short-wave near-infrared cameras can also be used to achieve the same effect. The infrared fill light has a short effective range, so it effectively filters out ambient-light interference, enhances the quality of the acquired near-infrared images, and reduces the false-detection rate.
The detection module receives the near-infrared image acquired by the near-infrared image acquisition module and detects whether the near-infrared image contains a face. If it does not, the module determines that there is no face in front of the monocular camera, or that the camera is facing a face video or image presented on a display screen. If a face is detected, the corresponding near-infrared image is sent to the deep learning identification module. The deep learning identification module is obtained by training on a large number of near-infrared face images and includes a near-infrared face-image feature extraction unit and a depth recognition unit for identifying the optical-flow features and the infrared face-image features.
The near-infrared image is input into the near-infrared face-image feature extraction unit to obtain near-infrared image features; the face features in the near-infrared image features are activated by the first activation function to obtain the face-image depth features. The first activation function includes:
where x is the near-infrared image feature and y is the output of the activation function.
At the same time, the detection module sends the detection result to the action prompt module. If a face is detected in the acquired near-infrared image, the action prompt module issues a prompt to the user according to the detection result, asking the user to make a specified facial expression. In one embodiment, the prompt includes text display and voice prompt. The specified facial expressions include smiling and frowning.
The near-infrared image acquisition module captures the facial expression and sends the corresponding near-infrared images to the feature extraction module, which computes the sparse optical-flow features.
The sparse optical-flow features and the face-image depth features are combined and fed into the fully connected layers of the depth recognition unit, and a second activation function applied after each fully connected layer activates the corresponding facial-expression features. Based on the resulting facial-expression features, it is judged whether the acquired near-infrared image shows a real person.
The second activation function includes:
where x is the combined face-image depth feature and optical-flow feature, and y is the output of the activation function.
In conclusion a kind of in-vivo detection method and system based on the camera shooting of near-infrared monocular of the present invention, is used only close red The near-infrared image of outer monocular cam acquisition carries out In vivo detection, does not need visible images, reduces system cost and function Consumption;Infrared supplementary lighting sources effectively filter out bias light interference, improve the quality of acquisition image;It is strong using ability to express, convergence is good Activation primitive, can effectively improve the accuracy rate of identification;Near-infrared image combination sparse optical flow feature can effectively take precautions against The attack of video and three-dimensional mask.So the present invention effectively overcomes various shortcoming in the prior art and has high industrial benefit With value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology can modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed in the present invention shall be covered by the claims of the present invention.

Claims (10)

1. A living body detection method based on near-infrared monocular photography, characterized by comprising:
acquiring near-infrared image information;
detecting whether the near-infrared image contains a face; if no face is detected, judging that the identified object is not a real person; if a face is detected, prompting the user to make a specified facial expression;
extracting the optical-flow features of the facial expression and, at the same time, the face-image depth features of the near-infrared image;
inputting the optical-flow features and the face-image depth features into a deep learning classifier; and
obtaining the living body detection result.
2. The living body detection method based on near-infrared monocular photography according to claim 1, characterized in that the near-infrared image is acquired with a near-infrared monocular camera.
3. The living body detection method based on near-infrared monocular photography according to claim 2, characterized in that an infrared fill light for suppressing background-light interference is introduced when acquiring the near-infrared image.
4. The living body detection method based on near-infrared monocular photography according to claim 1, characterized in that extracting the face-image depth features specifically includes:
inputting the near-infrared image information in which a face was detected into the convolutional layers of a deep learning neural network to obtain near-infrared image features; applying a first activation function after each convolutional layer to activate the face features in the near-infrared image features, finally obtaining the face-image depth features.
5. The living body detection method based on near-infrared monocular photography according to claim 1, characterized in that the execution of the deep learning classifier includes:
combining the face-image depth features and the optical-flow features and feeding them into the fully connected layers of the deep learning classifier; applying a second activation function after each fully connected layer to activate the corresponding facial-expression features; and judging from the resulting facial-expression features whether the identified object is a real face.
6. The living body detection method based on near-infrared monocular photography according to claim 4, characterized in that the first activation function includes:
where x is the near-infrared image feature and y is the output of the activation function.
7. The living body detection method based on near-infrared monocular photography according to claim 5, characterized in that the second activation function includes:
where x is the combined face-image depth feature and optical-flow feature, and y is the output of the activation function.
8. A living body detection system based on near-infrared monocular photography, characterized by comprising:
a near-infrared image acquisition module for acquiring near-infrared images;
a detection module for detecting whether the near-infrared image contains a face;
an action prompt module for prompting the user to make a specified facial expression;
a feature extraction module for extracting the optical-flow features of the facial expression; and
a deep learning identification module for identifying the face image and the optical-flow information of the facial expression;
wherein the output of the near-infrared image acquisition module is connected to the detection module and to the input of the feature extraction module; the output of the detection module is connected to the inputs of the action prompt module and the deep learning identification module; and the output of the feature extraction module is connected to the deep learning identification module.
9. The living body detection system based on near-infrared monocular photography according to claim 8, characterized in that the near-infrared image acquisition module includes a near-infrared monocular camera and an infrared fill light.
10. The living body detection system based on near-infrared monocular photography according to claim 8, characterized in that the deep learning identification module includes a near-infrared face-image feature extraction unit and a depth recognition unit for identifying the optical-flow features and the near-infrared face-image features; the output of the near-infrared face-image feature extraction unit is connected to the input of the depth recognition unit.
CN201910221151.2A 2019-03-22 2019-03-22 Living body detection method and system based on near-infrared monocular photography Active CN109977846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910221151.2A CN109977846B (en) 2019-03-22 2019-03-22 Living body detection method and system based on near-infrared monocular photography

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910221151.2A CN109977846B (en) 2019-03-22 2019-03-22 Living body detection method and system based on near-infrared monocular photography

Publications (2)

Publication Number Publication Date
CN109977846A true CN109977846A (en) 2019-07-05
CN109977846B CN109977846B (en) 2023-02-10

Family

ID=67080001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910221151.2A Active CN109977846B (en) 2019-03-22 2019-03-22 Living body detection method and system based on near-infrared monocular photography

Country Status (1)

Country Link
CN (1) CN109977846B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991307A (en) * 2019-11-27 2020-04-10 北京锐安科技有限公司 Face recognition method, device, equipment and storage medium
CN112597932A (en) * 2020-12-28 2021-04-02 上海汽车集团股份有限公司 Living body detection method and device and computer readable storage medium
CN113066237A (en) * 2021-03-26 2021-07-02 中国工商银行股份有限公司 Face living body detection and identification method for automatic teller machine and automatic teller machine
CN113691696A (en) * 2021-07-23 2021-11-23 杭州魔点科技有限公司 Face recognition method based on camera module and camera module

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2546996A1 (en) * 2005-05-17 2006-11-17 Spectratech Inc. Optical coherence tomograph
CN105787458A (en) * 2016-03-11 2016-07-20 重庆邮电大学 Infrared behavior identification method based on adaptive fusion of artificial design feature and depth learning feature
CN106203369A (en) * 2016-07-18 2016-12-07 三峡大学 Active stochastic and dynamic for anti-counterfeiting recognition of face instructs generation system
CN107111598A (en) * 2014-12-19 2017-08-29 深圳市大疆创新科技有限公司 Use the light stream imaging system and method for ultrasonic depth sense
CN107358181A (en) * 2017-06-28 2017-11-17 重庆中科云丛科技有限公司 The infrared visible image capturing head device and method of monocular judged for face live body
CN107368798A (en) * 2017-07-07 2017-11-21 四川大学 A kind of crowd's Emotion identification method based on deep learning
CN206672320U (en) * 2017-04-18 2017-11-24 中国科学院重庆绿色智能技术研究院 A kind of reverse crowd regulation warning system in region
CN108062546A (en) * 2018-02-11 2018-05-22 厦门华厦学院 A kind of computer face Emotion identification system
CN108664922A (en) * 2018-05-10 2018-10-16 东华大学 A kind of infrared video Human bodys' response method based on personal safety
CN108875509A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 Biopsy method, device and system and storage medium
CN109145817A (en) * 2018-08-21 2019-01-04 佛山市南海区广工大数控装备协同创新研究院 A kind of face In vivo detection recognition methods
CN109308436A (en) * 2017-07-28 2019-02-05 西南科技大学 A kind of living body faces recognition methods based on active infrared video

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2546996A1 (en) * 2005-05-17 2006-11-17 Spectratech Inc. Optical coherence tomograph
CN107111598A (en) * 2014-12-19 2017-08-29 深圳市大疆创新科技有限公司 Use the light stream imaging system and method for ultrasonic depth sense
CN105787458A (en) * 2016-03-11 2016-07-20 重庆邮电大学 Infrared behavior identification method based on adaptive fusion of artificial design feature and depth learning feature
CN106203369A (en) * 2016-07-18 2016-12-07 三峡大学 Active stochastic and dynamic for anti-counterfeiting recognition of face instructs generation system
CN206672320U (en) * 2017-04-18 2017-11-24 中国科学院重庆绿色智能技术研究院 A kind of reverse crowd regulation warning system in region
CN107358181A (en) * 2017-06-28 2017-11-17 重庆中科云丛科技有限公司 The infrared visible image capturing head device and method of monocular judged for face live body
CN107368798A (en) * 2017-07-07 2017-11-21 四川大学 A kind of crowd's Emotion identification method based on deep learning
CN109308436A (en) * 2017-07-28 2019-02-05 西南科技大学 A kind of living body faces recognition methods based on active infrared video
CN108875509A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 Biopsy method, device and system and storage medium
CN108062546A (en) * 2018-02-11 2018-05-22 厦门华厦学院 A kind of computer face Emotion identification system
CN108664922A (en) * 2018-05-10 2018-10-16 东华大学 A kind of infrared video Human bodys' response method based on personal safety
CN109145817A (en) * 2018-08-21 2019-01-04 佛山市南海区广工大数控装备协同创新研究院 A kind of face In vivo detection recognition methods

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fernández-Caballero et al.: "Optical flow or image subtraction in human detection from infrared camera on mobile robot", Robotics and Autonomous Systems *
陶明慧: "Image vignetting processing technology based on a degradation model" (基于退化模型的图像渐晕处理技术), Infrared and Laser Engineering (红外与激光工程) *
黄建恺: "Research on liveness detection technology for face recognition" (人脸识别的活体检测技术研究), China Master's Theses Full-text Database, Information Science and Technology (中国优秀博硕士学位论文全文数据库(硕士)信息科技辑) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991307A (en) * 2019-11-27 2020-04-10 北京锐安科技有限公司 Face recognition method, device, equipment and storage medium
CN110991307B (en) * 2019-11-27 2023-09-26 北京锐安科技有限公司 Face recognition method, device, equipment and storage medium
CN112597932A (en) * 2020-12-28 2021-04-02 上海汽车集团股份有限公司 Living body detection method and device and computer readable storage medium
CN113066237A (en) * 2021-03-26 2021-07-02 中国工商银行股份有限公司 Face living body detection and identification method for automatic teller machine and automatic teller machine
CN113691696A (en) * 2021-07-23 2021-11-23 杭州魔点科技有限公司 Face recognition method based on camera module and camera module

Also Published As

Publication number Publication date
CN109977846B (en) 2023-02-10

Similar Documents

Publication Publication Date Title
CN104915649B (en) A kind of biopsy method applied to recognition of face
CN109977846A (en) A kind of in-vivo detection method and system based on the camera shooting of near-infrared monocular
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
CN108596041B (en) A kind of human face in-vivo detection method based on video
Fathy et al. Face-based active authentication on mobile devices
WO2018040307A1 (en) Vivo detection method and device based on infrared visible binocular image
CN108470169A (en) Face identification system and method
CN105659200B (en) For showing the method, apparatus and system of graphic user interface
CN102708383B (en) System and method for detecting living face with multi-mode contrast function
CN105260726B (en) Interactive video biopsy method and its system based on human face posture control
CN109598242B (en) Living body detection method
CN109271950A (en) A kind of human face in-vivo detection method based on mobile phone forward sight camera
CN110866454B (en) Face living body detection method and system and computer readable storage medium
Le et al. Eye blink detection for smart glasses
CN108537131A (en) A kind of recognition of face biopsy method based on human face characteristic point and optical flow field
CN208351494U (en) Face identification system
CN109508706A (en) A kind of silent biopsy method based on micro- Expression Recognition and noninductive recognition of face
Farrukh et al. FaceRevelio: a face liveness detection system for smartphones with a single front camera
CN111767788A (en) Non-interactive monocular in vivo detection method
CN108363944A (en) Recognition of face terminal is double to take the photograph method for anti-counterfeit, apparatus and system
Geng Research on athlete’s action recognition based on acceleration sensor and deep learning
US9996743B2 (en) Methods, systems, and media for detecting gaze locking
CN108647650B (en) Human face in-vivo detection method and system based on corneal reflection and optical coding
CN110222647A (en) A kind of human face in-vivo detection method based on convolutional neural networks
Ali et al. Spoofing attempt detection using gaze colocation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant