CN108446690A - Face liveness detection method based on multi-view dynamic features - Google Patents
Face liveness detection method based on multi-view dynamic features
- Publication number
- CN108446690A (application CN201810555735.9A)
- Authority
- CN
- China
- Prior art keywords
- video
- face
- motion pattern
- noise
- pattern
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
A face liveness detection method based on multi-view dynamic features. The motion pattern map of a video is extracted using the imaging principle of motion-blurred images, and the motion pattern is combined with the noise pattern for face liveness detection, protecting user privacy and property, resisting malicious attacks, and strengthening the security of face authentication systems. The method comprises four main parts: a) video preprocessing, b) face motion pattern mapping, c) noise pattern extraction, and d) classification. When a user requests access to the face authentication system, a video of the face is captured by the system camera and preprocessed, and its face motion pattern map and noise pattern are extracted; the face motion pattern map projects the motion information of the entire video onto a single image. Gray-level co-occurrence matrices are then computed and fed into a support vector machine classifier, and the final output is either live or spoof.
Description
Technical field
The invention belongs to the field of biometric recognition and relates to liveness detection for biometric features, and in particular to a method for face liveness detection using multi-view dynamic features comprising a motion pattern and a noise pattern.
Background art
In the field of biometric recognition, facial features, with their contact-free and natural acquisition, are widely used in identity authentication, security monitoring and surveillance, network access control, and other fields. The face is also closely tied to personal privacy and property security. Face authentication systems have therefore attracted attackers who attempt to break into them with facial spoofs. Fig. 1 shows several typical spoofing attacks, including photo attacks (a), video attacks (b), and 3D model attacks (c). These pose significant security and privacy risks to face-based identity authentication systems. Research on high-performance, broadly applicable face liveness detection therefore has important application value.
Face liveness detection determines whether a presented face is live or not. A face authentication result is accepted only when the face is recognized as live; otherwise, the attempt is treated as a malicious attack. The performance of current liveness detection techniques in online identity authentication still needs improvement. An ideal anti-spoofing technique should have the following properties: (1) low sensitivity to illumination and capture devices; (2) high speed, allowing real-time online processing; (3) a natural user interface with as little user interaction as possible; (4) good detection of live faces; (5) strong detection of various attack means such as photos and videos.
Existing face liveness detection methods fall into two classes: those that require additional information and those that do not. The former use the video/image captured by the camera together with information from other sensors, such as depth, or rely on user interaction. Such methods perform well but require extra equipment, are less user-friendly, are costly, and are restricted in where they can be used. The latter use only the video and images captured by the camera; they are inexpensive, user-friendly, and widely applicable.
Face liveness detection algorithms that need no extra equipment can be further divided into methods based on static information and methods based on dynamic information. Static-feature methods use color, texture, or other cues for liveness detection, but consider only single-frame characteristics. To address this, A. Pinto, in the article "Using visual rhythms for detecting video-based facial spoof attacks", observed that if an attack video is a secondary imaging of the scene, its noise pattern differs from that of a primary imaging, and performed liveness detection by analyzing the visual rhythm of the residual noise video; the drawback is a strong dependence on the imaging and display devices. Dynamic-feature methods use facial motion information for liveness detection. In 2007, Pan G. et al., in the article "Eyeblink-based anti-spoofing in face recognition from a generic webcamera", detected photo attacks through eye-blink detection and achieved good performance, but this method cannot detect video attacks. In 2017, N. N. Lakshminarayana et al., in the article "A discriminative spatio-temporal mapping of face for liveness detection", used breathing as the main basis for liveness detection, arguing that respiration causes slight changes in facial blood flow and hence in color, whereas such changes are absent in spoof videos. In practical applications, however, illumination and noise make these slight skin-color changes hard to extract, and the method fails when the requester wears makeup.
In conclusion existing human face in-vivo detection method can not preferably take into account detection performance and applicability, calculation is constrained
The practical application of method.The problem of for existing face In vivo detection, proposes to regard using the image-forming principle extraction of motion blur image
The motor pattern mapping graph of frequency, and motor pattern and noise pattern are combined, it is used for face In vivo detection, ensures that user's is hidden
Private and property safety, resists malicious attack, enhances the safety of face authentication system.
Summary of the invention
The object of the present invention is to provide a face liveness detection method with high performance and wide applicability.
The face liveness detection method of the present invention is shown in Fig. 2. The algorithm comprises four main parts: a) video preprocessing, b) face motion pattern mapping, c) noise pattern extraction, and d) classification. When a user requests access to the face authentication system, a video of the face is captured by the system camera and preprocessed; the face motion pattern map and the noise pattern are then extracted, gray-level co-occurrence matrices (Gray-level Co-occurrence Matrix, GLCM) are computed, the features are fed into a support vector machine (Support Vector Machine, SVM) classifier, and the final output is either live or spoof.
The system specifically adopts the following technical scheme and steps:
1. Video preprocessing
To avoid the influence of the background in the video, the present invention crops the video during preprocessing. For a face video, the face position is detected in the first frame, and all frames of the video are cropped using this location information. After the video preprocessing, a face video without background is obtained, as shown in Fig. 2(a).
2. Face motion pattern mapping
Inspired by motion-blurred images, the present invention introduces the concept of the face motion pattern map, which projects the motion pattern of the face over the entire video onto a single image. Specifically, the content of the video is regarded as a moving object being photographed, and the duration of the video is regarded as the exposure time of a motion-blurred image; the motion pattern map of the video is then obtained by the following formula (rendered as an image in the original publication):
where V is the preprocessed video, V_{p,q}(t) is the value of the video at time t at position (p, q), and p and q are the horizontal and vertical coordinates of each pixel in the video. For an RGB three-channel color video, the present invention analyzes the G (green) channel, because the G channel best reflects skin-color changes. avg[V_{p,q}(t)] denotes the temporal mean of V_{p,q}. T is the duration of the video, ψ is the motion pattern map, and ψ(p, q) is the value of the motion pattern map at point (p, q).
For a digital video, the signal is sampled at the frame rate, so the motion pattern map ψ is expressed by a discrete form of the above formula (also rendered as an image in the original), where V_{p,q}(k) denotes the value at point (p, q) in the k-th frame of the video and l denotes the total number of frames. The motion pattern map of the video is computed from this formula; a reconstruction of both forms is sketched below.
Fig. 3 shows motion pattern maps under several different conditions; each row corresponds to one case, from top to bottom: a real face, a fixed-photo attack, an attack with large shaking, a hand-held photo attack, and a fixed-video attack. The motion pattern of a real face contains both global and local motion information. The motion pattern map of a fixed-photo attack is very flat, i.e., there is almost no motion. In the third row, the face moves quickly in the video, and the local motion information is disturbed by strong global motion, which is unfavorable for liveness classification. In the fourth row, the hand-held photo attack contains only global motion and no local motion. The last row is a fixed-video attack, whose motion pattern map is very similar to that of a real face. To remedy this, another characteristic, the noise pattern, is further introduced.
3. Noise pattern extraction
The noise in a video is introduced during the imaging process, and an attack video has undergone a second imaging, so its noise differs from that of a real face video produced by a single imaging. This section describes how the noise pattern of the video is extracted and represented for liveness detection; the steps are as follows:
1) First, the noise in the video is extracted. Let V be the input video; the residual noise video can then be expressed as
V_NR = V_gray − V_Filtered
where V_gray is the grayscale-converted video and V_Filtered is V_gray after low-pass filtering. The present invention uses a Gaussian filter with mean μ = 0, standard deviation σ = 2, and a filter size of 7 × 7. The result is shown in the first small image of Fig. 2(c).
2) A 2D discrete Fourier transform (DFT) is applied to the residual noise video, as shown in the following formula (rendered as an image in the original; a reconstruction is sketched after step 3)):
where FS is the Fourier spectrum, (u, w) are the horizontal and vertical coordinates in the Fourier spectrum, M and N are the height and width of the video, and x and y are the horizontal and vertical coordinates in the face video. The result is shown in the second small image of Fig. 2(c).
3) The visual rhythm of the spectrum video is computed. The visual rhythm takes a strip from each frame of the video and assembles the strips into a single image. The choice of the strip is important; usually a horizontal or vertical strip through the center is selected, as shown in Fig. 4, where the solid box indicates the vertical strip and the dashed box indicates the horizontal strip.
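The DFT referred to in step 2) is likewise given only as an image in the original. A standard 2D discrete Fourier transform applied to each frame of the residual noise video, consistent with the symbols defined in step 2), would read as follows (a reconstruction, not a verbatim copy of the patent's formula):

```latex
FS(u,w) \;=\; \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} V_{NR}(x,y)\,
              e^{-j\,2\pi\left(\frac{u x}{M} + \frac{w y}{N}\right)}
```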
4. Classification
This part performs classification using the two features described above. Since the difference between the genuine and spoof classes is mainly reflected in texture information, gray-level co-occurrence matrix (GLCM) features are extracted from the motion pattern image and from the noise visual-rhythm image, converted into one-dimensional vectors, and then classified with a support vector machine (SVM) classifier.
In this process, the fusion of the multi-view features is crucial and directly affects the final classification performance. Decision-level fusion is used here, with the motion pattern feature as the primary cue and the noise pattern feature as the auxiliary cue. From several segments of the same video, several motion pattern maps and their corresponding GLCM features are obtained, yielding several classification results. These results are combined by voting: if the vote has high confidence, it is taken as the final decision; otherwise, the classification result of the noise pattern GLCM feature is used as the final result. The process is expressed by the following formula (rendered as an image in the original):
where P_positive is the fraction of the total votes cast for a real face in the voting process; H_threshold is a higher preset threshold, and when P_positive exceeds it the sample is judged to be a real face; L_threshold is a lower threshold, and when P_positive is below it the sample is judged to be a spoof attack; when P_positive lies between the two thresholds, the decision of the noise pattern is output.
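The decision formula is also given only as an image in the original; from the description above it plausibly takes the following piecewise form (a reconstruction):

```latex
\text{result} \;=\;
\begin{cases}
\text{real face},              & P_{positive} > H_{threshold} \\[2pt]
\text{spoof attack},           & P_{positive} < L_{threshold} \\[2pt]
\text{noise-pattern decision}, & L_{threshold} \le P_{positive} \le H_{threshold}
\end{cases}
```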
Compared with the prior art, the present invention has the following advantages:
Good cross-database performance, with little dependence on the imaging device and the test environment. Using CASIA FASD as the training data and the Replay-Attack database as the test data, the cross-database half total error rate (HTER) is 19.75%, a considerable improvement over the most recent related work.
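For reference, the half total error rate quoted above is conventionally defined as the mean of the false acceptance rate (FAR) and the false rejection rate (FRR):

```latex
\mathrm{HTER} \;=\; \frac{\mathrm{FAR} + \mathrm{FRR}}{2}
```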
Description of the drawings
Fig. 1 shows common attack types;
Fig. 2 is the framework of the liveness detection system according to the present invention;
Fig. 3 shows motion pattern maps formed under different conditions;
Fig. 4 is a schematic diagram of the formation of the visual rhythm image.
Detailed description of the embodiments
To solve the above problems, the present invention provides a design method for a face liveness detection system based on multi-view dynamic features. The present invention is described in further detail below with reference to the accompanying drawings and embodiments. The method specifically includes:
1. Video preprocessing
To avoid the influence of the background in the video, the present invention crops the video during preprocessing. For a face video, the face position is detected in the first frame, and all frames of the video are cropped using this location information. After the video preprocessing, a face video without background is obtained, as shown in Fig. 2(a).
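As an illustration of this preprocessing step, the following is a minimal Python/OpenCV sketch, assuming a video file as input and OpenCV's bundled Haar-cascade face detector; the function name `crop_face_video` and the choice of detector are assumptions of this sketch, not part of the patent.

```python
import cv2
import numpy as np

def crop_face_video(video_path):
    """Detect the face in the first frame and crop every frame to that box.

    A minimal sketch of the preprocessing described above; the detector
    (a Haar cascade) is an illustrative assumption.
    """
    cap = cv2.VideoCapture(video_path)
    ok, first = cap.read()
    if not ok:
        raise IOError("cannot read video: %s" % video_path)

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found in the first frame")
    x, y, w, h = faces[0]          # location taken from the first frame only

    frames = [first[y:y + h, x:x + w]]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame[y:y + h, x:x + w])   # same box for all frames
    cap.release()
    return np.stack(frames)        # shape: (l, h, w, 3), BGR order
```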
2. Face motion pattern extraction
Inspired by motion-blurred images, the present invention introduces the concept of the face motion pattern map, which projects the motion pattern of the face over the entire video onto a single image. Specifically, the content of the video is regarded as a moving object being photographed, and the duration of the video is regarded as the exposure time of a motion-blurred image; the motion pattern map of the video is then obtained by the formula given above,
where V is the preprocessed video, V_{p,q}(t) is the value of the video at time t at position (p, q), and p and q are the horizontal and vertical coordinates of each pixel in the video. For an RGB three-channel color video, the present invention analyzes the G (green) channel, because the G channel best reflects skin-color changes. avg[V_{p,q}(t)] denotes the temporal mean of V_{p,q}. T is the duration of the video, ψ is the motion pattern map, and ψ(p, q) is the value of the motion pattern map at point (p, q).
For a digital video, the signal is sampled at the frame rate, so the motion pattern map ψ is expressed in the discrete form given above, where V_{p,q}(k) denotes the value at point (p, q) in the k-th frame of the video and l denotes the total number of frames. The motion pattern map of the video is computed from this formula.
Since the result of the motion pattern mapping depends on the duration of the video, fixed-length segments are cut from the video for the computation. The present invention sets the segment length T to 4 s.
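A minimal sketch of the motion pattern map on a fixed-length segment, using the G channel as described and assuming the absolute-deviation-from-temporal-mean form reconstructed earlier (the exact formula appears only as an image in the original, so this is an interpretation rather than the patented formula); the 25 fps frame rate is also an assumption.

```python
import numpy as np

def motion_pattern_map(frames_bgr, fps=25, seg_seconds=4):
    """Map the motion of a fixed-length segment onto a single image.

    frames_bgr: array (l, H, W, 3), e.g. from crop_face_video (BGR order,
    so the green channel has index 1). The averaging of |V - mean(V)| over
    frames is an assumed reading of the formula described in the text.
    """
    seg_len = min(len(frames_bgr), fps * seg_seconds)
    g = frames_bgr[:seg_len, :, :, 1].astype(np.float64)   # G channel only
    temporal_mean = g.mean(axis=0, keepdims=True)          # avg[V_{p,q}(t)]
    psi = np.abs(g - temporal_mean).mean(axis=0)           # psi(p, q)
    # normalize to 8-bit so GLCM features can later be computed on it
    psi = 255.0 * (psi - psi.min()) / (np.ptp(psi) + 1e-12)
    return psi.astype(np.uint8)
```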
3. Noise pattern extraction
The noise in a video is introduced during the imaging process, and an attack video has undergone a second imaging, so its noise differs from that of a real face video produced by a single imaging. This section describes how the noise pattern of the video is extracted and represented for liveness detection; the steps are as follows:
1) First, the noise in the video is extracted. Let V be the input video; the residual noise video can then be expressed as
V_NR = V_gray − V_Filtered
where V_gray is the grayscale-converted video and V_Filtered is V_gray after low-pass filtering. The present invention uses a Gaussian filter with mean μ = 0, standard deviation σ = 2, and a filter size of 7 × 7. The result is shown in the first small image of Fig. 2(c).
2) A 2D discrete Fourier transform (DFT) is applied to the residual noise video, as shown in the formula given above (rendered as an image in the original; see the reconstruction given earlier),
where FS is the Fourier spectrum, (u, w) are the horizontal and vertical coordinates in the Fourier spectrum, M and N are the height and width of the video, and x and y are the horizontal and vertical coordinates in the face video. The result is shown in the second small image of Fig. 2(c).
3) The visual rhythm of the spectrum video is computed. The visual rhythm takes a strip from each frame of the video and assembles the strips into a single image. The choice of the strip is important; usually a horizontal or vertical strip through the center is selected, as shown in Fig. 4, where the solid box indicates the vertical strip and the dashed box indicates the horizontal strip. The present invention uses a vertical strip 30 pixels wide for the visual rhythm. A sketch of steps 1)–3) is given below.
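The following Python sketch illustrates steps 1)–3): grayscale conversion and Gaussian low-pass filtering with the stated parameters (σ = 2, 7 × 7 kernel) to obtain the residual noise, a 2D DFT of each residual frame, and a visual rhythm assembled from a 30-pixel-wide vertical strip through the center of each spectrum. Displaying the log-magnitude of the centered spectrum and the exact placement of the strip are assumptions of this sketch.

```python
import cv2
import numpy as np

def residual_noise_video(frames_bgr):
    """Step 1): V_NR = V_gray - V_Filtered, frame by frame (sigma = 2, 7x7 kernel)."""
    residuals = []
    for frame in frames_bgr:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        blurred = cv2.GaussianBlur(gray, ksize=(7, 7), sigmaX=2, sigmaY=2)
        residuals.append(gray - blurred)
    return np.stack(residuals)                           # (l, H, W) signed residuals

def spectrum_visual_rhythm(noise_video, strip_width=30):
    """Steps 2)-3): per-frame 2D DFT, then a visual rhythm built from the
    central vertical strip of each spectrum (log-magnitude display and the
    exact strip placement are assumptions of this sketch)."""
    strips = []
    for frame in noise_video:
        spectrum = np.fft.fftshift(np.fft.fft2(frame))   # centered 2D DFT
        magnitude = np.log1p(np.abs(spectrum))           # log-magnitude
        center = frame.shape[1] // 2
        half = strip_width // 2
        strips.append(magnitude[:, center - half:center + half])
    rhythm = np.hstack(strips)                           # (H, l * strip_width)
    rhythm = 255.0 * (rhythm - rhythm.min()) / (np.ptp(rhythm) + 1e-12)
    return rhythm.astype(np.uint8)                       # 8-bit image for GLCM
```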
4. Classification
This part performs classification using the two features described above. Since the difference between the genuine and spoof classes is mainly reflected in texture information, gray-level co-occurrence matrix (GLCM) features are extracted from the motion pattern image and from the noise visual-rhythm image, converted into one-dimensional vectors, and then classified with a support vector machine (SVM) classifier.
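A sketch of this feature extraction and classification step, using scikit-image's gray-level co-occurrence matrix and scikit-learn's SVM; the GLCM distances, angles, and properties below are illustrative choices, since the patent does not specify them.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.svm import SVC

def glcm_feature(image_u8):
    """Flatten GLCM statistics of an 8-bit image into a 1-D feature vector.

    The distances, angles, and properties are illustrative, not from the patent.
    """
    glcm = graycomatrix(image_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_svm(feature_vectors, labels):
    """Train an SVM on stacked GLCM feature vectors (labels: 1 = live, 0 = spoof)."""
    clf = SVC(kernel="rbf")
    clf.fit(np.vstack(feature_vectors), np.asarray(labels))
    return clf
```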
The fusion of the multi-view features is crucial and directly affects the final classification performance. Decision-level fusion is used here, with the motion pattern feature as the primary cue and the noise pattern feature as the auxiliary cue. From several segments of the same video, several motion pattern maps and their corresponding GLCM features are obtained, yielding several classification results. These results are combined by voting: if the vote has high confidence, it is taken as the final decision; otherwise, the classification result of the noise pattern GLCM feature is used as the final result. The process is expressed by the formula given above, where P_positive is the fraction of the total votes cast for a real face in the voting process; H_threshold is a higher preset threshold, and when P_positive exceeds it the sample is judged to be a real face; L_threshold is a lower threshold, and when P_positive is below it the sample is judged to be a spoof attack; when P_positive lies between the two thresholds, the decision of the noise pattern is output. The present invention sets H_threshold = 0.9 and L_threshold = 0.2.
Claims (5)
1. A face liveness detection method based on multi-view dynamic features, characterized by comprising four parts: a) video preprocessing, b) face motion pattern mapping, c) noise pattern extraction, and d) classification; when a user requests access to the face authentication system, a video of the face is captured by the system camera and preprocessed, and the face motion pattern map and the noise pattern are extracted; gray-level co-occurrence matrices are then computed and fed into a support vector machine classifier for classification, and the final output is either live or spoof.
2. The method according to claim 1, characterized in that: the video is cropped in the video preprocessing; for a face video, the face position is detected in the first frame, and all frames of the video are cropped using this location information; after the video preprocessing, a face video without background is obtained.
3. The method according to claim 1, characterized in that: the content of the video is regarded as a moving object being photographed, and the duration of the video is regarded as the exposure time of a motion-blurred image; the motion pattern map of the video is then obtained by the following formula:
where V is the preprocessed video, V_{p,q}(t) is the value of the video at time t at position (p, q), and p and q are the horizontal and vertical coordinates of each pixel in the video; avg[V_{p,q}(t)] denotes the temporal mean of V_{p,q}; T is the duration of the video, ψ is the motion pattern map, and ψ(p, q) is the value of the motion pattern map at point (p, q);
for a digital video, the signal is sampled at the frame rate, so the motion pattern map ψ is expressed as:
where V_{p,q}(k) denotes the value at point (p, q) in the k-th frame of the video and l denotes the total number of frames; the motion pattern map of the video is computed by the above formula.
4. The method according to claim 1, characterized in that the noise pattern extraction comprises the following steps:
1) first extracting the noise in the video: let V be the input video, the residual noise video is then expressed as
V_NR = V_gray − V_Filtered
where V_gray is the grayscale-converted video and V_Filtered is V_gray after low-pass filtering; a Gaussian filter with mean μ = 0, standard deviation σ = 2, and a filter size of 7 × 7 is used;
2) applying a 2D discrete Fourier transform to the residual noise video, as shown in the following formula:
where FS is the Fourier spectrum, (u, w) are the horizontal and vertical coordinates in the Fourier spectrum, M and N are the height and width of the video, and x and y are the horizontal and vertical coordinates in the face video;
3) computing the visual rhythm of the spectrum video: the visual rhythm takes a strip from each frame of the video and assembles the strips into a single image, the strip being the horizontal or vertical strip through the center.
5. The method according to claim 1, characterized in that the classification process is expressed by the following formula:
where P_positive is the fraction of the total votes cast for a real face in the voting process; H_threshold is a higher preset threshold, and when P_positive exceeds it the sample is judged to be a real face; L_threshold is a lower threshold, and when P_positive is below it the sample is judged to be a spoof attack; when P_positive lies between the two thresholds, the decision of the noise pattern is output; H_threshold = 0.9 and L_threshold = 0.2 are set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810555735.9A CN108446690B (en) | 2018-05-31 | 2018-05-31 | Human face in-vivo detection method based on multi-view dynamic features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810555735.9A CN108446690B (en) | 2018-05-31 | 2018-05-31 | Human face in-vivo detection method based on multi-view dynamic features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108446690A true CN108446690A (en) | 2018-08-24 |
CN108446690B CN108446690B (en) | 2021-09-14 |
Family
ID=63206480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810555735.9A Active CN108446690B (en) | 2018-05-31 | 2018-05-31 | Human face in-vivo detection method based on multi-view dynamic features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108446690B (en) |
- 2018-05-31: CN application CN201810555735.9A, granted as patent CN108446690B (en), status: Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130044923A1 (en) * | 2008-12-05 | 2013-02-21 | DigitalOptics Corporation Europe Limited | Face Recognition Using Face Tracker Classifier Data |
CN101999900A (en) * | 2009-08-28 | 2011-04-06 | 南京壹进制信息技术有限公司 | Living body detecting method and system applied to human face recognition |
CN105320950A (en) * | 2015-11-23 | 2016-02-10 | 天津大学 | A video human face living body detection method |
CN105260731A (en) * | 2015-11-25 | 2016-01-20 | 商汤集团有限公司 | Human face living body detection system and method based on optical pulses |
CN106228129A (en) * | 2016-07-18 | 2016-12-14 | 中山大学 | A kind of human face in-vivo detection method based on MATV feature |
CN107798279A (en) * | 2016-09-07 | 2018-03-13 | 北京眼神科技有限公司 | Face living body detection method and device |
CN107358152A (en) * | 2017-06-02 | 2017-11-17 | 广州视源电子科技股份有限公司 | A kind of vivo identification method and system |
CN107451575A (en) * | 2017-08-08 | 2017-12-08 | 济南大学 | A kind of face anti-fraud detection method in identity authorization system |
CN107506713A (en) * | 2017-08-15 | 2017-12-22 | 哈尔滨工业大学深圳研究生院 | Living body faces detection method and storage device |
CN107862299A (en) * | 2017-11-28 | 2018-03-30 | 电子科技大学 | A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera |
Non-Patent Citations (1)
Title |
---|
ALLAN PINTO 等: "Using Visual Rhythms for Detecting Video-Based Facial Spoof Attacks", 《IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY》 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109815797A (en) * | 2018-12-17 | 2019-05-28 | 北京飞搜科技有限公司 | Biopsy method and device |
CN109815797B (en) * | 2018-12-17 | 2022-04-19 | 苏州飞搜科技有限公司 | Living body detection method and apparatus |
CN111382607A (en) * | 2018-12-28 | 2020-07-07 | 北京三星通信技术研究有限公司 | Living body detection method and device and face authentication system |
US20220183616A1 (en) * | 2019-03-06 | 2022-06-16 | San Diego State University (SDSU) Foundation, dba San Diego State University Research Foundation | Methods and systems for continuous measurement of anomalies for dysmorphology analysis |
US11883186B2 (en) * | 2019-03-06 | 2024-01-30 | San Diego State University (Sdsu) Foundation | Methods and systems for continuous measurement of anomalies for dysmorphology analysis |
CN110222486A (en) * | 2019-05-18 | 2019-09-10 | 王�锋 | User ID authentication method, device, equipment and computer readable storage medium |
WO2020232889A1 (en) * | 2019-05-23 | 2020-11-26 | 平安科技(深圳)有限公司 | Check encashment method, apparatus and device, and computer-readable storage medium |
CN111241989A (en) * | 2020-01-08 | 2020-06-05 | 腾讯科技(深圳)有限公司 | Image recognition method and device and electronic equipment |
CN111241989B (en) * | 2020-01-08 | 2023-06-13 | 腾讯科技(深圳)有限公司 | Image recognition method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN108446690B (en) | 2021-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108446690A (en) | Face liveness detection method based on multi-view dynamic features | |
US6661907B2 (en) | Face detection in digital images | |
CN108596041B (en) | A kind of human face in-vivo detection method based on video | |
CN106557723B (en) | Face identity authentication system with interactive living body detection and method thereof | |
US8600121B2 (en) | Face recognition system and method | |
WO2018082011A1 (en) | Living fingerprint recognition method and device | |
US20040125994A1 (en) | Method for forgery recognition in fingerprint recognition by using a texture classification of gray scale differential images | |
WO2016084072A1 (en) | Anti-spoofing system and methods useful in conjunction therewith | |
CN103473564B (en) | A kind of obverse face detection method based on sensitizing range | |
WO2016172923A1 (en) | Video detection method, video detection system, and computer program product | |
CN111523344B (en) | Human body living body detection system and method | |
Khan et al. | Low dimensional representation of dorsal hand vein features using principle component analysis (PCA) | |
CN109409343A (en) | A kind of face identification method based on In vivo detection | |
CN107862298B (en) | Winking living body detection method based on infrared camera device | |
Yao et al. | rPPG-based spoofing detection for face mask attack using efficientnet on weighted spatial-temporal representation | |
WO2022268183A1 (en) | Video-based random gesture authentication method and system | |
Das et al. | A framework for liveness detection for direct attacks in the visible spectrum for multimodal ocular biometrics | |
Khan et al. | A new method to extract dorsal hand vein pattern using quadratic inference function | |
Chiu et al. | A micro-control capture images technology for the finger vein recognition based on adaptive image segmentation | |
CN112801066B (en) | Identity recognition method and device based on multi-posture facial veins | |
CN112861588A (en) | Living body detection method and device | |
Yu et al. | Research on face anti-spoofing algorithm based on image fusion | |
CN112861587B (en) | Living body detection method and device | |
Demirel et al. | Iris recognition system using combined colour statistics | |
CN107516091A (en) | A kind of head portrait for ATM terminals, which is covered, sentences knowledge method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |