CN109598242A - A novel liveness detection method - Google Patents
A novel liveness detection method
- Publication number
- Publication number: CN109598242A; Application number: CN201811483851.0A
- Authority
- CN
- China
- Prior art keywords
- model
- face
- living body
- image
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Abstract
The invention discloses a novel liveness detection method whose overall steps are as follows. Data preparation stage: acquisition and annotation of images and optical flow information, and acquisition and annotation of the long-term variation feature of the face detection box. Model design stage: the liveness detection model is divided into a model for detecting panoramic optical flow and texture information and a model for detecting the variation pattern of the face detection box, and the two models perform liveness detection simultaneously. Model training stage: the model for detecting panoramic optical flow and texture information and the model for detecting the variation of the face detection box are trained separately. Model testing stage: consecutive video frames from the video sequence captured by the gate camera are obtained and input into the liveness detection model to obtain the judgment result. The invention makes full use of the otherwise idle recognition time of the face recognition system, further reduces the user's detection waiting time, and can detect attacks of various modes.
Description
Technical field
The present invention relates to detection methods, and more particularly to a novel liveness detection method, belonging to the field of machine vision technology.
Background technique
As an identity authentication technology, face recognition has been widely applied in many fields and greatly facilitates people's lives. However, with its widespread use, attacks against face recognition systems have become increasingly common. They mainly fall into three categories: photo attacks, replayed-video attacks, and 3D mask attacks, all of which seriously threaten security. The task of liveness detection is therefore to judge whether the face detected by the system shows vital signs, preventing a malicious impostor from stealing another person's face for recognition. In the field of face liveness detection, the main methods fall into the following categories:
1) Texture-based analysis. Technical approach: exploiting the textural differences between real faces and attack faces, a feature extractor is trained on massive data, and the extracted facial features are used for liveness recognition. Problems and disadvantages: this method depends heavily on the training dataset; since attack modes in real scenarios are complex, the training dataset often cannot cover every attack condition, so recognition accuracy is low for unseen attack scenarios.
2) Motion-based analysis. Technical approach: this method assumes that 3D faces and 2D face images exhibit different motion patterns. For a real face, the motion directions of the facial landmarks are inconsistent with the overall motion direction of the face, whereas for a photo face the motion direction of the landmarks is consistent with that of the face as a whole; optical flow is therefore used to distinguish live from non-live faces. Problems and disadvantages: motion-based analysis performs poorly on videos with little motion, has high computational complexity, and can only detect photo attacks.
3) Physiological-feature-based analysis. Technical approach: exploiting physiological signs of the face, liveness is recognized through the user's active or passive cooperation, e.g. actively turning the head or passively blinking. Problems and disadvantages: this method requires the user's active or passive cooperation, which seriously degrades the user experience, and the detection time is long.
4) Extra-device-based analysis. Technical approach: additional devices are introduced, for example a binocular acquisition system that separately captures color video images and infrared video images; photos and live faces are finally classified by a trained classifier. Problems and disadvantages: introducing extra devices makes liveness detection expensive.
In summary, the prior art still suffers from problems such as poor generalization of the liveness detection model, long detection time, or a single detection pattern; its applicability is therefore limited, and it cannot meet the needs of different users.
Summary of the invention
To overcome the shortcomings of the above techniques, the present invention provides a novel liveness detection method.
To solve the above technical problems, the technical solution adopted by the present invention is a novel liveness detection method whose overall steps are as follows:
Step 1, data preparation stage: acquisition and annotation of images and optical flow information, and acquisition and annotation of the long-term variation feature of the face detection box.
Step 2, model design stage: the liveness detection model is divided into a model for detecting panoramic optical flow and texture information and a model for detecting the variation pattern of the face detection box; the two models perform liveness detection simultaneously, using respectively the panoramic optical flow and texture information in the video images and the long-term variation information of the face detection box.
Step 3, model training stage: the model for detecting panoramic optical flow and texture information and the model for detecting the variation pattern of the face detection box are trained separately; after training, they are combined into the complete liveness detection model.
Step 4, model testing stage: consecutive video frames from the video sequence captured by the gate camera are obtained and input into the liveness detection model to obtain the judgment result.
Further, the detailed process of step 1 is as follows:
a. Acquisition and annotation of images and optical flow information: for the video sequence dataset, RGB images are first sampled at equal intervals (set to 1 s) from a given video sequence, and each collected RGB image is annotated as live or non-live: 0 denotes a non-live image, 1 a live image. This yields N panoramic images P_origin with corresponding labels F_origin; the set of N samples is denoted D_origin = {(P_origin^1, F_origin^1), ..., (P_origin^N, F_origin^N)}. Secondly, for the same video, with an interval of m frames (m = 2~6), another image adjacent to each image obtained above is collected; the optical flow of each pair of adjacent RGB frames is then computed with the iteratively reweighted least squares (IRLS) method, and each computed flow map is annotated as live or non-live: 0 denotes a non-live flow map, 1 a live flow map. This finally yields N panoramic flow maps P_of in one-to-one correspondence with D_origin, together with their labels F_of; the set of N samples is denoted D_of = {(P_of^1, F_of^1), ..., (P_of^N, F_of^N)}. The annotated original image dataset D_origin and the corresponding optical flow dataset D_of are merged into D1.
b. Acquisition and annotation of the long-term variation feature of the face detection box: each video in the dataset is passed through the SeetaFace face detector to obtain the face location in every frame of every video sequence. The location information consists of the top-left corner (x_min, y_min) and the bottom-right corner (x_max, y_max). From these coordinates the area of the face detection box is computed as s = (x_max - x_min) * (y_max - y_min). The box areas in each video are then sampled in temporal order and combined into a vector S = {s1, s2, s3, ..., sn} of length n, which serves as the long-term variation feature of the face detection box for the corresponding video sequence; the vector is annotated as live or non-live: 0 denotes the long-term variation feature of a non-live face detection box, 1 that of a live one. Finally, N face-detection-box variation features S with corresponding labels F are computed; the set of N samples is denoted D2 = {(S1, F1), (S2, F2), ..., (SN, FN)}.
Further, the detailed process of step 2 is as follows:
a. The overall liveness detection model is denoted M and consists of two parts: model A, for detecting panoramic optical flow and texture information, and model B, for detecting the variation pattern of the face detection box. The input of model M is the consecutive video frames P1, P2, P3, ..., Pn of the entrance video sequence I captured at the gate.
b. Each input picture P is examined with the SeetaFace face detector. If a face is detected, go to step c and start liveness detection; if no face is detected, remain in this step and continue face detection on the input pictures.
c. Models A and B perform liveness detection simultaneously, using respectively the panoramic optical flow and texture information in the video images and the long-term variation information of the face detection box. The detection of model A proceeds to step d; the detection of model B proceeds to step e.
d. With a period of 1 s, pairs of RGB images are continuously collected, the two images of a pair being m frames apart (m = 2~6). From each pair, the panoramic optical flow is computed with the iteratively reweighted least squares (IRLS) method described in step 1 and used as the first input of model A; the first (original) image of the pair serves as the second input of model A. Model A judges from these two inputs: if it judges live, remain in this step and judge the next pair, stopping when the area of the face detection box in the picture reaches the threshold and going to step g; if it judges non-live, go directly to step h.
e. First judge whether the size of the face detection box exceeds the threshold. If it does, judge non-live directly and go to step h; if it does not exceed the threshold, go to step f.
f. Each RGB frame of the input video is collected continuously, and the area s of the face detection box found by the SeetaFace detector is stored in temporal order as a number, until one of the following occurs, at which point acquisition and storage stop: 1) the area of the face detection box reaches the threshold; 2) the size of the face detection box floats within a very narrow range for k consecutive frames (k = 10~15); 3) the detector finds no face for k consecutive frames (k = 10~15). The stored area values are then sampled in temporal order into a vector S = {s1, s2, s3, ..., sn} of length n, the long-term variation feature of the face detection box. This feature is input into model B for judgment: if it judges live, go to step g; if it judges non-live, go to step h.
g. If models A and B both judge live, model M judges the video to be a normal entrance video, i.e. live, and the face recognition system proceeds to the face recognition module.
h. If either of models A and B, or both, judge non-live, model M judges the video to be an abnormal attack video, i.e. non-live, and the face recognition system stops identifying this user.
Further, the detailed process of step 3 is as follows:
a. Model A detects panoramic optical flow and texture information; model B detects the variation pattern of the face detection box. The two models must be trained separately.
b. Training model A: the annotated original-image and optical-flow dataset D1 obtained in step 1 is split into a training set T1 and a validation set V1. T1 is fed into model A and the model is trained with mini-batch stochastic gradient descent, while V1 is used to validate the training result: when the model reaches good liveness detection accuracy on V1 and that accuracy no longer improves as training continues, training stops.
c. Training model B: the annotated long-term variation feature dataset of the face detection box obtained in step 1 is input into model B for training.
d. Training is finally complete; models A and B are obtained and combined into the complete liveness detection model M.
Further, the detailed process of step 4 is as follows:
a. The camera captures the entrance video sequence I of the user's face, and the consecutive video frames P1, P2, P3, ..., Pn in I are obtained.
b. The frames P1, P2, P3, ..., Pn are input into the liveness detection model M obtained in step 3, and the judgment of model M is obtained.
By probing the differences between the normal face-entrance mode and abnormal attack modes, the present invention proposes the long-term variation feature of the face detection box and combines it with motion optical-flow information and static texture information for liveness detection. The liveness detection algorithm runs while the user is approaching the camera, a period during which the face recognition system cannot yet effectively identify the face. The invention therefore makes full use of the otherwise idle recognition time of the face recognition system, further reduces the user's detection waiting time, and can detect attacks of various modes.
Description of the drawings
Fig. 1 is a schematic diagram of the overall flow of the invention.
Specific embodiment
The present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments.
The novel liveness detection method shown in Fig. 1 has the following overall process:
Step 1: data preparation stage
a. Acquisition and annotation of images and optical flow information: for the video sequence dataset, RGB images are first sampled at equal intervals (set to 1 s) from a given video sequence, and each collected RGB image is annotated as live or non-live: 0 denotes a non-live image, 1 a live image. This yields N panoramic images P_origin with corresponding labels F_origin; the set of N samples is denoted D_origin = {(P_origin^1, F_origin^1), ..., (P_origin^N, F_origin^N)}. Secondly, for the same video, with an interval of m frames (m = 2~6), another image adjacent to each image obtained above is collected; the optical flow of each pair of adjacent RGB frames is then computed with the iteratively reweighted least squares (IRLS) method, and each computed flow map is annotated as live or non-live: 0 denotes a non-live flow map, 1 a live flow map. This finally yields N panoramic flow maps P_of in one-to-one correspondence with D_origin, together with their labels F_of; the set of N samples is denoted D_of = {(P_of^1, F_of^1), ..., (P_of^N, F_of^N)}. The annotated original image dataset D_origin and the corresponding optical flow dataset D_of are merged into D1.
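The pairing scheme above (one anchor frame per second, a partner frame m frames later) can be sketched as follows. This is an illustrative reconstruction, not the patent's code; `fps` and `sample_frame_pairs` are names chosen here, and the IRLS optical-flow solver itself is not shown — any dense optical-flow routine would be applied to each returned pair.

```python
def sample_frame_pairs(n_frames, fps, m):
    """Return index pairs (anchor, anchor + m): one anchor per second of video,
    the partner frame m frames later (m = 2~6 in the patent)."""
    pairs = []
    for anchor in range(0, n_frames, fps):  # equal 1-second spacing
        partner = anchor + m
        if partner < n_frames:              # drop pairs that run past the video
            pairs.append((anchor, partner))
    return pairs
```

For a 100-frame clip at 25 fps with m = 3, this yields anchors at frames 0, 25, 50, 75 and partners 3 frames later.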
b. Acquisition and annotation of the long-term variation feature of the face detection box: each video in the dataset is passed through the SeetaFace face detector to obtain the face location in every frame of every video sequence. The location information consists of the top-left corner (x_min, y_min) and the bottom-right corner (x_max, y_max). From these coordinates the area of the face detection box is computed as s = (x_max - x_min) * (y_max - y_min). The box areas in each video are then sampled in temporal order and combined into a vector S = {s1, s2, s3, ..., sn} of length n, which serves as the long-term variation feature of the face detection box for the corresponding video sequence; the vector is annotated as live or non-live: 0 denotes the long-term variation feature of a non-live face detection box, 1 that of a live one. Finally, N face-detection-box variation features S with corresponding labels F are computed; the set of N samples is denoted D2 = {(S1, F1), (S2, F2), ..., (SN, FN)}.
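A minimal sketch of building the long-term variation feature S from per-frame detector output. The patent does not fix the resampling scheme, so uniform index selection is assumed here; the function names are illustrative.

```python
def box_area(xmin, ymin, xmax, ymax):
    """Area s of one face detection box from its corner coordinates."""
    return (xmax - xmin) * (ymax - ymin)

def long_term_feature(areas, n):
    """Resample the time-ordered box areas into a fixed-length vector S of length n
    (assumed: uniform index selection over the recorded sequence)."""
    if len(areas) < 2 or n < 2:
        return list(areas[:n])
    idx = [round(i * (len(areas) - 1) / (n - 1)) for i in range(n)]
    return [areas[i] for i in idx]
```

For example, a box with corners (10, 20) and (110, 220) has area 100 * 200 = 20000, and a 37-frame area sequence resampled with n = 16 yields a 16-element feature vector.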
Step 2: model design stage
a. The overall liveness detection model is denoted M and consists of two parts: model A, for detecting panoramic optical flow and texture information, and model B, for detecting the variation pattern of the face detection box. The input of model M is the consecutive video frames P1, P2, P3, ..., Pn of the entrance video sequence I captured at the gate.
b. Each input picture P is examined with the SeetaFace face detector. If a face is detected, go to step c and start liveness detection; if no face is detected, remain in this step and continue face detection on the input pictures.
c. Models A and B perform liveness detection simultaneously, using respectively the panoramic optical flow and texture information in the video images and the long-term variation information of the face detection box. The detection of model A proceeds to step d; the detection of model B proceeds to step e.
d. With a period of 1 s, pairs of RGB images are continuously collected, the two images of a pair being m frames apart (m = 2~6). From each pair, the panoramic optical flow is computed with the IRLS method described in step 1 and used as the first input of model A; the first (original) image of the pair serves as the second input of model A. Model A judges from these two inputs: if it judges live, remain in this step and judge the next pair, stopping when the area of the face detection box in the picture reaches the threshold (the threshold here is an empirical value determined by the technician for the given application scenario; because subject distance and camera mounting position differ, its value differs between scenarios), then go to step g; if it judges non-live, go directly to step h.
e. First judge whether the size of the face detection box exceeds the threshold (the same threshold as in step d). If it exceeds the threshold, judge non-live directly and go to step h; if it does not exceed the threshold, go to step f.
f. Each RGB frame of the input video is collected continuously, and the area s of the face detection box found by the SeetaFace detector is stored in temporal order as a number, until one of the following occurs, at which point acquisition and storage stop: 1) the area of the face detection box reaches the threshold; 2) the size of the face detection box floats within a very narrow range for k consecutive frames (k = 10~15), this range being set to (s - m*s, s + m*s) with m = 0.05~0.15, where s is the size of the face detection box in the first of the k consecutive frames; 3) the detector finds no face for k consecutive frames (k = 10~15). The stored area values are then sampled in temporal order into a vector S = {s1, s2, s3, ..., sn} of length n, the long-term variation feature of the face detection box. This feature is input into model B for judgment: if it judges live, go to step g; if it judges non-live, go to step h.
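Stopping condition 2) — the box size floating within (s - m*s, s + m*s) for k consecutive frames — reduces to a simple window check; a hedged sketch with illustrative names:

```python
def box_size_stagnant(areas, k=10, m=0.1):
    """True if the last k detection-box areas all lie in (s - m*s, s + m*s),
    where s is the area in the first of those k frames (k = 10~15, m = 0.05~0.15)."""
    if len(areas) < k:
        return False                 # not enough history yet
    window = areas[-k:]
    s = window[0]
    return all(s - m * s < a < s + m * s for a in window)
```

A printed photo held at a fixed distance produces a nearly constant box area, so this check fires; a genuinely approaching face keeps growing and never satisfies it.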
g. If models A and B both judge live, model M judges the video to be a normal entrance video, i.e. live, and the face recognition system proceeds to the face recognition module.
h. If either of models A and B, or both, judge non-live, model M judges the video to be an abnormal attack video, i.e. non-live, and the face recognition system stops identifying this user.
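Steps g and h amount to an AND-fusion of the two branch verdicts; a minimal sketch (the function name is illustrative):

```python
def model_m_verdict(a_is_live, b_is_live):
    """Model M judges 'live' only when model A and model B both judge live (step g);
    a non-live verdict from either branch rejects the video (step h)."""
    return "live" if (a_is_live and b_is_live) else "non-live"
```

This is the strictest fusion rule: a spoof only has to be caught by one of the two branches to be rejected.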
Step 3: model training stage
a. Model A detects panoramic optical flow and texture information; model B detects the variation pattern of the face detection box. The two models must be trained separately.
b. Training model A: the annotated original-image and optical-flow dataset D1 obtained in step 1 is split into a training set T1 and a validation set V1. T1 is fed into model A and the model is trained with mini-batch stochastic gradient descent, while V1 is used to validate the training result: when the model reaches good liveness detection accuracy on V1 and that accuracy no longer improves as training continues, training stops.
c. Training model B: the annotated long-term variation feature dataset of the face detection box obtained in step 1 is input into model B for training.
d. Training is finally complete; models A and B are obtained and combined into the complete liveness detection model M.
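The early-stopping rule in step b — stop when accuracy on the validation set V1 no longer improves — can be sketched independently of any deep-learning framework. `train_epoch`, `validate`, and the `patience` counter are names and an assumption introduced here; the patent does not specify how "no longer improves" is measured.

```python
def train_until_plateau(train_epoch, validate, patience=3, max_epochs=100):
    """Run mini-batch SGD epochs over T1, stopping once accuracy on V1
    has not improved for `patience` consecutive epochs."""
    best, stale = 0.0, 0
    for _ in range(max_epochs):
        train_epoch()                # one pass of mini-batch SGD over T1
        acc = validate()             # liveness accuracy on the validation set V1
        if acc > best:
            best, stale = acc, 0     # new best: reset the stall counter
        else:
            stale += 1
            if stale >= patience:    # plateau reached: stop training
                break
    return best
```

In practice `train_epoch` and `validate` would wrap the actual model-A optimizer and evaluation loop.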
Step 4: model testing stage
a. The camera captures the entrance video sequence I of the user's face, and the consecutive video frames P1, P2, P3, ..., Pn in I are obtained.
b. The frames P1, P2, P3, ..., Pn are input into the liveness detection model M obtained in step 3, and the judgment of model M is obtained.
The technical characteristics and technical effects of the invention are as follows:
(1) A multi-cue fusion framework for face liveness detection: from the continuously input video sequence, the framework simultaneously learns the long-term variation feature of the face detection box, the motion optical-flow feature of the panoramic information, and the static image texture feature, and combines the three kinds of features for end-to-end liveness detection. Corresponding technical effect: multiple kinds of information allow various attack modes to be detected, greatly improving liveness recognition accuracy, while the parallel operation of several methods further reduces the liveness detection time.
(2) The long-term variation feature of the face detection box: all face-detection-box areas detected from the moment the system first detects the user's face until face recognition actually begins are modeled, and sampled in temporal order into a fixed-length vector that serves as the long-term variation feature of the face detection box. Corresponding technical effect: the normal face-entrance mode and abnormal face-attack modes exhibit different forms; modeling the variation of the detection-box area into this feature probes the difference between the two entrance modes.
(3) Edge-motion detection with panoramic optical flow: exploiting the user's motion while approaching the camera, two adjacent panoramic photos in the video frames (including both face and background) are used to compute the panoramic optical flow. For a normal face, the computed flow map resembles the structure of a human face; for a hand-held photo or video attack, the flow map instead resembles the square structure of the photo or the playback medium, and this square edge information in the flow map serves as evidence of a photo or video attack. Corresponding technical effect: panoramic optical flow improves the detection performance of the optical-flow method and enables detection of both photo and video attack modes.
(4) A dual-branch convolutional neural network for liveness detection: the panoramic optical flow and the corresponding original image are input into separate convolutional neural networks, forming a dual-branch network structure that uses motion optical flow and static texture simultaneously for liveness detection. Corresponding technical effect: the cooperation of optical flow information and texture information greatly improves liveness detection accuracy.
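The dual-branch fusion structure can be illustrated schematically. The sketch below replaces each convolutional branch with a single linear + ReLU projection purely to keep the example self-contained in NumPy — the patent's actual branches are CNNs, and every name and dimension here is an assumption.

```python
import numpy as np

def branch(img, w):
    """One branch: flatten its input and project to a feature vector (stand-in for a CNN)."""
    return np.maximum(img.reshape(-1) @ w, 0.0)   # linear + ReLU

def dual_branch_live_prob(flow_img, rgb_img, w_flow, w_rgb, w_head):
    """Concatenate the motion (optical-flow) and appearance (RGB) features, then
    map the fused feature to a 'live' probability with a sigmoid head."""
    fused = np.concatenate([branch(flow_img, w_flow), branch(rgb_img, w_rgb)])
    return float(1.0 / (1.0 + np.exp(-(fused @ w_head))))

rng = np.random.default_rng(0)
flow = rng.standard_normal((8, 8, 2))             # toy 2-channel flow map
rgb = rng.standard_normal((8, 8, 3))              # toy RGB image
p = dual_branch_live_prob(flow, rgb,
                          rng.standard_normal((128, 16)) * 0.1,   # 8*8*2 -> 16
                          rng.standard_normal((192, 16)) * 0.1,   # 8*8*3 -> 16
                          rng.standard_normal((32,)) * 0.1)       # fused 32 -> logit
```

The key design point survives the simplification: the two modalities are encoded separately and only fused at the feature level, so each branch can specialize in its own cue.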
The invention first exploits the difference between the normal face-entrance mode and abnormal attack modes: the variation of the face-detection-box area during entrance is modeled, and the long-term variation feature of the face detection box is proposed, through which attacks of abnormal entrance are detected. At the same time, a dual-branch convolutional neural network combines motion optical flow with static texture, further improving liveness detection accuracy. Since the liveness detection module of the invention runs during the period in which the system cannot yet perform effective identification, it does not affect the recognition time of the face recognition module and further shortens the user's waiting time for liveness detection, while also recognizing various attack modes and further improving liveness detection accuracy.
The above embodiment does not limit the present invention, nor is the present invention limited to the above example; variations, modifications, additions, or substitutions made by those skilled in the art within the scope of the technical solution of the present invention also fall within the protection scope of the present invention.
Claims (5)
1. A novel liveness detection method, characterized in that the overall steps of the method are:
Step 1, data preparation stage: acquisition and annotation of images and optical flow information, and acquisition and annotation of the long-term variation feature of the face detection box;
Step 2, model design stage: the liveness detection model is divided into a model for detecting panoramic optical flow and texture information and a model for detecting the variation pattern of the face detection box; the two models perform liveness detection simultaneously, using respectively the panoramic optical flow and texture information in the video images and the long-term variation information of the face detection box;
Step 3, model training stage: the model for detecting panoramic optical flow and texture information and the model for detecting the variation pattern of the face detection box are trained separately; after training, they are combined into the complete liveness detection model;
Step 4, model testing stage: consecutive video frames from the video sequence captured by the gate camera are obtained and input into the liveness detection model to obtain the judgment result.
2. novel biopsy method according to claim 1, it is characterised in that: the detailed process of the step 1
Are as follows:
A, the acquisition of image and Optic flow information and mark: equally spaced first from particular video frequency sequence for video sequence data collection
RGB image information is acquired in column, is set to 1s, and living body and non-living body then are carried out to collected RGB image information
Mark: 0 represents non-living body image, and 1 represents living body image;It is final to obtain N Zhang Quanjing image PoriginAnd its corresponding mark
Forigin, the set of N number of sample composition is denoted asSecondly,
For the same video with m frame for an interval, m value 2~6, the acquisition another figure neighbouring with image obtained by the above method
Picture, the minimum two then weighted again using iteration calculate Optic flow information to collected neighbouring two frame RGB images at IRLS method,
And calculated light stream figure is carried out to the mark of living body and non-living body: 0 represents non-living body light stream figure, and 1 represents living body light stream figure;Most
N and D are obtained eventuallyoriginOne-to-one panorama light stream figure PofAnd its mark Fof, the set of N number of sample composition is denoted asBy the raw image data collection D of markoriginWith
Corresponding Optic flow information data set DofMerging becomes D1;
b. Acquisition and labelling of the temporal variation feature of the face detection box: each video in the video-sequence data set is passed through the SeetaFace face detector to obtain the face location in every frame of every sequence, given as the top-left corner coordinate (x_min, y_min) and the bottom-right corner coordinate (x_max, y_max). The area of the face detection box is then computed from these coordinates as s = (x_max - x_min) * (y_max - y_min). The face-detection-box areas of each video are sampled in chronological order and combined into a vector S = {s_1, s_2, s_3, ..., s_n} of length n, which serves as the temporal variation feature of the face detection box for the corresponding video sequence; the vector is labelled as living or non-living: 0 denotes a non-living feature, 1 denotes a living feature. Finally, N face-detection-box variation features S and their corresponding labels F are obtained; the set of the N samples is denoted D_2 = {(S_1, F_1), (S_2, F_2), ..., (S_N, F_N)}.
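The area feature of step b can be sketched directly. The boxes below are hypothetical detector outputs (the SeetaFace detector itself is not invoked), and the sample length n = 32 is an assumed choice, since the claim does not fix n.

```python
import numpy as np

def box_area_feature(boxes, n=32):
    """Turn per-frame face boxes [(xmin, ymin, xmax, ymax), ...] into the
    fixed-length temporal feature S of the claim: per-frame box areas,
    sampled uniformly in chronological order down to length n."""
    areas = [(xmax - xmin) * (ymax - ymin) for xmin, ymin, xmax, ymax in boxes]
    idx = np.linspace(0, len(areas) - 1, n).round().astype(int)
    return np.asarray(areas, dtype=float)[idx]

# hypothetical boxes: a face approaching the gate camera grows steadily
boxes = [(50, 50, 50 + 2 * t, 50 + 2 * t) for t in range(10, 74)]
S = box_area_feature(boxes, n=32)
```

A genuine approach produces a steadily growing area sequence, while a replayed screen held at fixed distance yields a flat one; this contrast is what model B learns.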
3. The living body detection method according to claim 2, characterised in that step 2 comprises:
a. The overall living body detection model is denoted M and consists of two parts: model A for detecting panoramic optical flow and texture information, and model B for detecting the variation pattern of the face detection box. The input of model M is the sequence of consecutive video frames P_1, P_2, P_3, ..., P_n in the entrance video sequence I of the face captured by the gate camera;
b. For an input picture P, face detection is performed with the SeetaFace face detector. If a face is detected, proceed to step c and begin living body detection; if no face is detected, remain at this step and continue face detection on the input pictures;
c. Model A and model B perform living body detection simultaneously, using the panoramic optical flow and texture information in the video images and the temporal variation of the face detection box, respectively. The detection of model A proceeds to step d; the detection of model B proceeds to step e;
d. With a period of 1 s, pairs of RGB images are continuously acquired, the two images of each pair being m frames apart (m = 2 to 6). From each pair, the panoramic optical-flow information is computed with the iteratively reweighted least squares (IRLS) method described in step 1 and serves as the first input of model A; the first original image of the pair serves as the second input. Model A judges from these two inputs: if the judgment is living, remain at this step and judge the next pair, stopping when the area of the face detection box in the picture reaches the threshold, then proceed to step g; if the judgment is non-living, proceed directly to step h;
e. First judge whether the size of the face detection box exceeds the threshold. If it does, judge the video directly as non-living and proceed to step h; otherwise proceed to step f;
f. For each RGB frame of the input video, the area s of the face detection box found by the SeetaFace face detector is continuously acquired and stored in chronological order, until one of the following occurs: 1) the area of the face detection box reaches the threshold; 2) the size of the face detection box fluctuates only within a very small range for k consecutive frames (k = 10 to 15); 3) the detector finds no face for k consecutive frames (k = 10 to 15). The stored area values are then sampled in chronological order into a vector S = {s_1, s_2, s_3, ..., s_n} of length n, which serves as the temporal variation feature of the face detection box. This feature is input to model B for judgment: if the judgment is living, proceed to step g; if non-living, proceed to step h;
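The three stop criteria of step f might be collected into one predicate. The flatness tolerance `flat_tol` and the `missed` counter of undetected frames are assumed parameters, since the claim fixes only the range of k.

```python
def should_stop(areas, area_threshold, k=12, flat_tol=0.02, missed=0):
    """Stop criteria of step f. areas: chronological face-box areas so far;
    missed: consecutive frames with no face detected. k is in the claimed
    range 10..15; flat_tol is an assumed relative band width."""
    if missed >= k:
        return True                      # criterion 3: face lost for k frames
    if areas and areas[-1] >= area_threshold:
        return True                      # criterion 1: box area reached threshold
    if len(areas) >= k:
        recent = areas[-k:]
        mean = sum(recent) / k
        if mean > 0 and (max(recent) - min(recent)) / mean <= flat_tol:
            return True                  # criterion 2: box size flat over k frames
    return False
```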
g. If model A and model B both judge living, model M judges the video to be a normal entrance video, i.e. living, and the face recognition system proceeds to the face recognition module;
h. If either or both of model A and model B judge non-living, model M judges the video to be an attack video, i.e. non-living, and the face recognition system stops identifying the user.
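The fusion rule of steps g and h is a plain logical AND over the two model verdicts; a minimal sketch:

```python
def model_m(verdict_a, verdict_b):
    """Fusion rule of steps g/h: model M declares living (1) only when both
    model A and model B judge living; any non-living verdict (0) from
    either model rejects the video as an attack."""
    return 1 if (verdict_a == 1 and verdict_b == 1) else 0
```

This conjunctive rule trades false rejections for security: a replay attack need only be caught by one of the two cues (flow/texture or box-size dynamics) to be blocked.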
4. The living body detection method according to claim 3, characterised in that step 3 comprises:
a. The model for detecting panoramic optical flow and texture information is A, and the model for detecting the variation pattern of the face detection box is B; the two models are trained separately;
b. Training of model A: the labelled original images and the corresponding optical-flow data set D_1 obtained in step 1 are divided into a training set T_1 and a validation set V_1. T_1 is input to model A, which is trained by mini-batch stochastic gradient descent, while V_1 is used to validate the training result: training stops once model A reaches a good living body detection accuracy on V_1 and that accuracy no longer improves as training continues;
c. Training of model B: the labelled temporal variation feature data set of the face detection box obtained in step 1 is input to model B for training;
d. Training is complete; models A and B are obtained and combined into the complete living body detection model M.
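Step b's training loop (mini-batch SGD, stopping when validation accuracy plateaus) can be sketched with a logistic regression standing in for the unspecified architecture of model A; the learning rate, batch size, and patience are assumed hyper-parameters.

```python
import numpy as np

def train_with_early_stopping(X_tr, y_tr, X_val, y_val,
                              lr=0.1, batch=16, patience=5,
                              max_epochs=200, seed=0):
    """Mini-batch SGD with early stopping on the validation set, as in
    step b: stop once validation accuracy stops improving. A logistic
    regression stands in for the claim's unspecified model A."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X_tr.shape[1])
    b = 0.0
    best_acc, best, wait = -1.0, (w, b), 0
    for _ in range(max_epochs):
        order = rng.permutation(len(X_tr))
        for i in range(0, len(X_tr), batch):
            j = order[i:i + batch]
            p = 1.0 / (1.0 + np.exp(-(X_tr[j] @ w + b)))   # sigmoid outputs
            g = p - y_tr[j]                                # logistic-loss gradient
            w -= lr * X_tr[j].T @ g / len(j)
            b -= lr * g.mean()
        acc = np.mean(((X_val @ w + b) > 0).astype(int) == y_val)
        if acc > best_acc:
            best_acc, best, wait = acc, (w.copy(), b), 0
        else:
            wait += 1
            if wait >= patience:        # accuracy stopped improving: stop
                break
    return best, best_acc
```

On linearly separable toy data the loop terminates after the patience window with near-perfect validation accuracy.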
5. The living body detection method according to claim 4, characterised in that step 4 comprises:
a. The camera captures the entrance video sequence I of the user's face, and the consecutive video frames P_1, P_2, P_3, ..., P_n in I are obtained;
b. The consecutive video frames P_1, P_2, P_3, ..., P_n are input to the living body detection model M obtained in step 3, and the judgment result of model M is returned.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811483851.0A CN109598242B (en) | 2018-12-06 | 2018-12-06 | Living body detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109598242A true CN109598242A (en) | 2019-04-09 |
CN109598242B CN109598242B (en) | 2023-04-18 |
Family
ID=65961997
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811483851.0A Active CN109598242B (en) | 2018-12-06 | 2018-12-06 | Living body detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109598242B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110472519A (en) * | 2019-07-24 | 2019-11-19 | 杭州晟元数据安全技术股份有限公司 | A kind of human face in-vivo detection method based on multi-model |
CN110991307A (en) * | 2019-11-27 | 2020-04-10 | 北京锐安科技有限公司 | Face recognition method, device, equipment and storage medium |
CN110991432A (en) * | 2020-03-03 | 2020-04-10 | 支付宝(杭州)信息技术有限公司 | Living body detection method, living body detection device, electronic equipment and living body detection system |
CN111178277A (en) * | 2019-12-31 | 2020-05-19 | 支付宝实验室(新加坡)有限公司 | Video stream identification method and device |
CN111368666A (en) * | 2020-02-25 | 2020-07-03 | 上海蠡图信息科技有限公司 | Living body detection method based on novel pooling and attention mechanism double-current network |
CN111738176A (en) * | 2020-06-24 | 2020-10-02 | 支付宝实验室(新加坡)有限公司 | Living body detection model training method, living body detection device, living body detection equipment and living body detection medium |
CN111881815A (en) * | 2020-07-23 | 2020-11-03 | 高新兴科技集团股份有限公司 | Human face in-vivo detection method based on multi-model feature migration |
CN111967289A (en) * | 2019-05-20 | 2020-11-20 | 高新兴科技集团股份有限公司 | Uncooperative human face in-vivo detection method and computer storage medium |
CN113435353A (en) * | 2021-06-30 | 2021-09-24 | 平安科技(深圳)有限公司 | Multi-mode-based in-vivo detection method and device, electronic equipment and storage medium |
CN114639129A (en) * | 2020-11-30 | 2022-06-17 | 北京君正集成电路股份有限公司 | Paper medium living body detection method for access control system |
CN114639129B (en) * | 2020-11-30 | 2024-05-03 | 北京君正集成电路股份有限公司 | Paper medium living body detection method for access control system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013131407A1 (en) * | 2012-03-08 | 2013-09-12 | 无锡中科奥森科技有限公司 | Double verification face anti-counterfeiting method and device |
CN106228129A (en) * | 2016-07-18 | 2016-12-14 | 中山大学 | A kind of human face in-vivo detection method based on MATV feature |
CN108038456A (en) * | 2017-12-19 | 2018-05-15 | 中科视拓(北京)科技有限公司 | A kind of anti-fraud method in face identification system |
CN108596041A (en) * | 2018-03-28 | 2018-09-28 | 中科博宏(北京)科技有限公司 | A kind of human face in-vivo detection method based on video |
Non-Patent Citations (1)
Title |
---|
Hu Fei et al. (胡斐等): "Multi-cue fusion face liveness detection based on a fine-tuning strategy", Computer Engineering (《计算机工程》) *
Also Published As
Publication number | Publication date |
---|---|
CN109598242B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109598242A (en) | A kind of novel biopsy method | |
EP4002198A1 (en) | Posture acquisition method and device, and key point coordinate positioning model training method and device | |
CN105426827B (en) | Living body verification method, device and system | |
CN104361327B (en) | A kind of pedestrian detection method and system | |
CN105243386B (en) | Face living body judgment method and system | |
US9183431B2 (en) | Apparatus and method for providing activity recognition based application service | |
CN107949298A (en) | Determine the body part that user is currently nursed | |
CN102622584B (en) | Method for detecting mask faces in video monitor | |
CN104794737B (en) | A kind of depth information Auxiliary Particle Filter tracking | |
CN105243356B (en) | A kind of method and device that establishing pedestrian detection model and pedestrian detection method | |
CN109758756B (en) | Gymnastics video analysis method and system based on 3D camera | |
CN105740780A (en) | Method and device for human face in-vivo detection | |
CN111402294A (en) | Target tracking method, target tracking device, computer-readable storage medium and computer equipment | |
CN109002761A (en) | A kind of pedestrian's weight identification monitoring system based on depth convolutional neural networks | |
CN110264493A (en) | A kind of multiple target object tracking method and device under motion state | |
CN110188835A (en) | Data based on production confrontation network model enhance pedestrian's recognition methods again | |
CN106709938B (en) | Based on the multi-target tracking method for improving TLD | |
CN114998934B (en) | Clothes-changing pedestrian re-identification and retrieval method based on multi-mode intelligent perception and fusion | |
CN110781762B (en) | Examination cheating detection method based on posture | |
CN109784130A (en) | Pedestrian recognition methods and its device and equipment again | |
CN109711267A (en) | A kind of pedestrian identifies again, pedestrian movement's orbit generation method and device | |
CN106204633A (en) | A kind of student trace method and apparatus based on computer vision | |
CN116682140A (en) | Three-dimensional human body posture estimation algorithm based on attention mechanism multi-mode fusion | |
CN110580708B (en) | Rapid movement detection method and device and electronic equipment | |
Yao et al. | Micro-expression recognition by feature points tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||