CN108038456A - A kind of anti-fraud method in face identification system - Google Patents
- Publication number
- CN108038456A CN108038456A CN201711375804.XA CN201711375804A CN108038456A CN 108038456 A CN108038456 A CN 108038456A CN 201711375804 A CN201711375804 A CN 201711375804A CN 108038456 A CN108038456 A CN 108038456A
- Authority
- CN
- China
- Prior art keywords
- face
- feature
- training
- model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 27
- 238000012549 training Methods 0.000 claims abstract description 48
- 238000001514 detection method Methods 0.000 claims abstract description 39
- 238000001727 in vivo Methods 0.000 claims abstract description 27
- 238000000605 extraction Methods 0.000 claims abstract description 15
- 238000012706 support-vector machine Methods 0.000 claims abstract description 15
- 230000003595 spectral effect Effects 0.000 claims abstract description 10
- 238000013461 design Methods 0.000 claims abstract description 4
- 230000008859 change Effects 0.000 claims description 14
- 238000013527 convolutional neural network Methods 0.000 claims description 11
- 230000000694 effects Effects 0.000 claims description 11
- 239000010410 layer Substances 0.000 claims description 9
- 238000013528 artificial neural network Methods 0.000 claims description 8
- 230000001815 facial effect Effects 0.000 claims description 7
- 239000003086 colorant Substances 0.000 claims description 6
- 238000010606 normalization Methods 0.000 claims description 6
- 230000036544 posture Effects 0.000 claims description 6
- 230000004927 fusion Effects 0.000 claims description 5
- 230000007935 neutral effect Effects 0.000 claims description 5
- 210000002569 neuron Anatomy 0.000 claims description 4
- 230000008569 process Effects 0.000 claims description 4
- 238000012545 processing Methods 0.000 claims description 4
- 230000009466 transformation Effects 0.000 claims description 4
- 238000013459 approach Methods 0.000 claims description 3
- 238000004364 calculation method Methods 0.000 claims description 3
- 230000008034 disappearance Effects 0.000 claims description 3
- 210000005069 ears Anatomy 0.000 claims description 3
- 238000005286 illumination Methods 0.000 claims description 3
- 239000011229 interlayer Substances 0.000 claims description 3
- 239000000203 mixture Substances 0.000 claims description 3
- 238000005457 optimization Methods 0.000 claims description 3
- 238000007639 printing Methods 0.000 claims description 3
- 238000011084 recovery Methods 0.000 claims description 3
- 238000002310 reflectometry Methods 0.000 claims description 3
- 230000005855 radiation Effects 0.000 claims 1
- 230000008901 benefit Effects 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 3
- 210000000887 face Anatomy 0.000 description 3
- 238000001574 biopsy Methods 0.000 description 2
- 210000001061 forehead Anatomy 0.000 description 2
- 238000007689 inspection Methods 0.000 description 2
- 210000001747 pupil Anatomy 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 238000007792 addition Methods 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000017531 blood circulation Effects 0.000 description 1
- 238000012512 characterization method Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 210000005036 nerve Anatomy 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Abstract
The invention discloses an anti-spoofing method for a face recognition system. Its concrete steps are divided into: image acquisition and normalization, a feature extraction stage, a model design stage, a training stage, and a prediction stage. The feature extraction stage extracts seven kinds of features: a color-diversity feature, a blur-degree feature, an image-moment feature, a sharpness feature, a spectral feature, a specular-reflection feature, and a convolutional feature. By using a residual-mlp network together with facial micro-texture features and a support vector machine, the invention greatly improves the accuracy and speed of face liveness detection and achieves a better detection effect. In addition, the invention requires no hardware beyond a camera and no cooperation from the person being tested, yet reaches faster-than-real-time liveness detection speed; it thus solves the prior-art problems of long detection time, extra hardware requirements, and weak detection capability.
Description
Technical field
The present invention relates to an anti-spoofing method, and more particularly to an anti-spoofing method for a face recognition system, belonging to the technical field of machine vision.
Background art
With its speed, effectiveness, and user friendliness, face recognition has increasingly become an important means of authentication. However, many current face recognition systems cannot distinguish real faces from fakes. To prevent visual spoofing with fake faces, introducing liveness detection into a face recognition system improves its practicality and security. The main existing methods of face liveness detection are:
(1) Active liveness detection based on interactive video streams: the system first performs face detection and facial key-point localization; if a face is present in the video, several actions are generated at random, and if the tester completes the specified actions within the specified time, the system judges the tester to be a live body, otherwise a non-live body. This detection method requires user cooperation and takes a long time.
(2) Face liveness detection based on the bright-pupil effect: living and non-living faces are distinguished by detecting whether the bright-pupil effect is present in the eye region. This detection method requires an additional light source and therefore suffers from high cost.
(3) Face spoof detection based on image-distortion analysis: the system first performs face detection and facial key-point localization; if a face is present in the image, four kinds of features are extracted (a specular-reflection feature, a blur-degree feature, a moment feature, and a color-diversity feature) and a support vector machine is used for training and prediction. The features extracted by this method are relatively simple, with weak discriminative power and weak generalization ability, so it cannot be applied well in real scenes.
Summary of the invention
To overcome the shortcomings of the above techniques, the present invention provides an anti-spoofing method for a face recognition system.
To solve the above technical problems, the technical solution adopted by the present invention is an anti-spoofing method for a face recognition system, whose concrete steps are as follows:
Step S1, image acquisition and normalization:
An RGB image is captured by a camera device and input to a cascaded-CNN face detection module; the face detection module performs face detection on the RGB image, and if a face is detected, the face-region picture is fed to a deep neural network for facial key-point localization; by computing the affine transformation from the detected key points to standard key points, the face picture under any pose is transformed to the face picture under the standard pose;
Step S2, feature extraction stage:
The following seven kinds of features are extracted:
A. Color-diversity feature: the following two statistics are extracted from the color distribution: the percentage of total pixels covered by the 60 most frequent colors; and the total number of distinct colors appearing in the face picture;
B. Blur-degree feature: the degree of color variation between adjacent pixels of the input picture is computed first; the picture is then passed through a low-pass filter and the variation between adjacent pixels is computed again; the summed variation of the original picture is contrasted with that of the blurred picture, and this comparison result is taken as the blur-degree feature;
C. Image-moment feature: the first, second, and third central moments of each color channel of the RGB picture are kept; the first moment is the mean, i.e. the average color of the picture, the second moment is the variance of each color channel, and the third central moment is the skewness of each color channel;
D. Sharpness feature: the degree of sharpness of the face is computed with the Tenengrad gradient method;
E. Spectral feature: face key points are detected while tracking the face in the green channel of the RGB video, and PPG signals are detected in five regions: the forehead, left cheek, right cheek, area beside the left ear, and area beside the right ear; the spectral feature is then computed: after the PPG signals of the five regions are obtained, their means are subtracted and they are passed through a 0.5 Hz to 5 Hz band-pass filter, yielding five new signals; these five new signals are taken as the spectral feature;
F. Specular-reflection feature: based on the dichromatic reflection model, the reflected intensity I at a location x on the object can be decomposed into a diffuse component I_d and a specular component I_s:

I(x) = I_d + I_s = w_d(x)S(x)E(x) + w_s(x)E(x)    (Formula 1)

where E(x) is the incident light intensity, w_d(x) and w_s(x) are the weight coefficients of diffuse and specular reflection respectively, and S(x) is the local diffuse reflectance;
The image produced by a 2D attack face recovered from a real face image is modeled as:

I'(x) = I'_d + I'_s = F(I(x)) + w'_s(x)E'(x)    (Formula 2)

Since the diffuse component is determined by a distortion of the original image, I'_d is replaced with F(I(x));
For a printed-photo attack face, I(x) is first converted into the intensity of printing ink on paper and then reaches the final image intensity through the diffuse reflection of the paper surface; for a video attack, I(x) is converted into the radiated intensity of the LCD screen's pixels; likewise, the specular reflection differs from that of a real face because the surface of the attack medium differs; for a single image, the specular component is first separated out, and then the ratio of specular pixels, together with the mean intensity and variance of the specular pixels, is computed as the specular-reflection feature;
G. Convolutional feature: the data for training the convolutional network are prepared; the training data are fed to the constructed convolutional network and training begins; the image read from the camera is fed to the trained convolutional neural network model; and the feature vector output by the convolutional neural network is extracted;
Step S3, model design stage:
The residual-mlp model is constructed as follows: the model as a whole is denoted M and consists of two parts, a deep neural network A and residual structures C; a residual structure C is inserted between every two layers of neural network A, converting the original network function H(x) into F(x) + x; adding the residual structures introduces no extra parameters or computation to the network, but optimizing F(x) is much simpler than optimizing H(x), which substantially increases the model's training speed, improves the training effect, and, as the model deepens, effectively solves the vanishing-gradient problem;
Step S4, training stage:
The training stage comprises the following steps:
S41, the annotated face liveness detection image collection D is split into a training set T and a validation set V;
S42, the residual-mlp network model is denoted M and has layers M1 ... Mn; the model extracts the facial micro-texture feature combination I from an input face image P and outputs a recognition result O after passing through each layer; every layer consists of many neurons, each with default weights; according to the difference between the current network output and the label of the input features, model training is carried out with mini-batch stochastic gradient descent, continually adjusting these weights;
S43, the training effect is verified with the validation set V: when the model reaches good liveness detection precision on V and that precision no longer improves as training proceeds, training stops; on completion, model M' is obtained;
S44, for face liveness detection, the support vector machine seeks the optimal linearly separating hyperplane f(x) = x·w^T + b = 0; the constraints on f(x) are found from the points of the two classes closest to the hyperplane, and the problem is then solved with the method of Lagrange multipliers and the KKT conditions; on completion, model N' is obtained;
S45, the trained residual-mlp network M' and support vector machine N' each recognize the micro-texture features of the pictures in the training set fairly well; according to the recognition results and confidences of the two classifiers on the training set, a good fusion weight is selected and the residual-mlp network is fused with the support vector machine; on completion, model B is obtained;
Step S5, prediction stage:
An RGB image P is first read from the camera and fed to the face detector; if a face is present in the image, the detected face is normalized to obtain the normalized face image C; the seven kinds of micro-texture features I of C are extracted; the micro-texture features I are fed to the fused classifier B obtained in step S45, which predicts the face liveness detection result.
By using the residual-mlp network together with facial micro-texture features and a support vector machine, the present invention greatly improves the accuracy and speed of face liveness detection; in particular, the added residual structures let the model reach a better detection effect. In addition, the liveness detection method of the present invention requires no hardware beyond a camera and no cooperation from the person being tested, yet reaches faster-than-real-time liveness detection speed; compared with conventional techniques it has the advantages of fast detection, high anti-spoofing recognition precision, and reduced testing cost.
Brief description of the drawings
Fig. 1 is a schematic diagram of the overall flow of the present invention.
Fig. 2 illustrates the construction of the residual-mlp model.
Detailed description of the embodiments
The present invention is further described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, an anti-spoofing method for a face recognition system comprises the following concrete steps:
Step S1, image acquisition and normalization:
An RGB image is captured by a camera device and input to a cascaded-CNN face detection module; the face detection module performs face detection on the RGB image, and if a face is detected, the face-region picture is fed to a deep neural network for facial key-point localization; by computing the affine transformation from the detected key points to standard key points, the face picture under any pose is transformed to the face picture under the standard pose;
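The key-point alignment step above can be sketched as a least-squares fit of the affine transform from detected to standard landmarks; the function names and the plain least-squares solver are illustrative choices, since the patent does not specify how the transform is computed:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src landmarks to dst landmarks.

    src, dst: (N, 2) arrays of (x, y) facial key points.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve X @ A = dst, A is (3, 2)
    return A.T                                   # (2, 3)

def warp_points(A, pts):
    """Apply the 2x3 affine matrix A to (N, 2) points."""
    X = np.hstack([pts, np.ones((len(pts), 1))])
    return X @ A.T
```

In practice the same matrix would be handed to an image-warping routine to resample the face region into the standard pose.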
Step S2, feature extraction stage:
The following seven kinds of features are extracted:
A. Color-diversity feature: since an attack medium generally loses color diversity when displaying a face, the following two statistics are extracted from the face's color distribution: (1) the percentage of total pixels covered by the 60 most frequent colors; (2) the total number of distinct colors appearing in the face picture;
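A minimal sketch of the two color-diversity statistics, assuming 8-bit RGB input; packing each RGB triple into a single integer so that `np.unique` can count colors is an implementation convenience, not something the patent prescribes:

```python
import numpy as np

def color_diversity(img):
    """Two color-diversity statistics from an RGB image (H, W, 3), uint8.

    Returns (top60_fraction, distinct_colors): the fraction of pixels covered
    by the 60 most frequent colors, and the number of distinct colors.
    """
    flat = img.reshape(-1, 3).astype(np.int64)
    # Pack each (r, g, b) triple into one integer so np.unique counts colors.
    codes = flat[:, 0] * 65536 + flat[:, 1] * 256 + flat[:, 2]
    _, counts = np.unique(codes, return_counts=True)
    top60 = np.sort(counts)[::-1][:60].sum()
    return top60 / codes.size, counts.size
```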
B. Blur-degree feature: the attack medium is usually close to the camera, so a spoof face tends to appear defocused; the present invention therefore uses the degree of blur as a liveness detection cue. The algorithm for extracting this feature is as follows: the degree of color variation between adjacent pixels of the input picture is computed first; the picture is then passed through a low-pass filter and the variation between adjacent pixels is computed again; the summed variation of the original picture is contrasted with that of the blurred picture, and this comparison result is taken as the blur-degree feature;
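The blur-degree computation can be sketched as follows; the 3x3 box filter and the normalized-difference form of the comparison are assumptions, since the patent only calls for "a low-pass filter" and "a comparison":

```python
import numpy as np

def blur_feature(gray):
    """Blur-degree cue: compare neighbor variation before and after low-pass.

    gray: 2-D float array. A sharp image loses much more neighbor variation
    after blurring than an already-blurred (spoofed) one, so the feature is
    high for sharp input and near zero for blurred input.
    """
    def neighbor_variation(im):
        return np.abs(np.diff(im, axis=0)).sum() + np.abs(np.diff(im, axis=1)).sum()

    # 3x3 box low-pass filter via shifted sums (no SciPy dependency).
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    v0, v1 = neighbor_variation(gray), neighbor_variation(blurred)
    return (v0 - v1) / max(v0, 1e-8)
```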
C. Image-moment feature: the first, second, and third central moments of each color channel of the RGB (three-channel) picture are kept; the first moment is the mean, i.e. the average color of the picture, the second moment is the variance of each color channel, and the third central moment is the skewness of each color channel;
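The nine moment values (mean, variance, skewness per channel) can be computed directly in NumPy; this is a straightforward sketch of the definitions above:

```python
import numpy as np

def color_moments(img):
    """Per-channel mean, variance, and skewness of an RGB image (H, W, 3)."""
    feats = []
    for c in range(3):
        ch = img[..., c].astype(np.float64).ravel()
        mean = ch.mean()
        var = ch.var()
        std = np.sqrt(var) + 1e-8          # guard against division by zero
        skew = ((ch - mean) ** 3).mean() / std ** 3
        feats += [mean, var, skew]
    return np.array(feats)                 # 9-dimensional moment feature
```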
D. Sharpness feature: the degree of sharpness of the face is computed with the Tenengrad gradient method. This method uses Sobel operators to compute the horizontal and vertical gradients; for the same scene, a higher gradient value means a sharper image.
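A sketch of the Tenengrad measure with hand-rolled Sobel convolutions; the default threshold of zero is an assumption, as Tenengrad variants often threshold the squared gradient magnitude:

```python
import numpy as np

def tenengrad(gray, thresh=0.0):
    """Tenengrad sharpness: mean squared Sobel gradient magnitude.

    gray: 2-D float array. Higher values indicate a sharper image.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    def conv3(im, k):
        # 3x3 correlation via shifted, weighted sums with edge padding.
        h, w = im.shape
        p = np.pad(im, 1, mode="edge")
        out = np.zeros((h, w))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + h, j:j + w]
        return out

    g2 = conv3(gray, kx) ** 2 + conv3(gray, ky) ** 2
    return g2[g2 > thresh].mean() if (g2 > thresh).any() else 0.0
```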
E. Spectral feature: blood flow causes slight changes in the color of facial skin; the signal produced by these slight changes is known as the PPG (photoplethysmogram) signal, and PPG signals can be detected only in videos of real faces. When light reaches the skin, part of it is reflected; if the face is covered, the light is reflected or absorbed by the covering object, so not enough signal is detected. The steps for extracting this feature are as follows: (1) face key points are detected while tracking the face in the green channel of the RGB video, and PPG signals are detected in five regions: the forehead, left cheek, right cheek, area beside the left ear, and area beside the right ear; (2) the spectral feature is then computed: after the PPG signals of the five regions are obtained, their means are subtracted and they are passed through a 0.5 Hz to 5 Hz band-pass filter, yielding five new signals; (3) these five new signals are taken as the spectral feature;
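The mean-subtraction and 0.5 Hz to 5 Hz band-pass steps can be sketched with an ideal FFT-mask filter; the patent names no particular filter design, so this choice is purely illustrative:

```python
import numpy as np

def bandpass(signal, fs, lo=0.5, hi=5.0):
    """Zero-mean the signal, then keep only the lo-hi Hz band via FFT masking.

    signal: 1-D PPG samples; fs: sampling rate in Hz (e.g. the video frame
    rate). Returns the filtered signal, same length as the input.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                               # the "subtract mean" step
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0        # ideal band-pass mask
    return np.fft.irfft(spec, n=x.size)
```

Applied to each of the five regional PPG traces, this yields the five filtered signals used as the spectral feature.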
F. Specular-reflection feature: based on the dichromatic reflection model, the reflected intensity I at a location x on the object can be decomposed into a diffuse component I_d and a specular component I_s:

I(x) = I_d + I_s = w_d(x)S(x)E(x) + w_s(x)E(x)    (Formula 1)

where E(x) is the incident light intensity, w_d(x) and w_s(x) are the weight coefficients of diffuse and specular reflection respectively, and S(x) is the local diffuse reflectance;
The image produced by a 2D attack face recovered from a real face image is modeled as:

I'(x) = I'_d + I'_s = F(I(x)) + w'_s(x)E'(x)    (Formula 2)

Since the diffuse component is determined by a distortion of the original image, I'_d is replaced with F(I(x));
For a printed-photo attack face, I(x) is first converted into the intensity of printing ink on paper and then reaches the final image intensity through the diffuse reflection of the paper surface; for a video attack, I(x) is converted into the radiated intensity of the LCD screen's pixels; likewise, the specular reflection differs from that of a real face because the surface of the attack medium differs; for a single image, the specular component is first separated out, and then the ratio of specular pixels, together with the mean intensity and variance of the specular pixels, is computed as the specular-reflection feature;
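Given an already-separated specular component map, the three statistics named above (specular-pixel ratio, mean intensity, variance) can be computed as follows; the dichromatic separation itself is outside this sketch, and the 0.1 threshold is an assumed default:

```python
import numpy as np

def specular_stats(specular, thresh=0.1):
    """Summary statistics of a separated specular-component map (2-D, [0, 1]).

    Returns (ratio, mean, variance): the fraction of pixels whose specular
    value exceeds thresh, and the mean intensity and variance of those pixels.
    """
    mask = specular > thresh
    ratio = mask.mean()
    if mask.any():
        vals = specular[mask]
        return ratio, vals.mean(), vals.var()
    return ratio, 0.0, 0.0
```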
G. Convolutional feature: a convolutional neural network is a kind of feed-forward neural network that can learn an efficient representation of the original image, which lets a CNN recognize regularities in a picture directly from raw pixels; CNNs perform outstandingly in image processing. The main steps for extracting the convolutional feature are as follows: (1) prepare the data for training the convolutional network; (2) feed the training data to the constructed convolutional network and begin training; (3) feed the image read from the camera to the trained convolutional neural network model; (4) extract the feature vector output by the convolutional neural network;
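As a toy stand-in for step (4), the sketch below runs a single convolution layer with ReLU and global average pooling to produce a feature vector; a real system would use a trained deep network, so this only illustrates the shape of the "extract the output feature vector" step:

```python
import numpy as np

def conv_feature(img, kernels):
    """One valid-mode convolution layer + ReLU + global average pooling.

    img: 2-D float array; kernels: list of 2-D filters. Returns one feature
    value per kernel (the pooled, rectified response map).
    """
    h, w = img.shape
    feats = []
    for k in kernels:
        kh, kw = k.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
        feats.append(np.maximum(out, 0.0).mean())   # ReLU + global avg pool
    return np.array(feats)
```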
Step S3, model design stage:
The residual-mlp model is constructed as follows: the model as a whole is denoted M and consists of two parts, a deep neural network A and residual structures C, as shown in Fig. 2; a residual structure C is inserted between every two layers of neural network A, converting the original network function H(x) into F(x) + x; adding the residual structures introduces no extra parameters or computation to the network, but optimizing F(x) is much simpler than optimizing H(x), which substantially increases the model's training speed, improves the training effect, and, as the model deepens, effectively solves the vanishing-gradient problem;
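The F(x) + x structure can be sketched as one residual block between two MLP layers; the layer sizes and the ReLU nonlinearity are illustrative choices, since the patent does not fix them:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ResidualMLPBlock:
    """One residual block between two MLP layers: output = F(x) + x.

    Only the two weight matrices of F are learned; the skip connection itself
    adds no trainable parameters, matching the text's claim that the residual
    structure introduces no extra parameters or computation of its own.
    """
    def __init__(self, dim, rng):
        self.W1 = rng.normal(0.0, 0.1, (dim, dim))
        self.W2 = rng.normal(0.0, 0.1, (dim, dim))

    def forward(self, x):
        return relu(x @ self.W1) @ self.W2 + x   # F(x) + x
```

With zero weights the block reduces to the identity, which is the property that keeps gradients flowing in deep models.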
Step S4, training stage:
The training stage comprises the following steps:
S41, the annotated face liveness detection image collection D is split into a training set T and a validation set V;
S42, the residual-mlp network model is denoted M and has layers M1 ... Mn; the model extracts the facial micro-texture feature combination I from an input face image P and outputs a recognition result O after passing through each layer; every layer consists of many neurons, each with default weights; according to the difference between the current network output and the label of the input features, model training is carried out with mini-batch stochastic gradient descent, continually adjusting these weights;
S43, the training effect is verified with the validation set V: when the model reaches good liveness detection precision on V and that precision no longer improves as training proceeds, training stops; on completion, model M' is obtained;
S44, for face liveness detection, the support vector machine seeks the optimal linearly separating hyperplane f(x) = x·w^T + b = 0; the constraints on f(x) are found from the points of the two classes closest to the hyperplane, and the problem is then solved with the method of Lagrange multipliers and the KKT conditions; on completion, model N' is obtained;
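The patent solves the SVM in its dual form via Lagrange multipliers and the KKT conditions; the primal sub-gradient sketch below finds the same kind of hyperplane f(x) = x·w^T + b while keeping the example short, and all hyperparameters are illustrative:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Minimal linear SVM trained by sub-gradient descent on the hinge loss.

    X: (N, D) features; y: (N,) labels in {-1, +1}.
    Returns (w, b) of the separating hyperplane f(x) = x.w + b.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1.0:                     # inside margin: hinge active
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                # outside margin: only decay
                w -= lr * lam * w
    return w, b
```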
S45, the trained residual-mlp network M' and support vector machine N' each recognize the micro-texture features of the pictures in the training set fairly well; according to the recognition results and confidences of the two classifiers on the training set, a good fusion weight is selected and the residual-mlp network is fused with the support vector machine; on completion, model B is obtained; the fused result is better than the recognition effect of either single classifier;
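The score-level fusion of step S45 can be sketched as a weighted average; the weight 0.6 and the 0.5 decision threshold are purely illustrative, since the patent selects the fusion weight empirically on the training set:

```python
def fuse_scores(mlp_score, svm_score, w=0.6):
    """Weighted fusion of the two classifiers' liveness scores.

    Scores are assumed in [0, 1] (probability of "live"); w is the fusion
    weight chosen on the training set. Returns (fused_score, is_live).
    """
    fused = w * mlp_score + (1.0 - w) * svm_score
    return fused, fused >= 0.5
```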
Step S5, prediction stage:
An RGB image P is first read from the camera and fed to the face detector; if a face is present in the image, the detected face is normalized to obtain the normalized face image C; the seven kinds of micro-texture features I of C are extracted; the micro-texture features I are fed to the fused classifier B obtained in step S45, which predicts the face liveness detection result.
The present invention proposes a classifier that fuses residual-mlp with a traditional support vector machine, together with a method and system for face liveness detection using facial micro-texture features. Compared with the prior art, its key points and innovations are as follows:
First, the micro-texture features of the face: 1) the micro-texture features include the blur-degree feature, image-moment feature, color-diversity feature, image-sharpness feature, specular-reflection feature, spectral feature, and convolutional feature; 2) these features are computed from the normalized face picture. Using the above features greatly improves the precision and speed of face liveness detection; in particular, the method requires no hardware beyond a camera and no cooperation from the person being tested.
Second, the residual-mlp liveness detection framework: on the basis of a traditional neural network, the framework adds residual structures, which solve the problem that a traditional liveness detection model's error rate rises as the number of layers grows, so that the model reaches a better detection effect.
Third, the classifier fusing residual-mlp with a traditional support vector machine: the fused classifier judges the authenticity of a face, and the fusion of the two achieves a better liveness detection effect than either single classifier.
The above embodiments do not limit the present invention, nor is the present invention limited to the above examples; variations, modifications, additions, or substitutions made by those skilled in the art within the scope of the technical solution of the present invention also fall within its protection scope.
Claims (1)
1. An anti-spoofing method for a face recognition system, characterized in that the method comprises the following concrete steps:
Step S1, image acquisition and normalization:
An RGB image is captured by a camera device and input to a cascaded-CNN face detection module; the face detection module performs face detection on the RGB image, and if a face is detected, the face-region picture is fed to a deep neural network for facial key-point localization; by computing the affine transformation from the detected key points to standard key points, the face picture under any pose is transformed to the face picture under the standard pose;
Step S2, feature extraction stage:
The following seven kinds of features are extracted:
A. Color-diversity feature: the following two statistics are extracted from the color distribution: the percentage of total pixels covered by the 60 most frequent colors; and the total number of distinct colors appearing in the face picture;
B. Blur-degree feature: the degree of color variation between adjacent pixels of the input picture is computed first; the picture is then passed through a low-pass filter and the variation between adjacent pixels is computed again; the summed variation of the original picture is contrasted with that of the blurred picture, and this comparison result is taken as the blur-degree feature;
C. Image-moment feature: the first, second, and third central moments of each color channel of the RGB picture are kept; the first moment is the mean, i.e. the average color of the picture, the second moment is the variance of each color channel, and the third central moment is the skewness of each color channel;
D. Sharpness feature: the degree of sharpness of the face is computed with the Tenengrad gradient method;
E. Spectral feature: face key points are detected while tracking the face in the green channel of the RGB video, and PPG signals are detected in five regions: the forehead, left cheek, right cheek, area beside the left ear, and area beside the right ear; the spectral feature is then computed: after the PPG signals of the five regions are obtained, their means are subtracted and they are passed through a 0.5 Hz to 5 Hz band-pass filter, yielding five new signals; these five new signals are taken as the spectral feature;
F. Specular-reflection feature: based on the dichromatic reflection model, the reflected intensity I at a location x on the object can be decomposed into a diffuse component I_d and a specular component I_s:

I(x) = I_d + I_s = w_d(x)S(x)E(x) + w_s(x)E(x)    (Formula 1)

where E(x) is the incident light intensity, w_d(x) and w_s(x) are the weight coefficients of diffuse and specular reflection respectively, and S(x) is the local diffuse reflectance;
The image produced by a 2D attack face recovered from a real face image is modeled as:

I'(x) = I'_d + I'_s = F(I(x)) + w'_s(x)E'(x)    (Formula 2)

Since the diffuse component is determined by a distortion of the original image, I'_d is replaced with F(I(x));
For a printed-photo attack face, I(x) is first converted into the intensity of printing ink on paper and then reaches the final image intensity through the diffuse reflection of the paper surface; for a video attack, I(x) is converted into the radiated intensity of the LCD screen's pixels; likewise, the specular reflection differs from that of a real face because the surface of the attack medium differs; for a single image, the specular component is first separated out, and then the ratio of specular pixels, together with the mean intensity and variance of the specular pixels, is computed as the specular-reflection feature;
G, convolution feature: the data for training the convolutional network are prepared; the training data are fed into the constructed convolutional network and training begins; the image read in from the camera is fed into the trained convolutional neural network model; and the feature vector output by the convolutional neural network is extracted.
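The patent does not fix the network architecture, so the following is only a toy stand-in for "take the feature vector from a trained CNN": one convolutional layer with random (untrained) 3x3 filters, ReLU, and global average pooling.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_feature(image, kernels):
    """Valid-mode filtering with each 3x3 kernel, ReLU, then global
    average pooling -- one scalar per filter as the feature vector."""
    h, w = image.shape
    feats = []
    for k in kernels:
        out = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * image[i:h - 2 + i, j:w - 2 + j]
        feats.append(np.maximum(out, 0.0).mean())  # ReLU + global avg pool
    return np.array(feats)

kernels = rng.standard_normal((8, 3, 3))      # 8 illustrative filters
vec = conv_feature(rng.standard_normal((32, 32)), kernels)
```

In practice the filters come from training on the liveness dataset, and the vector is read from a late layer of the trained model.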
Step S3, model design phase:
The residual-mlp model is designed as follows: the model as a whole is denoted M and consists of two parts, a deep neural network A and residual structures C. A residual structure C is added between every two layers of the neural network A, converting the original network function H(x) into F(x) + x. Adding residual structures introduces no extra parameters or computation into the network, but F(x) is simpler to optimize than H(x), which materially increases the model's training speed, improves the training effect, and largely resolves the vanishing-gradient problem as the model's layers deepen;
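A minimal sketch of one such residual MLP block (layer widths and initialization are illustrative assumptions): the two inner layers learn F(x) and the block outputs F(x) + x, so an identity mapping is obtained simply by driving F toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_block(x, W1, b1, W2, b2):
    """One residual MLP block: output F(x) + x, where F is a two-layer
    perceptron; the skip connection lets gradients bypass the layers."""
    h = np.maximum(x @ W1 + b1, 0.0)   # hidden layer, ReLU
    f = h @ W2 + b2                    # F(x), same width as x
    return f + x                       # H(x) = F(x) + x

d = 16
W1, b1 = rng.standard_normal((d, d)) * 0.1, np.zeros(d)
# With zero second-layer weights the block is exactly the identity,
# illustrating why optimizing F(x) toward 0 is easier than fitting H(x) = x.
W2, b2 = np.zeros((d, d)), np.zeros(d)
x = rng.standard_normal(d)
out = residual_block(x, W1, b1, W2, b2)
```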
Step S4, the training stage:
Training stage comprises the following steps:
S41, the face liveness-detection image set D with annotation information is divided into a training set T and a validation set V;
S42, the residual-mlp network model is denoted M, with n layers M1 ... Mn in total; the model extracts the combined face micro-texture feature I from the input face image P and outputs a recognition result O after passing through each layer. Each layer of the network consists of many neurons, each with default weights; the model is then trained with mini-batch stochastic gradient descent according to the difference between the current network output and the label of the input feature, continually adjusting these weights;
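The update rule of step S42 can be sketched on a toy problem. This is not the patent's network; it is a single-layer logistic model trained by mini-batch stochastic gradient descent, with synthetic linearly separable data standing in for the micro-texture features:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic separable data: 200 samples, 5 features, binary labels.
X = rng.standard_normal((200, 5))
w_true = np.array([1.5, -2.0, 0.7, 0.0, 1.0])
y = (X @ w_true > 0).astype(float)

w = np.zeros(5)
lr, batch = 0.5, 32
for epoch in range(50):
    idx = rng.permutation(len(X))
    for s in range(0, len(X), batch):
        b = idx[s:s + batch]
        p = 1.0 / (1.0 + np.exp(-X[b] @ w))      # current network output
        grad = X[b].T @ (p - y[b]) / len(b)      # output-vs-label difference
        w -= lr * grad                           # weight adjustment

acc = float(((1 / (1 + np.exp(-X @ w)) > 0.5) == (y > 0.5)).mean())
```

The full model repeats this per-batch update across all layers via backpropagation; early stopping against the validation set V (step S43) decides when the weights are good enough.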
S43, the validation set V is used to verify the training effect: when the model achieves good liveness-detection accuracy on V and that accuracy can no longer be improved by further training, training stops; the completed training yields model M';
S44, for face liveness detection, the support vector machine finds an optimal linearly separating hyperplane f(x) = x·w^T + b = 0: the constraints on f(x) are first derived from the closest points of the two classes, and the problem is then solved with the method of Lagrange multipliers and the KKT conditions; the completed training yields model N';
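The patent solves the SVM dual via Lagrange multipliers and the KKT conditions; as a compact stand-in that needs no solver, the sketch below finds the same kind of hyperplane f(x) = x·w + b = 0 by sub-gradient descent on the primal hinge loss, on synthetic two-class data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated 2-D classes stand in for live/spoof texture features.
X = np.vstack([rng.standard_normal((50, 2)) + 3.0,
               rng.standard_normal((50, 2)) - 3.0])
y = np.hstack([np.ones(50), -np.ones(50)])

w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.05
for _ in range(500):
    viol = y * (X @ w + b) < 1.0            # margin violators
    gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    gb = -y[viol].sum() / len(X)
    w, b = w - lr * gw, b - lr * gb         # sub-gradient step

train_acc = float((np.sign(X @ w + b) == y).mean())
```

The margin-violator set plays the role of the support vectors: only the closest points of the two classes end up constraining the final hyperplane, matching the intuition in S44.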
S45, the trained residual-mlp network M' and support vector machine N' both give fairly good recognition results on the image micro-texture features of the training set; according to the recognition results and confidences of the two classifiers on the training set, a good fusion weight is selected and the residual-mlp network is fused with the support vector machine; the completed training yields model B;
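The patent does not say how the fusion weight is chosen, so the sketch below assumes one simple realization: a grid search over a convex score combination, maximizing accuracy on the training set. The scores and labels are toy values.

```python
import numpy as np

def fuse(score_mlp, score_svm, alpha):
    """Convex combination of the residual-mlp and SVM confidences."""
    return alpha * score_mlp + (1 - alpha) * score_svm

def pick_alpha(s_mlp, s_svm, labels):
    """Grid-search the fusion weight that maximizes training accuracy
    (an assumed realization of 'selecting a good fusion weight')."""
    best_a, best_acc = 0.0, -1.0
    for a in np.linspace(0, 1, 101):
        acc = float(((fuse(s_mlp, s_svm, a) > 0.5) == labels).mean())
        if acc > best_acc:
            best_a, best_acc = a, acc
    return best_a, best_acc

# Toy scores: the mlp is informative here, the svm is near-chance.
labels = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
s_mlp = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.3])
s_svm = np.array([0.5, 0.4, 0.6, 0.5, 0.6, 0.4])
a, acc = pick_alpha(s_mlp, s_svm, labels)
```

Selecting the weight on held-out data rather than the training set would reduce overfitting; the patent describes selection on the training set.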
Step S5, prediction stage:
An RGB image P is first read from the camera and fed into the face detector; if a face is present in the image, the detected face is normalized to obtain the normalized face image C; the seven micro-texture features I of the normalized face image C are extracted; and the micro-texture features I are fed into the ensemble classifier B obtained in step S45 to predict the face liveness-detection result.
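The prediction pipeline can be sketched end to end. Every identifier below (detector, normalizer, feature extractor, classifier) is a hypothetical stub standing in for components the patent names but does not implement in this passage:

```python
import numpy as np

def detect_face(image):
    """Stub face detector: returns a box (y, x, h, w) or None."""
    return (0, 0, image.shape[0], image.shape[1])   # whole frame, for demo

def normalize_face(image, box):
    """Crop and standardize the detected face region."""
    y, x, h, w = box
    face = image[y:y + h, x:x + w]
    return (face - face.mean()) / (face.std() + 1e-8)

def extract_microtexture(face):
    """Stand-in for the seven features (A-G); here, simple statistics."""
    return np.array([face.mean(), face.std(),
                     np.abs(np.diff(face, axis=0)).mean()])

def classifier_B(features):
    """Stub ensemble classifier: 1.0 = live, 0.0 = spoof."""
    return float(features[1] > 0.5)

frame = np.random.default_rng(4).standard_normal((64, 64))
box = detect_face(frame)
pred = None
if box is not None:
    face = normalize_face(frame, box)
    pred = classifier_B(extract_microtexture(face))
```

The real system would plug the trained fusion model B and the seven extractors of step S2 into these slots; only the control flow matches the patent's description.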
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711375804.XA CN108038456B (en) | 2017-12-19 | 2017-12-19 | Anti-deception method in face recognition system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108038456A true CN108038456A (en) | 2018-05-15 |
CN108038456B CN108038456B (en) | 2024-01-26 |
Family
ID=62099948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711375804.XA Active CN108038456B (en) | 2017-12-19 | 2017-12-19 | Anti-deception method in face recognition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108038456B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080260212A1 (en) * | 2007-01-12 | 2008-10-23 | Moskal Michael D | System for indicating deceit and verity |
CN102819733A (en) * | 2012-08-09 | 2012-12-12 | 中国科学院自动化研究所 | Rapid detection fuzzy method of face in street view image |
CN103593598A (en) * | 2013-11-25 | 2014-02-19 | 上海骏聿数码科技有限公司 | User online authentication method and system based on living body detection and face recognition |
CN104665849A (en) * | 2014-12-11 | 2015-06-03 | 西南交通大学 | Multi-physiological signal multi-model interaction-based high-speed railway dispatcher stress detecting method |
US20150163345A1 (en) * | 2013-12-06 | 2015-06-11 | Digimarc Corporation | Smartphone-based methods and systems |
CN106650669A (en) * | 2016-12-27 | 2017-05-10 | 重庆邮电大学 | Face recognition method for identifying counterfeit photo deception |
CN106651750A (en) * | 2015-07-22 | 2017-05-10 | 美国西门子医疗解决公司 | Method and system used for 2D/3D image registration based on convolutional neural network regression |
CN106778683A (en) * | 2017-01-12 | 2017-05-31 | 西安电子科技大学 | Based on the quick Multi-angle face detection method for improving LBP features |
CN106874898A (en) * | 2017-04-08 | 2017-06-20 | 复旦大学 | Extensive face identification method based on depth convolutional neural networks model |
Non-Patent Citations (2)
Title |
---|
PETER WILD et al.: "Robust multimodal face and fingerprint fusion in the presence of spoofing attacks", pages 17 - 25
WU Jipeng et al.: "Face liveness detection method based on FS-LBP features" (基于FS-LBP特征的人脸活体检测方法), vol. 22, no. 5, pages 65 - 72
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875618A (en) * | 2018-06-08 | 2018-11-23 | 高新兴科技集团股份有限公司 | A kind of human face in-vivo detection method, system and device |
CN108921071A (en) * | 2018-06-24 | 2018-11-30 | 深圳市中悦科技有限公司 | Human face in-vivo detection method, device, storage medium and processor |
CN109271863A (en) * | 2018-08-15 | 2019-01-25 | 北京小米移动软件有限公司 | Human face in-vivo detection method and device |
CN109255322A (en) * | 2018-09-03 | 2019-01-22 | 北京诚志重科海图科技有限公司 | A kind of human face in-vivo detection method and device |
CN109558813A (en) * | 2018-11-14 | 2019-04-02 | 武汉大学 | A kind of AI depth based on pulse signal is changed face video evidence collecting method |
CN109598242A (en) * | 2018-12-06 | 2019-04-09 | 中科视拓(北京)科技有限公司 | A kind of novel biopsy method |
CN109598242B (en) * | 2018-12-06 | 2023-04-18 | 中科视拓(北京)科技有限公司 | Living body detection method |
CN109795830A (en) * | 2019-03-04 | 2019-05-24 | 北京旷视科技有限公司 | It is automatically positioned the method and device of logistics tray |
CN109977865B (en) * | 2019-03-26 | 2023-08-18 | 江南大学 | Fraud detection method based on face color space and metric analysis |
CN109977865A (en) * | 2019-03-26 | 2019-07-05 | 江南大学 | A kind of fraud detection method based on face color space and metric analysis |
CN109948566B (en) * | 2019-03-26 | 2023-08-18 | 江南大学 | Double-flow face anti-fraud detection method based on weight fusion and feature selection |
CN109948566A (en) * | 2019-03-26 | 2019-06-28 | 江南大学 | A kind of anti-fraud detection method of double-current face based on weight fusion and feature selecting |
CN109993124A (en) * | 2019-04-03 | 2019-07-09 | 深圳市华付信息技术有限公司 | Based on the reflective biopsy method of video, device and computer equipment |
CN111967289A (en) * | 2019-05-20 | 2020-11-20 | 高新兴科技集团股份有限公司 | Uncooperative human face in-vivo detection method and computer storage medium |
US20210406525A1 (en) * | 2019-06-03 | 2021-12-30 | Tencent Technology (Shenzhen) Company Limited | Facial expression recognition method and apparatus, electronic device and storage medium |
CN110569737A (en) * | 2019-08-15 | 2019-12-13 | 深圳华北工控软件技术有限公司 | Face recognition deep learning method and face recognition acceleration camera |
CN110516619A (en) * | 2019-08-29 | 2019-11-29 | 河南中原大数据研究院有限公司 | A kind of cos-attack recognition of face attack algorithm |
CN110688946A (en) * | 2019-09-26 | 2020-01-14 | 上海依图信息技术有限公司 | Public cloud silence in-vivo detection device and method based on picture identification |
CN110796648A (en) * | 2019-10-28 | 2020-02-14 | 南京泓图人工智能技术研究院有限公司 | Facial chloasma area automatic segmentation method based on melanin extraction |
CN111091047B (en) * | 2019-10-28 | 2021-08-27 | 支付宝(杭州)信息技术有限公司 | Living body detection method and device, server and face recognition equipment |
CN111091047A (en) * | 2019-10-28 | 2020-05-01 | 支付宝(杭州)信息技术有限公司 | Living body detection method and device, server and face recognition equipment |
CN110929680A (en) * | 2019-12-05 | 2020-03-27 | 四川虹微技术有限公司 | Human face living body detection method based on feature fusion |
CN110929680B (en) * | 2019-12-05 | 2023-05-26 | 四川虹微技术有限公司 | Human face living body detection method based on feature fusion |
CN110956149A (en) * | 2019-12-06 | 2020-04-03 | 中国平安财产保险股份有限公司 | Pet identity verification method, device and equipment and computer readable storage medium |
CN111460419B (en) * | 2020-03-31 | 2020-11-27 | 深圳市微网力合信息技术有限公司 | Internet of things artificial intelligence face verification method and Internet of things cloud server |
CN111460419A (en) * | 2020-03-31 | 2020-07-28 | 周亚琴 | Internet of things artificial intelligence face verification method and Internet of things cloud server |
CN111738735B (en) * | 2020-07-23 | 2021-07-13 | 腾讯科技(深圳)有限公司 | Image data processing method and device and related equipment |
CN111738735A (en) * | 2020-07-23 | 2020-10-02 | 腾讯科技(深圳)有限公司 | Image data processing method and device and related equipment |
CN113449707A (en) * | 2021-08-31 | 2021-09-28 | 杭州魔点科技有限公司 | Living body detection method, electronic apparatus, and storage medium |
CN113449707B (en) * | 2021-08-31 | 2021-11-30 | 杭州魔点科技有限公司 | Living body detection method, electronic apparatus, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108038456A (en) | A kind of anti-fraud method in face identification system | |
CN104751108B (en) | Facial image identification device and facial image recognition method | |
CN105138954B (en) | A kind of image automatic screening inquiry identifying system | |
CN101609500B (en) | Quality estimation method of exit-entry digital portrait photos | |
US8345936B2 (en) | Multispectral iris fusion for enhancement and interoperability | |
CN106339673A (en) | ATM identity authentication method based on face recognition | |
CN110084156A (en) | A kind of gait feature abstracting method and pedestrian's personal identification method based on gait feature | |
CN108596041B (en) | A kind of human face in-vivo detection method based on video | |
CN107111743A (en) | The vital activity tracked using gradual eyelid is detected | |
CN101390128B (en) | Detecting method and detecting system for positions of face parts | |
CN105956572A (en) | In vivo face detection method based on convolutional neural network | |
TW200842733A (en) | Object image detection method | |
CN109598242B (en) | Living body detection method | |
CN110516616A (en) | A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set | |
CN109858439A (en) | A kind of biopsy method and device based on face | |
Wang et al. | Investigation into recognition algorithm of helmet violation based on YOLOv5-CBAM-DCN | |
CN109214336A (en) | A kind of vehicle window marker detection method and device | |
CN107798279A (en) | Face living body detection method and device | |
CN112396011B (en) | Face recognition system based on video image heart rate detection and living body detection | |
CN109740572A (en) | A kind of human face in-vivo detection method based on partial color textural characteristics | |
CN106650606A (en) | Matching and processing method for face image and face image model construction system | |
Hadiprakoso et al. | Face anti-spoofing using CNN classifier & face liveness detection | |
CN106855944A (en) | Pedestrian's Marker Identity method and device | |
CN107742094A (en) | Improve the image processing method of testimony of a witness comparison result | |
CN109101925A (en) | Biopsy method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||