CN108038456B - Anti-deception method in face recognition system - Google Patents

Anti-deception method in face recognition system

Info

Publication number
CN108038456B
Authority
CN
China
Prior art keywords
face
training
model
image
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711375804.XA
Other languages
Chinese (zh)
Other versions
CN108038456A (en)
Inventor
张宇聪
张�杰
刘昕
山世光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seetatech Beijing Technology Co ltd
Original Assignee
Seetatech Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seetatech Beijing Technology Co ltd filed Critical Seetatech Beijing Technology Co ltd
Priority to CN201711375804.XA priority Critical patent/CN108038456B/en
Publication of CN108038456A publication Critical patent/CN108038456A/en
Application granted granted Critical
Publication of CN108038456B publication Critical patent/CN108038456B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Abstract

The invention discloses an anti-deception method in a face recognition system, comprising the following stages: image acquisition and normalization, feature extraction, model design, training, and prediction. The feature extraction stage extracts seven features: color diversity, blur degree, image moment, sharpness, spectral, specular, and convolutional features. By using a residual-mlp network and facial micro-texture features together with a support vector machine, the invention greatly improves the accuracy and speed of face liveness detection and achieves a better detection result. Moreover, the invention reaches faster-than-real-time face liveness detection without any hardware beyond a camera and without cooperation from the person being tested, solving the prior-art problems of long liveness detection time, added hardware, and weak detection capability.

Description

Anti-deception method in face recognition system
Technical Field
The invention relates to anti-spoofing methods, and in particular to an anti-deception method in a face recognition system; it belongs to the technical field of machine vision.
Background
Face recognition has gradually become an important means of authentication owing to its speed, effectiveness, and user-friendliness. At present, however, many face recognition systems cannot tell whether a face is genuine. To prevent visual spoofing with a fake face, liveness detection is therefore introduced into face recognition systems, improving their practicality and security. The main existing face liveness detection methods are:
(1) Active liveness detection based on video-stream interaction: the system first performs face detection and facial keypoint localization; if a face is present in the video, several actions are issued at random, and if the tester completes the specified actions within the allotted time, the system judges the tester to be a live subject, otherwise not. This method, however, requires user cooperation and takes a long time.
(2) Face liveness detection based on the bright-pupil effect: live and non-live faces are distinguished by checking whether the bright-pupil effect appears in the eye region of the face. This method requires an additional light source and is therefore costly.
(3) Face spoofing detection based on image distortion analysis: the system first performs face detection and facial keypoint localization. If a face is present in the picture, four features (specular reflection, blur degree, moment, and color diversity features) are extracted, and a support vector machine is used for training and prediction. The extracted features are relatively simple and have weak discriminative and generalization ability, so the method does not hold up well in real scenes.
Disclosure of Invention
To remedy these deficiencies, the invention provides an anti-deception method in a face recognition system.
To solve the above technical problems, the invention adopts the following technical scheme: an anti-deception method in a face recognition system, comprising the following specific steps:
Step S1, acquiring an image and performing normalization:
An RGB image is acquired by a camera and input to a cascaded-CNN face detection module. The module performs face detection on the RGB image; if a face is detected, the face region of the picture is input to a deep neural network for facial keypoint localization, and the affine transformation from the detected keypoints to standard keypoints is computed so that face images in arbitrary poses are transformed into face images in a standard pose;
Step S2, feature extraction stage:
The following seven features are extracted:
a. Color diversity features: two features are extracted from the color distribution: (1) the number of pixels belonging to the 60 most frequently occurring colors, as a percentage of the total number of pixels; (2) the total number of distinct colors appearing in the face image;
b. Blur degree feature: first compute the degree of color variation between adjacent pixels of the input image; then low-pass filter the image and compute the adjacent-pixel variation again; compare the summed variation of the original image with that of the blurred image and take the comparison result as the blur degree feature;
c. Image moment features: store the first moment and the second and third central moments of each color channel of the RGB image; the first moment is the mean, i.e. the average color of the image, the second central moment is the variance of each channel, and the third central moment is the skewness of each channel;
d. Sharpness feature: compute the sharpness of the face using the Tenengrad gradient method;
e. Spectral features: track the face in the green channel of the RGB video, detect facial keypoints, and select five regions (forehead, left cheek, right cheek, left of the ear, right of the ear) in which to detect PPG signals; after the PPG signals of the five regions are obtained, subtract their means and pass them through a 0.5 Hz to 5 Hz band-pass filter to obtain five new signals, which are taken as the spectral features;
f. Specular features: according to the dichromatic reflection model, the image intensity I at a particular position x on an object can be decomposed into a diffuse reflection component I_d and a specular reflection component I_s:
I(x) = I_d + I_s = w_d(x)S(x)E(x) + w_s(x)E(x)   (Equation 1)
where E(x) is the intensity of the incident light, w_d(x) and w_s(x) are the weight coefficients of diffuse and specular reflection respectively, and S(x) is the local diffuse reflectance;
A 2D attack face re-captured from a real face image is modeled as:
I'(x) = I'_d + I'_s = F(I(x)) + w'_s(x)E'(x)   (Equation 2)
Since the diffuse component is determined by a distortion of the original image, F(I(x)) replaces I'_d;
For a printed-photo attack face, I(x) is first converted into the intensity of ink on the paper and then reaches the final image intensity through diffuse reflection off the paper surface; for a video attack, I(x) is converted into the radiance at the pixels of the LCD screen. Likewise, the specular component also differs from that of a real face because the surface of the attack medium differs. For a single image, the specular reflection component is first separated out, and then the proportion of specular pixels and the mean intensity and variance of the specular pixels are computed as the specular reflection features;
g. Convolutional features: prepare the data for training a convolutional network; input the training data into the constructed network and train it; input the image read from the camera into the trained convolutional neural network; and extract the feature vector output by the network;
Step S3, model design stage:
The residual-mlp model is designed as follows: the model as a whole is denoted M and consists of a deep neural network A and residual structures C. A residual structure C is added between every two layers of network A, converting the function H(x) that the original network must learn into F(x) + x. Adding the residual structure introduces no extra parameters or computation, but F(x) is easier to optimize than H(x), which greatly speeds up model training, improves the training result, and alleviates the vanishing-gradient problem as the model is deepened;
Step S4, training stage:
The training stage comprises the following steps:
S41, divide the labeled face liveness detection image set D into a training set T and a validation set V;
S42, denote the residual-mlp network model M, with layers M1, …, Mn; the model takes the combination I of facial micro-texture features extracted from an input face image P and outputs a recognition result O after passing through each layer of the model. Each layer of the network consists of several neurons, each with a preset weight; the model is then trained with a batch stochastic gradient descent algorithm according to the difference between the network output and the labels of the input features, continuously adjusting the weights;
S43, verify the training effect of the model with the validation set V; that is, stop training once the model attains good liveness detection accuracy on V and the accuracy no longer improves as training continues. Training then yields the model M';
S44, for face liveness detection, the support vector machine finds the optimal linear separating hyperplane f(x) = x·wᵀ + b = 0; the constraints on f(x) are first obtained from the points of the two classes closest to the plane, the problem is then solved with the Lagrange multiplier method and the KKT conditions, and training finally yields the model N';
S45, the trained residual-mlp network M' and support vector machine N' each recognize the micro-texture features of the training images fairly well; based on the recognition results and confidences of the two classifiers, a good fusion weight is selected on the training set, the residual-mlp network is fused with the support vector machine, and training is completed to obtain the model B;
Step S5, prediction stage:
First an RGB image P is read from the camera and input to the face detector. If a face is present in the image, the detected face is normalized to obtain a normalized face image C; the seven micro-texture features I of C are extracted; and the features I are input to the fusion classifier B obtained in step S45 to predict the face liveness detection result.
By using the residual-mlp network and facial micro-texture features together with a support vector machine, the invention greatly improves the accuracy and speed of face liveness detection; in particular, the added residual structure lets the model achieve a better detection result. Moreover, the liveness detection method of the invention reaches faster-than-real-time speed without any hardware beyond a camera and without cooperation from the person being tested; compared with prior techniques it offers high detection speed, high anti-spoofing recognition accuracy, and lower detection cost.
Drawings
FIG. 1 is a schematic overall flow chart of the present invention.
FIG. 2 illustrates the design process of the residual-mlp model.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The anti-deception method in a face recognition system shown in FIG. 1 comprises the following specific steps:
Step S1, acquiring an image and performing normalization:
An RGB image is acquired by a camera and input to a cascaded-CNN face detection module. The module performs face detection on the RGB image; if a face is detected, the face region of the picture is input to a deep neural network for facial keypoint localization, and the affine transformation from the detected keypoints to standard keypoints is computed so that face images in arbitrary poses are transformed into face images in a standard pose;
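As an illustration of this normalization step, the sketch below warps a detected face to a standard pose from five landmarks. It is a minimal sketch assuming OpenCV and a generic 5-point landmark detector; the patent's cascaded-CNN detector and keypoint network are not published, and the template coordinates are illustrative values, not taken from the patent.

# Sketch of the step-S1 normalization: warp a detected face to a standard
# pose via the affine transform from detected keypoints to standard keypoints.
import cv2
import numpy as np

# Standard 5-point template (eye centers, nose tip, mouth corners) for a
# 112x112 crop -- illustrative values, not specified by the patent.
STD_POINTS = np.float32([
    [38.3, 51.7], [73.5, 51.5], [56.0, 71.7], [41.5, 92.4], [70.7, 92.2],
])

def normalize_face(image_bgr, landmarks, size=112):
    """Warp the face so its landmarks align with the standard template."""
    # Least-squares partial-affine fit from detected to standard points.
    matrix, _ = cv2.estimateAffinePartial2D(
        np.float32(landmarks), STD_POINTS, method=cv2.LMEDS)
    return cv2.warpAffine(image_bgr, matrix, (size, size))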
Step S2, feature extraction stage:
The following seven features are extracted:
a. Color diversity features: since attack media often lose color diversity when a face is re-presented, two features are extracted from the color distribution: (1) the number of pixels belonging to the 60 most frequently occurring colors, as a percentage of the total number of pixels; (2) the total number of distinct colors appearing in the face image (several of the features below are sketched in code after this list);
b. Blur degree feature: the attack medium usually has to be held close to the camera, which leaves the spoofed face blurred and defocused, so the invention takes blur degree as a cue for liveness detection. The feature is extracted as follows: first compute the degree of color variation between adjacent pixels of the input image; then low-pass filter the image and compute the adjacent-pixel variation again; compare the summed variation of the original image with that of the blurred image and take the comparison result as the blur degree feature;
c. Image moment features: store the first moment and the second and third central moments of each color channel of the RGB (red, green, blue) image; the first moment is the mean, i.e. the average color of the image, the second central moment is the variance of each channel, and the third central moment is the skewness of each channel;
d. Sharpness feature: compute the sharpness of the face using the Tenengrad gradient method, which applies the Sobel operator to compute gradients in the horizontal and vertical directions; for the same scene, the higher the gradient value, the sharper the image;
e. Spectral features: blood flow causes slight changes in facial skin color, and the signal produced by these changes, called the PPG (photoplethysmogram) signal, can only be detected in video of a real face. Part of the light is reflected after passing through the skin; if an object covers the face, the light is reflected or absorbed by the covering, so no sufficient signal is detected. The feature extraction steps are: (1) track the face in the green channel of the RGB video, detect facial keypoints, and select five regions (forehead, left cheek, right cheek, left of the ear, right of the ear) in which to detect PPG signals; (2) after the PPG signals of the five regions are obtained, subtract their means and pass them through a 0.5 Hz to 5 Hz band-pass filter to obtain five new signals; (3) take the five new signals as the spectral features;
f. Specular features: according to the dichromatic reflection model, the image intensity I at a particular position x on an object can be decomposed into a diffuse reflection component I_d and a specular reflection component I_s:
I(x) = I_d + I_s = w_d(x)S(x)E(x) + w_s(x)E(x)   (Equation 1)
where E(x) is the intensity of the incident light, w_d(x) and w_s(x) are the weight coefficients of diffuse and specular reflection respectively, and S(x) is the local diffuse reflectance;
A 2D attack face re-captured from a real face image is modeled as:
I'(x) = I'_d + I'_s = F(I(x)) + w'_s(x)E'(x)   (Equation 2)
Since the diffuse component is determined by a distortion of the original image, F(I(x)) replaces I'_d;
For a printed-photo attack face, I(x) is first converted into the intensity of ink on the paper and then reaches the final image intensity through diffuse reflection off the paper surface; for a video attack, I(x) is converted into the radiance at the pixels of the LCD screen. Likewise, the specular component also differs from that of a real face because the surface of the attack medium differs. For a single image, the specular reflection component is first separated out, and then the proportion of specular pixels and the mean intensity and variance of the specular pixels are computed as the specular reflection features;
g. Convolutional features: a convolutional neural network is a feed-forward network that can learn an effective representation of the raw image, so a CNN can recognize patterns on a picture directly from raw pixels and performs excellently on image processing. The main steps for extracting convolutional features are: (1) prepare the data for training the convolutional network; (2) input the training data into the constructed network and train it; (3) input the image read from the camera into the trained convolutional neural network; (4) extract the feature vector output by the network;
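Before moving to the model design stage, the sketch below illustrates three of the handcrafted features above (a: color diversity, b: blur degree, d: Tenengrad sharpness) and the 0.5-5 Hz band-pass used for the spectral feature (e). It is a hedged sketch: the Gaussian low-pass choice, kernel size, filter order, and normalizations are assumptions the patent does not fix.

import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def color_diversity(face_bgr):
    """(a) Share of pixels covered by the 60 most frequent colors, plus
    the total number of distinct colors in the face crop."""
    pixels = face_bgr.reshape(-1, 3)
    _, counts = np.unique(pixels, axis=0, return_counts=True)
    top60_share = np.sort(counts)[-60:].sum() / counts.sum()
    return top60_share, counts.size

def blur_degree(face_gray, ksize=5):
    """(b) Compare adjacent-pixel variation before and after low-pass
    filtering; a recaptured face loses little variation when blurred again."""
    def variation(img):
        img = img.astype(np.float32)
        return np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()
    blurred = cv2.GaussianBlur(face_gray, (ksize, ksize), 0)
    v_orig, v_blur = variation(face_gray), variation(blurred)
    return (v_orig - v_blur) / max(v_orig, 1e-6)

def tenengrad_sharpness(face_gray):
    """(d) Tenengrad sharpness: mean squared Sobel gradient magnitude."""
    gx = cv2.Sobel(face_gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(face_gray, cv2.CV_32F, 0, 1)
    return float(np.mean(gx ** 2 + gy ** 2))

def bandpass_ppg(region_signal, fps, low=0.5, high=5.0, order=3):
    """(e) Mean-subtract a region's PPG signal and band-pass it to 0.5-5 Hz."""
    b, a = butter(order, [low, high], btype="band", fs=fps)
    return filtfilt(b, a, region_signal - np.mean(region_signal))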
Step S3, model design stage:
The residual-mlp model is designed as follows: the model as a whole is denoted M and, as shown in FIG. 2, consists of two parts, a deep neural network A and residual structures C. A residual structure C is added between every two layers of network A, converting the function H(x) that the original network must learn into F(x) + x. Adding the residual structure introduces no extra parameters or computation, but F(x) is easier to optimize than H(x), which greatly speeds up model training, improves the training result, and alleviates the vanishing-gradient problem as the model is deepened;
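A minimal PyTorch sketch of this residual-mlp idea: an identity skip connection around every pair of fully connected layers, so each pair learns the residual F(x) and outputs F(x) + x. The width, depth, and activation placement below are assumptions; the patent specifies none of them.

import torch
import torch.nn as nn

class ResidualMLPBlock(nn.Module):
    """Two fully connected layers wrapped by an identity shortcut, so the
    pair learns F(x) and outputs F(x) + x; the shortcut adds no parameters."""
    def __init__(self, width):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.f(x) + x)

class ResidualMLP(nn.Module):
    """Model M: input projection, n residual blocks, live/spoof logits."""
    def __init__(self, in_dim, width=256, n_blocks=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            *[ResidualMLPBlock(width) for _ in range(n_blocks)],
            nn.Linear(width, 2),
        )

    def forward(self, x):
        return self.net(x)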
Step S4, training stage:
The training stage comprises the following steps:
S41, divide the labeled face liveness detection image set D into a training set T and a validation set V;
S42, denote the residual-mlp network model M, with layers M1, …, Mn; the model takes the combination I of facial micro-texture features extracted from an input face image P and outputs a recognition result O after passing through each layer of the model. Each layer of the network consists of several neurons, each with a preset weight; the model is then trained with a batch stochastic gradient descent algorithm according to the difference between the network output and the labels of the input features, continuously adjusting the weights;
S43, verify the training effect of the model with the validation set V; that is, stop training once the model attains good liveness detection accuracy on V and the accuracy no longer improves as training continues. Training then yields the model M';
S44, for face liveness detection, the support vector machine finds the optimal linear separating hyperplane f(x) = x·wᵀ + b = 0; the constraints on f(x) are first obtained from the points of the two classes closest to the plane, the problem is then solved with the Lagrange multiplier method and the KKT conditions, and training finally yields the model N';
S45, the trained residual-mlp network M' and support vector machine N' each recognize the micro-texture features of the training images fairly well; based on the recognition results and confidences of the two classifiers, a good fusion weight is selected on the training set, the residual-mlp network is fused with the support vector machine, and training is completed to obtain the model B; the fused classifier performs better than either classifier alone (a code sketch of S44 and S45 follows below);
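As referenced above, here is a hedged sketch of steps S44 and S45, assuming scikit-learn for the SVM and a single scalar fusion weight chosen by grid search; the patent says only that a good fusion weight is selected from the two classifiers' results and confidences, so the exact rule here is an assumption.

import numpy as np
from sklearn.svm import SVC

def train_svm(features, labels):
    """S44: linear SVM N' on the micro-texture features; probability=True
    exposes a confidence score used later for fusion."""
    svm = SVC(kernel="linear", probability=True)
    svm.fit(features, labels)
    return svm

def fuse_scores(mlp_live_probs, svm_live_probs, weight):
    """S45: fused model B as a weighted sum of the live-class scores."""
    return weight * mlp_live_probs + (1.0 - weight) * svm_live_probs

def pick_fusion_weight(mlp_live_probs, svm_live_probs, labels):
    """Grid-search the fusion weight that maximizes accuracy on these samples."""
    best_w, best_acc = 0.5, 0.0
    for w in np.linspace(0.0, 1.0, 21):
        preds = fuse_scores(mlp_live_probs, svm_live_probs, w) > 0.5
        acc = np.mean(preds == labels)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w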
Step S5, prediction stage:
First an RGB image P is read from the camera and input to the face detector. If a face is present in the image, the detected face is normalized to obtain a normalized face image C; the seven micro-texture features I of C are extracted; and the features I are input to the fusion classifier B obtained in step S45 to predict the face liveness detection result.
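Tying the stages together, the sketch below traces the step-S5 prediction path. The detector, landmarker, extract_features, and mlp_live_probability helpers are hypothetical stand-ins for the components of steps S1, S2, and S4; normalize_face and fuse_scores refer to the sketches given earlier.

def predict_liveness(frame_bgr, detector, landmarker, mlp, svm, weight,
                     threshold=0.5):
    """S5: detect -> normalize -> extract micro-texture features -> fuse.
    detector/landmarker stand in for the cascaded-CNN modules of step S1;
    extract_features (hypothetical) would concatenate the seven step-S2
    features; mlp_live_probability (hypothetical) runs the residual-mlp."""
    box = detector(frame_bgr)
    if box is None:
        return None                                 # no face detected
    landmarks = landmarker(frame_bgr, box)
    face = normalize_face(frame_bgr, landmarks)     # step-S1 sketch
    feats = extract_features(face)                  # hypothetical helper
    p_mlp = mlp_live_probability(mlp, feats)        # hypothetical helper
    p_svm = svm.predict_proba([feats])[0, 1]        # SVM live-class score
    score = fuse_scores(p_mlp, p_svm, weight)       # step-S4 sketch
    return bool(score > threshold)                  # True = live face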
The invention provides a classifier that fuses residual-mlp with a traditional support vector machine, together with a method and system for face liveness detection using facial texture features. Compared with the prior art, the key points and innovations are:
1. Facial micro-texture features: 1) the micro-texture features of the face comprise blur degree, image moment, color diversity, image sharpness, specular reflection, spectral, and convolutional features; 2) the features are computed from the normalized face image. Using these features greatly improves the accuracy and speed of face liveness detection; in particular, the method needs no hardware beyond a camera and no cooperation from the person being tested.
2. The residual-mlp liveness detection framework: this framework adds a residual structure to a conventional neural network, which counters the rise in error rate that occurs when conventional liveness detection models are deepened, achieving a better detection result.
3. A classifier fusing residual-mlp with a traditional support vector machine: the fused classifier is used to classify faces as genuine or fake, and the fusion achieves a better liveness detection result than either classifier alone.
The above embodiments do not limit the present invention; the invention is not restricted to the above examples, and variations falling within the scope of the claims likewise belong to the protection scope of the invention.

Claims (1)

1. An anti-deception method in a face recognition system, characterized in that the method comprises the following specific steps:
Step S1, acquiring an image and performing normalization:
An RGB image is acquired by a camera and input to a cascaded-CNN face detection module. The module performs face detection on the RGB image; if a face is detected, the face region of the picture is input to a deep neural network for facial keypoint localization, and the affine transformation from the detected keypoints to standard keypoints is computed so that face images in arbitrary poses are transformed into face images in a standard pose;
Step S2, feature extraction stage:
The following seven features are extracted:
a. Color diversity features: two features are extracted from the color distribution: (1) the number of pixels belonging to the 60 most frequently occurring colors, as a percentage of the total number of pixels; (2) the total number of distinct colors appearing in the face image;
b. Blur degree feature: first compute the degree of color variation between adjacent pixels of the input image; then low-pass filter the image and compute the adjacent-pixel variation again; compare the summed variation of the original image with that of the blurred image and take the comparison result as the blur degree feature;
c. Image moment features: store the first moment and the second and third central moments of each color channel of the RGB image; the first moment is the mean, i.e. the average color of the image, the second central moment is the variance of each channel, and the third central moment is the skewness of each channel;
d. Sharpness feature: compute the sharpness of the face using the Tenengrad gradient method;
e. Spectral features: track the face in the green channel of the RGB video, detect facial keypoints, and select five regions (forehead, left cheek, right cheek, left of the ear, right of the ear) in which to detect PPG signals; after the PPG signals of the five regions are obtained, subtract their means and pass them through a 0.5 Hz to 5 Hz band-pass filter to obtain five new signals, which are taken as the spectral features;
f. Specular features: according to the dichromatic reflection model, the image intensity I at a particular position x on an object can be decomposed into a diffuse reflection component I_d and a specular reflection component I_s:
I(x) = I_d + I_s = w_d(x)S(x)E(x) + w_s(x)E(x)   (Equation 1)
where E(x) is the intensity of the incident light, w_d(x) and w_s(x) are the weight coefficients of diffuse and specular reflection respectively, and S(x) is the local diffuse reflectance;
A 2D attack face re-captured from a real face image is modeled as:
I'(x) = I'_d + I'_s = F(I(x)) + w'_s(x)E'(x)   (Equation 2)
Since the diffuse component is determined by a distortion of the original image, F(I(x)) replaces I'_d;
For a printed-photo attack face, I(x) is first converted into the intensity of ink on the paper and then reaches the final image intensity through diffuse reflection off the paper surface; for a video attack, I(x) is converted into the radiance at the pixels of the LCD screen. Likewise, the specular component also differs from that of a real face because the surface of the attack medium differs. For a single image, the specular reflection component is first separated out, and then the proportion of specular pixels and the mean intensity and variance of the specular pixels are computed as the specular reflection features;
g. Convolutional features: prepare the data for training a convolutional network; input the training data into the constructed network and train it; input the image read from the camera into the trained convolutional neural network; and extract the feature vector output by the network;
Step S3, model design stage:
The residual-mlp model is designed as follows: the model as a whole is denoted M and consists of a deep neural network A and residual structures C. A residual structure C is added between every two layers of network A, converting the function H(x) that the original network must learn into F(x) + x. Adding the residual structure introduces no extra parameters or computation, but F(x) is easier to optimize than H(x), which greatly speeds up model training, improves the training result, and alleviates the vanishing-gradient problem as the model is deepened;
Step S4, training stage:
The training stage comprises the following steps:
S41, divide the labeled face liveness detection image set D into a training set T and a validation set V;
S42, denote the residual-mlp network model M, with layers M1, …, Mn; the model takes the combination I of facial micro-texture features extracted from an input face image P and outputs a recognition result O after passing through each layer of the model. Each layer of the network consists of several neurons, each with a preset weight; the model is then trained with a batch stochastic gradient descent algorithm according to the difference between the network output and the labels of the input features, continuously adjusting the weights;
S43, verify the training effect of the model with the validation set V; that is, stop training once the model attains good liveness detection accuracy on V and the accuracy no longer improves as training continues. Training then yields the model M';
S44, for face liveness detection, the support vector machine finds the optimal linear separating hyperplane f(x) = x·wᵀ + b = 0; the constraints on f(x) are first obtained from the points of the two classes closest to the plane, the problem is then solved with the Lagrange multiplier method and the KKT conditions, and training finally yields the model N';
S45, the trained residual-mlp network M' and support vector machine N' each recognize the micro-texture features of the training images fairly well; based on the recognition results and confidences of the two classifiers, a good fusion weight is selected on the training set, the residual-mlp network is fused with the support vector machine, and training is completed to obtain the model B;
Step S5, prediction stage:
First an RGB image P is read from the camera and input to the face detector. If a face is present in the image, the detected face is normalized to obtain a normalized face image C; the seven micro-texture features I of C are extracted; and the features I are input to the fusion classifier B obtained in step S45 to predict the face liveness detection result.
CN201711375804.XA 2017-12-19 2017-12-19 Anti-deception method in face recognition system Active CN108038456B (en)

Priority Applications (1)

Application Number: CN201711375804.XA (granted as CN108038456B) · Priority Date: 2017-12-19 · Filing Date: 2017-12-19 · Title: Anti-deception method in face recognition system

Applications Claiming Priority (1)

Application Number: CN201711375804.XA (granted as CN108038456B) · Priority Date: 2017-12-19 · Filing Date: 2017-12-19 · Title: Anti-deception method in face recognition system

Publications (2)

Publication Number Publication Date
CN108038456A CN108038456A (en) 2018-05-15
CN108038456B true CN108038456B (en) 2024-01-26

Family

ID=62099948

Family Applications (1)

Application Number: CN201711375804.XA (granted as CN108038456B, Active) · Title: Anti-deception method in face recognition system · Priority Date: 2017-12-19 · Filing Date: 2017-12-19

Country Status (1)

Country Link
CN (1) CN108038456B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875618A (en) * 2018-06-08 2018-11-23 高新兴科技集团股份有限公司 A kind of human face in-vivo detection method, system and device
CN108921071A (en) * 2018-06-24 2018-11-30 深圳市中悦科技有限公司 Human face in-vivo detection method, device, storage medium and processor
CN109271863B (en) * 2018-08-15 2022-03-18 北京小米移动软件有限公司 Face living body detection method and device
CN109255322B (en) * 2018-09-03 2019-11-19 北京诚志重科海图科技有限公司 A kind of human face in-vivo detection method and device
CN109558813A (en) * 2018-11-14 2019-04-02 武汉大学 A kind of AI depth based on pulse signal is changed face video evidence collecting method
CN109598242B (en) * 2018-12-06 2023-04-18 中科视拓(北京)科技有限公司 Living body detection method
CN109795830A (en) * 2019-03-04 2019-05-24 北京旷视科技有限公司 It is automatically positioned the method and device of logistics tray
CN109948566B (en) * 2019-03-26 2023-08-18 江南大学 Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN109977865B (en) * 2019-03-26 2023-08-18 江南大学 Fraud detection method based on face color space and metric analysis
CN109993124B (en) * 2019-04-03 2023-07-14 深圳华付技术股份有限公司 Living body detection method and device based on video reflection and computer equipment
CN111967289A (en) * 2019-05-20 2020-11-20 高新兴科技集团股份有限公司 Uncooperative human face in-vivo detection method and computer storage medium
CN110263681B (en) * 2019-06-03 2021-07-27 腾讯科技(深圳)有限公司 Facial expression recognition method and device, storage medium and electronic device
CN110569737A (en) * 2019-08-15 2019-12-13 深圳华北工控软件技术有限公司 Face recognition deep learning method and face recognition acceleration camera
CN110516619A (en) * 2019-08-29 2019-11-29 河南中原大数据研究院有限公司 A kind of cos-attack recognition of face attack algorithm
CN110688946A (en) * 2019-09-26 2020-01-14 上海依图信息技术有限公司 Public cloud silence in-vivo detection device and method based on picture identification
CN111091047B (en) * 2019-10-28 2021-08-27 支付宝(杭州)信息技术有限公司 Living body detection method and device, server and face recognition equipment
CN110796648B (en) * 2019-10-28 2023-06-09 南京泓图人工智能技术研究院有限公司 Automatic facial chloasma area segmentation method based on melanin extraction
CN110929680B (en) * 2019-12-05 2023-05-26 四川虹微技术有限公司 Human face living body detection method based on feature fusion
CN110956149A (en) * 2019-12-06 2020-04-03 中国平安财产保险股份有限公司 Pet identity verification method, device and equipment and computer readable storage medium
CN111460419B (en) * 2020-03-31 2020-11-27 深圳市微网力合信息技术有限公司 Internet of things artificial intelligence face verification method and Internet of things cloud server
CN111738735B (en) * 2020-07-23 2021-07-13 腾讯科技(深圳)有限公司 Image data processing method and device and related equipment
CN113449707B (en) * 2021-08-31 2021-11-30 杭州魔点科技有限公司 Living body detection method, electronic apparatus, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819733A (en) * 2012-08-09 2012-12-12 中国科学院自动化研究所 Rapid detection fuzzy method of face in street view image
CN103593598A (en) * 2013-11-25 2014-02-19 上海骏聿数码科技有限公司 User online authentication method and system based on living body detection and face recognition
CN104665849A (en) * 2014-12-11 2015-06-03 西南交通大学 Multi-physiological signal multi-model interaction-based high-speed railway dispatcher stress detecting method
CN106650669A (en) * 2016-12-27 2017-05-10 重庆邮电大学 Face recognition method for identifying counterfeit photo deception
CN106651750A (en) * 2015-07-22 2017-05-10 美国西门子医疗解决公司 Method and system used for 2D/3D image registration based on convolutional neural network regression
CN106778683A (en) * 2017-01-12 2017-05-31 西安电子科技大学 Based on the quick Multi-angle face detection method for improving LBP features
CN106874898A (en) * 2017-04-08 2017-06-20 复旦大学 Extensive face identification method based on depth convolutional neural networks model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080260212A1 (en) * 2007-01-12 2008-10-23 Moskal Michael D System for indicating deceit and verity
US9354778B2 (en) * 2013-12-06 2016-05-31 Digimarc Corporation Smartphone-based methods and systems

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819733A (en) * 2012-08-09 2012-12-12 中国科学院自动化研究所 Rapid detection fuzzy method of face in street view image
CN103593598A (en) * 2013-11-25 2014-02-19 上海骏聿数码科技有限公司 User online authentication method and system based on living body detection and face recognition
CN104665849A (en) * 2014-12-11 2015-06-03 西南交通大学 Multi-physiological signal multi-model interaction-based high-speed railway dispatcher stress detecting method
CN106651750A (en) * 2015-07-22 2017-05-10 美国西门子医疗解决公司 Method and system used for 2D/3D image registration based on convolutional neural network regression
CN106650669A (en) * 2016-12-27 2017-05-10 重庆邮电大学 Face recognition method for identifying counterfeit photo deception
CN106778683A (en) * 2017-01-12 2017-05-31 西安电子科技大学 Based on the quick Multi-angle face detection method for improving LBP features
CN106874898A (en) * 2017-04-08 2017-06-20 复旦大学 Extensive face identification method based on depth convolutional neural networks model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Peter Wild et al. Robust multimodal face and fingerprint fusion in the presence of spoofing attacks. Pattern Recognition: The Journal of the Pattern Recognition Society, 2016, pp. 17-25. *
Wu Jipeng et al. Face liveness detection method based on FS-LBP features. Journal of Jimei University, 2017, Vol. 22, No. 5, pp. 65-72. *

Also Published As

Publication number Publication date
CN108038456A (en) 2018-05-15

Similar Documents

Publication Publication Date Title
CN108038456B (en) Anti-deception method in face recognition system
Zhang et al. Face spoofing detection based on color texture Markov feature and support vector machine recursive feature elimination
NL1016006C2 (en) Method and device for detecting eyes and body of a speaking person.
US7715596B2 (en) Method for controlling photographs of people
US11354917B2 (en) Detection of fraudulently generated and photocopied credential documents
CN108549886A (en) A kind of human face in-vivo detection method and device
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
JP7197485B2 (en) Detection system, detection device and method
JP6822482B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
CN108416291A (en) Face datection recognition methods, device and system
CN109858375A (en) Living body faces detection method, terminal and computer readable storage medium
Yeh et al. Face liveness detection based on perceptual image quality assessment features with multi-scale analysis
Hadiprakoso et al. Face anti-spoofing using CNN classifier & face liveness detection
Chang et al. Face anti-spoofing detection based on multi-scale image quality assessment
Tian et al. Face anti-spoofing by learning polarization cues in a real-world scenario
TWI427545B (en) Face recognition method based on sift features and head pose estimation
TWM592541U (en) Image recognition system
KR101343623B1 (en) adaptive color detection method, face detection method and apparatus
Peng et al. Presentation attack detection based on two-stream vision transformers with self-attention fusion
Erdogmus et al. Spoofing attacks to 2D face recognition systems with 3D masks
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis
Hadiprakoso Face anti-spoofing method with blinking eye and hsv texture analysis
Berbar Skin colour correction and faces detection techniques based on HSL and R colour components
CN111832464A (en) Living body detection method and device based on near-infrared camera
Kryszczuk et al. Color correction for face detection based on human visual perception metaphor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant