CN107103266A - The training of two-dimension human face fraud detection grader and face fraud detection method - Google Patents


Publication number
CN107103266A
CN107103266A (application CN201610098933.8A / CN201610098933A)
Authority
CN
China
Prior art keywords
image
face
fraud detection
dimensional
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610098933.8A
Other languages
Chinese (zh)
Other versions
CN107103266B (en)
Inventor
李松斌 (Li Songbin)
袁海聪 (Yuan Haicong)
邓浩江 (Deng Haojiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanhai Research Station Institute Of Acoustics Chinese Academy Of Sciences
Institute of Acoustics CAS
Original Assignee
Institute of Acoustics CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Acoustics CAS filed Critical Institute of Acoustics CAS
Priority to CN201610098933.8A priority Critical patent/CN107103266B/en
Publication of CN107103266A publication Critical patent/CN107103266A/en
Application granted granted Critical
Publication of CN107103266B publication Critical patent/CN107103266B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for generating a two-dimensional face fraud detection model. The method comprises: first, preprocessing all face pictures in the training set to obtain normalized face images; second, extracting an LBP feature vector, a Gabor wavelet feature vector and a one-dimensional pixel feature vector from each normalized face image; third, splicing these three feature vectors into a final feature vector; fourth, training on the spliced final feature vectors with a support vector machine to obtain the two-dimensional face fraud detection classifier. The method extracts feature information that captures the differences between a real face and a photograph; feature extraction is simple and efficient, requires no deliberate cooperation from the user, and achieves good results even at low resolution. Based on the two-dimensional face fraud detection classifier obtained above, the invention also provides a face fraud detection method, which offers high detection accuracy and can effectively prevent face fraud.

Description

Training of two-dimensional face fraud detection classifier and face fraud detection method
Technical Field
The invention relates to the fields of computer vision and image processing, and in particular to the training of a two-dimensional face fraud detection classifier and a face fraud detection method.
Background
At present, two-dimensional biometric identification (i.e. identification based on two-dimensional face biometric features) is an important research field. Viewing-angle changes, occlusion and complex outdoor lighting have always been difficult points for face recognition. Although much work has addressed these issues, the vulnerability of face recognition systems to spoofing attacks has been overlooked by most systems. A face recognition system that verifies identity from a planar image is easily attacked by spoofing with printed or electronic photographs. For example, Lenovo and Toshiba laptops running Windows XP and Vista ship with built-in webcams and biometric systems that authenticate users by scanning their faces. At the Black Hat conference in 2009 (the leading global technical security conference), the security vulnerability research group of Hanoi University of Technology demonstrated how easily the face recognition systems of several notebooks (Lenovo VeriFace III, Asus SmartLogon V1.0.0005, Toshiba Face Recognition 2.0.2.32) could be fooled into logging in using photographs of legitimate users' faces, even at the highest security level those systems offer. This vulnerability has since been added to the National Vulnerability Database by the National Institute of Standards and Technology. Improving the security and robustness of face recognition systems against spoofing attacks, so they can be put into practice, is therefore an urgent problem.
A spoofing attack is an attempt to gain illegitimate access by masquerading as someone else. For example, an attacker presents a photograph, video, mask or 3D model of a legitimate user in front of the camera to deceive a face recognition system. Although makeup, plastic surgery and other fraudulent methods can also be used, photo attacks are the most common spoofing attacks, because photos of a face can conveniently be obtained through internet downloads and candid shots.
At present, this spoofing-attack vulnerability is attracting increasing attention, and a dedicated competition, the "IJCB 2011 Competition on Counter Measures to 2D Facial Spoofing Attacks", was held in 2011. Although research in this area is growing and several public databases have been released in succession, standard databases that provide relatively objective development tests for fraud detection algorithms remain few, the field is not yet mature, and so far there is no uniform consensus on the best fraud detection algorithms. Existing face fraud detection methods are either too complex to be practical (real-time, fast processing is required in actual use) or rely on unconventional imaging systems (multi-spectral imaging) and high-resolution cameras that are unavailable in practical applications.
Disclosure of Invention
The invention aims to overcome the defects of existing two-dimensional face fraud detection methods by finding the subtle differences between a real face and its photograph and designing a feature space that highlights those differences. In practice, face photos all contain printing quality defects to some degree, and these can be detected well using texture. Inspired by image quality, printed-matter characteristics and differences in light reflection, the invention provides a training method for a two-dimensional face fraud detection classifier. The method extracts feature vectors from a picture in three feature spaces (LBP, Gabor wavelets and pixel features), splices the three feature vectors into a final feature vector, and finally feeds the feature vector to a nonlinear SVM classifier for training, yielding the two-dimensional face fraud detection classifier. Based on this classifier, the invention also provides a face fraud detection method that judges whether an input image is a real face or a fraud image.
In order to achieve the above object, the present invention provides a training method for a two-dimensional face fraud detection classifier, the method comprising: first, preprocessing all face pictures in the training set to obtain normalized face images; second, extracting LBP feature vectors, Gabor wavelet feature vectors and one-dimensional pixel feature vectors from each normalized face image; third, splicing the three feature vectors into a final feature vector; fourth, training on the spliced final feature vectors with a support vector machine to obtain the two-dimensional face fraud detection classifier.
In the above technical solution, the method specifically includes:
step S1) preprocess the i-th face picture in the training set (1 ≤ i ≤ L) to obtain a normalized face image z_i of 64 × 64 pixels;
step S2) extract the LBP feature vector L(z_i) from the normalized face image z_i;
step S3) extract the Gabor wavelet feature vector G(z_i) from the normalized face image z_i;
step S4) scale the normalized face image z_i to 8 × 8 and flatten the two-dimensional image into a one-dimensional pixel feature vector P(z_i);
step S5) splice the three texture features extracted in steps S2), S3) and S4) into a final feature vector D(z_i) = (L(z_i), G(z_i), P(z_i));
step S6) train on all feature vectors D(z_i), 1 ≤ i ≤ L, with a support vector machine regression algorithm to obtain the two-dimensional face fraud detection classifier.
In the above technical solution, the step S1) specifically includes:
step S1-1) convert the face picture to greyscale:
traverse the face picture, read the RGB value of each pixel, extract the red, green and blue components, and compute the converted grey value of each pixel as:
Grey = (9798R + 19235G + 3735B) / 32768
where Grey is the converted grey value and R, G, B are the red, green and blue components of each pixel in the image;
step S1-2) resize the grey image to 64 × 64 using bilinear interpolation;
step S1-3) enhance the resized image:
modify the image histogram using its statistics, adjusting the pixel values so that each grey level occurs with roughly equal probability (histogram equalization), thereby enhancing the image;
step S1-4) extract the pixel matrix from the enhanced image to obtain the normalized face image z_i.
In the above technical solution, the step S2) specifically includes:
step S2-1) apply the LBP_{8,1}^{u2} operator to the normalized face image z_i to obtain an LBP image, divide the LBP image into 3 × 3 overlapping regions, extract a 59-dimensional statistical histogram from each region, and concatenate them into a 531-dimensional statistical histogram feature vector;
step S2-2) apply the LBP_{8,2}^{u2} operator to the normalized face image z_i to extract a 59-dimensional statistical histogram feature vector;
step S2-3) apply the LBP_{16,2}^{u2} operator to the normalized face image z_i to extract a 243-dimensional statistical histogram feature vector;
step S2-4) concatenate the feature vectors obtained in steps S2-1), S2-2) and S2-3) into one feature vector L(z_i); the dimension of the feature vector is 59 × 9 + 59 + 243 = 833.
In the above technical solution, the step S3) specifically includes:
step S3-1) scale the normalized face image z_i to 32 × 32 and apply the Gabor wavelet transform to the scaled image:
process the scaled image with Gabor filters of p different orientations and q different scales; each pixel point t_0 then yields p × q Gabor magnitude features. The p × q magnitude features of a pixel are concatenated into a "Jet", abbreviated J; for the q = 5 scales and p = 8 orientations implied by the indices below, the Jet of pixel t_0 in the image is:
J(t_0) = (M_{0,0}(t_0), ..., M_{0,7}(t_0), ..., M_{4,0}(t_0), ..., M_{4,7}(t_0))
Concatenating the Gabor magnitude features of all pixels gives the feature vector F(z_i) of the face image:
F(z_i) = {J(t_0) : t_0 ∈ z_i}
step S3-2) determine the reduced dimension of the feature vector F(z_i) obtained in step S3-1) and reduce F(z_i) by principal component analysis to obtain the reduced Gabor wavelet feature vector G(z_i).
In the above technical solution, the step S3-2) specifically includes:
step S3-2-1) divide the dimension d of the feature vector F(z_i) obtained in step S3-1) into n equal parts to determine the candidate values of the new dimension d':
the original feature vector F(z_i) has dimension d; dividing d into n equal parts gives the candidate set
{ ⌊d/n⌋, ⌊2d/n⌋, ..., ⌊(n-1)d/n⌋, d }
where ⌊·⌋ denotes rounding down to an integer; the dimension d' of the reduced feature G(z_i) takes each of these n values in turn;
step S3-2-2) let d' take each value in the set in turn, and compute the face fraud detection mean absolute error over all pictures in the training set, giving the set {MAE_m}:
for the L pictures in the training set, the mean absolute error when d' takes the m-th value in the set is
MAE_m = (1/L) Σ_{j=1}^{L} | l_j − l̂_j |
where j indexes the j-th picture in the training set, l_j is the category value of the j-th picture (0 represents a fraud image, 1 represents a real face image), and l̂_j is the estimated category value of the j-th picture; this finally yields the set of values {MAE_m}, m ∈ {1, 2, ..., n};
step S3-2-3) take the minimum MAE_min of the set {MAE_m} and use the d' corresponding to MAE_min as the final reduced dimension;
step S3-2-4) based on the d' obtained in step S3-2-3), reduce the feature vector F(z_i) by principal component analysis to obtain the reduced Gabor wavelet feature vector G(z_i).
In the above technical solution, the step S6) specifically includes:
step S6-1) construct an optimization problem based on the support vector machine regression algorithm;
assume the model training set samples are {x^{(i)}, y^{(i)}} (i = 1, 2, ..., L), where x^{(i)} is the feature vector D(z_i) of the normalized face image z_i and y^{(i)} is the category of the image: face image or fraud image. Assuming the sample dimension is N, x^{(i)} ∈ R^N. The objective of the support vector machine regression algorithm is to solve for the two-dimensional face fraud detection classifier f(x) such that the difference between f(x^{(i)}) and y^{(i)} is no greater than a threshold ε, which controls the maximum error between the actual label value and the predicted estimate. f(x) is defined as:
f(x) = w · x + b   (1)
where "·" is the vector inner product and w and b are the parameters to solve for;
step S6-2) convert optimization problem (1) into its dual problem using the Lagrange multiplier method and solve it to obtain the expression of the two-dimensional face fraud detection classifier f(x).
The invention also provides a face fraud detection method based on the two-dimensional face fraud detection classifier trained by the above method, comprising:
step T1) preprocess the face picture to be detected to obtain a normalized face image z_0 of 64 × 64 pixels;
step T2) extract the LBP feature vector L(z_0) from the normalized face image z_0;
step T3) extract the Gabor wavelet feature vector G(z_0) from the normalized face image z_0;
step T4) scale the normalized face image z_0 to 8 × 8 and flatten the two-dimensional image into a one-dimensional pixel feature vector P(z_0);
step T5) splice the three texture features extracted in steps T2), T3) and T4) into a final feature vector D(z_0) = (L(z_0), G(z_0), P(z_0));
step T6) input the feature vector D(z_0) into the two-dimensional face fraud detection classifier to obtain the detection result: face image or fraud image.
The invention has the advantages that:
1. the invention provides a training method for a two-dimensional face fraud detection classifier; it captures the subtle differences between a real face and its photograph by fusing multiple texture features; it absorbs the complementary properties of two strong texture operators, LBP and Gabor wavelets: LBP carries micro-texture information, Gabor wavelets carry macroscopic information, and the pixel features provide global information, so the trained feature vectors fully extract the information distinguishing a face from a photo;
2. in the training method of the two-dimensional face fraud detection classifier provided by the invention, feature extraction is simple and efficient, requires no deliberate cooperation from the user, and achieves good results even at low resolution; combining the two-dimensional face feature vector with the SVR regression algorithm achieves low error in face fraud detection and meets the requirements of practical application scenarios;
3. the texture features adopted by the invention can also be used for face recognition, providing a unified feature space for face fraud detection and face recognition;
4. the face fraud detection method offers high detection accuracy and can effectively prevent face fraud;
5. the method can be applied in many settings, such as face verification systems and security monitoring.
Drawings
FIG. 1 is a flow chart of a training method of a two-dimensional face fraud detection classifier of the present invention.
Detailed Description
The technical concept related to the present invention will be briefly described below.
The most basic LBP image coding is defined in a 3 × 3 window: the grey value of the window's central pixel is used as a threshold, and each of the 8 neighbouring pixels is compared with it; if a neighbouring pixel value is greater than the central pixel value, that position is marked 1, otherwise 0. The 8 points in the 3 × 3 neighbourhood thus produce, by comparison, an 8-bit binary number (usually converted to a decimal number, the LBP code). In addition, the improved LBP_{P,R} operator allows any number P of sample points in a circular neighbourhood of radius R. The LBP uniform pattern reduces the dimensionality of the LBP feature vector: when the cyclic binary number corresponding to a local binary pattern contains at most two transitions from 0 to 1 or from 1 to 0, that pattern is called a uniform pattern class; for example, 00000000, 11111111 and 10001111 are all uniform pattern classes. All patterns other than the uniform pattern classes are grouped into one additional class, which greatly reduces the number of local binary pattern classes. The uniform pattern operator over P points in a circular neighbourhood of radius R is denoted LBP_{P,R}^{u2}.
Gabor wavelets are widely used in image recognition and image processing, and in pattern recognition the Gabor wavelet transform is also a very effective feature descriptor. The Gabor wavelet is sensitive to image edges, provides good direction and scale selectivity, and is insensitive to illumination change, adapting well to varying lighting. Meanwhile, the two-dimensional Gabor function can enhance information at key parts of the face (eyes, nose, mouth, etc.), making it possible to strengthen local characteristics while preserving the overall face information. In the spatial domain, a 2-dimensional Gabor filter is the product of a sinusoidal plane wave and a Gaussian kernel function; it achieves optimal localization in the spatial and frequency domains simultaneously and closely resembles human biological visual characteristics, so it describes well the local structural information corresponding to spatial frequency (scale), spatial position and orientation selectivity. In essence, the Gabor wavelet transform uses a Gaussian function as the window function in order to extract local information of the signal's Fourier transform; since the Fourier transform of a Gaussian is also a Gaussian, the inverse Fourier transform is likewise local. By choosing the frequency and Gaussian-function parameters, the Gabor transform can select the feature information of many parts. After an input face image undergoes the Gabor wavelet transform, a cluster of Gabor wavelet response images is obtained.
The principle of Principal Component Analysis (PCA) dimensionality reduction is to remove the correlation among data components in the original data, discard redundant information and retain the principal components. PCA computes the eigenvectors corresponding to the largest eigenvalues of the covariance matrix of the original data, obtains the corresponding subspace, and projects the features onto that subspace, so that the sample space can be described by a small number of features. When a support vector machine solves a classification problem (typically binary classification), it seeks an optimal separating hyperplane under the structural risk minimization criterion, dividing the samples into two parts so that the margin between the different classes is maximized. Unlike classification, where discrete integer values usually represent the class of a sample, the label of each sample in a regression problem is a continuous real number. Support Vector Regression (SVR) therefore aims to find a hyperplane that accurately predicts the distribution of the samples and approximates the sample data.
The invention will now be further described with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a method for training a two-dimensional face fraud detection classifier includes:
step S1) preprocess the i-th (1 ≤ i ≤ L) face picture in the training set to obtain the normalized face image z_i (1 ≤ i ≤ L) of 64 × 64 pixels; specifically:
step S1-1) convert the i-th (1 ≤ i ≤ L) face picture to greyscale:
the face pictures comprise genuinely captured face pictures and secondary-imaging (counterfeit) face pictures.
Traverse the face image, read the RGB value of each pixel, extract the red, green and blue components, and, according to the different sensitivity of the human eye to the three colours, compute the grey value with the conversion formula:
Grey = (9798R + 19235G + 3735B) / 32768
where Grey is the converted grey value and R, G, B are the red, green and blue components of each pixel in the image;
step S1-2) resize the grey image to 64 × 64 using bilinear interpolation;
step S1-3) enhance the resized image:
modify the image histogram using its statistics, adjusting the pixel values so that each grey level occurs with roughly equal probability (histogram equalization), thereby enhancing the image;
step S1-4) extract the pixel matrix from the enhanced image to obtain the normalized face image z_i (1 ≤ i ≤ L).
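The preprocessing chain of step S1) can be sketched in numpy as follows: the integer-weight grey conversion, bilinear resizing to 64 × 64, and histogram equalization. Function names and the random test image are illustrative assumptions, not part of the patent.

```python
import numpy as np

def to_grey(rgb):
    # Integer-weight grey conversion from step S1-1:
    # Grey = (9798*R + 19235*G + 3735*B) / 32768
    r = rgb[..., 0].astype(np.int64)
    g = rgb[..., 1].astype(np.int64)
    b = rgb[..., 2].astype(np.int64)
    return ((9798 * r + 19235 * g + 3735 * b) // 32768).astype(np.uint8)

def resize_bilinear(img, out_h=64, out_w=64):
    # Minimal bilinear interpolation resize (step S1-2)
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    f = img.astype(float)
    top = f[np.ix_(y0, x0)] * (1 - wx) + f[np.ix_(y0, x1)] * wx
    bot = f[np.ix_(y1, x0)] * (1 - wx) + f[np.ix_(y1, x1)] * wx
    return (top * (1 - wy) + bot * wy).astype(np.uint8)

def equalize_hist(img):
    # Histogram equalization (step S1-3): remap grey levels so they
    # occur with roughly equal probability.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cmin = cdf[cdf > 0].min()
    lut = np.round((cdf - cmin) / (cdf[-1] - cmin + 1e-12) * 255).astype(np.uint8)
    return lut[img]

rgb = np.random.default_rng(0).integers(0, 256, (120, 90, 3)).astype(np.uint8)
z = equalize_hist(resize_bilinear(to_grey(rgb)))
print(z.shape)  # (64, 64)
```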
Step S2) extract the LBP feature vector L(z_i) from the normalized face image z_i;
the step S2) specifically includes:
step S2-1) apply the LBP_{8,1}^{u2} operator to the normalized face image z_i to obtain an LBP image, divide the LBP image into 3 × 3 overlapping regions, extract a 59-dimensional statistical histogram from each region, and finally concatenate them into a 531-dimensional statistical histogram feature vector;
here LBP_{8,1}^{u2} denotes the uniform pattern operator over 8 points in the circular neighbourhood of radius 1.
step S2-2) apply the LBP_{8,2}^{u2} operator to the normalized face image z_i to extract a 59-dimensional statistical histogram feature vector;
here LBP_{8,2}^{u2} denotes the uniform pattern operator over 8 points in the circular neighbourhood of radius 2.
step S2-3) apply the LBP_{16,2}^{u2} operator to the normalized face image z_i to extract a 243-dimensional statistical histogram feature vector;
here LBP_{16,2}^{u2} denotes the uniform pattern operator over 16 points in the circular neighbourhood of radius 2.
step S2-4) concatenate the feature vectors obtained in steps S2-1), S2-2) and S2-3) into one feature vector L(z_i); the dimension of L(z_i) is 59 × 9 + 59 + 243 = 833.
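As a rough illustration of step S2), the sketch below computes the basic 8-neighbour LBP code and concatenates per-region histograms. It deliberately simplifies the patent's operators: it uses plain 256-bin histograms over non-overlapping regions rather than the 59-bin uniform-pattern (u2) histograms over overlapping regions, so its dimensions differ from the 833 stated above.

```python
import numpy as np

def lbp_basic(img):
    # Basic 8-neighbour LBP code for each interior pixel:
    # a neighbour >= centre contributes a 1-bit at its position.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def regional_histograms(lbp_img, grid=3):
    # Split the LBP image into grid x grid regions and concatenate
    # the normalized per-region histograms into one feature vector.
    h, w = lbp_img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = lbp_img[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid]
            hist = np.bincount(block.ravel(), minlength=256).astype(float)
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.uint8)
v = regional_histograms(lbp_basic(img))
print(v.shape)  # (2304,) = 9 regions x 256 bins
```

A faithful implementation would instead map each code to its uniform-pattern bin (59 bins for 8 sample points, 243 for 16), which is what yields the 531 + 59 + 243 = 833 dimensions in the text.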
Step S3) extract the Gabor wavelet feature vector G(z_i) from the normalized face image;
the step S3) specifically includes:
step S3-1) scale the normalized face image z_i to 32 × 32 and apply the Gabor wavelet transform to the scaled image:
process the scaled image with Gabor filters of p different orientations and q different scales; each pixel point t_0 then yields p × q Gabor magnitude features, which are concatenated into what is generally called a "Jet", denoted J; for the q = 5 scales and p = 8 orientations implied by the indices below, the Jet of pixel t_0 in the image is:
J(t_0) = (M_{0,0}(t_0), ..., M_{0,7}(t_0), ..., M_{4,0}(t_0), ..., M_{4,7}(t_0))
Concatenating the Gabor magnitude features of all pixels gives the feature vector F(z_i) of the face image:
F(z_i) = {J(t_0) : t_0 ∈ z_i}
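Step S3-1) can be sketched as below: a bank of q × p Gabor kernels is applied to a 32 × 32 image and the magnitude responses of every pixel are concatenated into the vector F(z_i). The kernel parameterization (size, sigma, carrier frequency) is an assumption for illustration; the patent does not specify it.

```python
import numpy as np

def gabor_kernel(scale, orientation, size=11, sigma=2.0):
    # Simple complex Gabor kernel: Gaussian envelope times a sinusoid.
    # 'scale' controls the carrier frequency, 'orientation' its direction.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    theta = orientation * np.pi / 8.0
    freq = (np.pi / 2) / (np.sqrt(2) ** scale)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.exp(1j * freq * xr)

def gabor_jets(img, q_scales=5, p_orients=8):
    # Magnitude responses M_{scale,orient}(t) for every pixel t,
    # concatenated per pixel into a jet, then all jets into one vector.
    mags = []
    for s in range(q_scales):
        for o in range(p_orients):
            k = gabor_kernel(s, o)
            # FFT-based zero-padded linear convolution, cropped to 'same'
            shape = (img.shape[0] + k.shape[0] - 1, img.shape[1] + k.shape[1] - 1)
            fh = np.fft.fft2(img, s=shape)
            fk = np.fft.fft2(k, s=shape)
            full = np.fft.ifft2(fh * fk)
            half = k.shape[0] // 2
            mags.append(np.abs(full[half:half + img.shape[0],
                                    half:half + img.shape[1]]))
    return np.stack(mags, axis=-1).reshape(-1)

img = np.random.default_rng(2).random((32, 32))
F = gabor_jets(img)
print(F.shape)  # (40960,) = 32*32*40
```

With 5 scales and 8 orientations each of the 32 × 32 pixels contributes a 40-dimensional jet, giving the 40960-dimensional F(z_i) that step S3-2) then reduces by PCA.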
Step S3-2) determine the reduced dimension of the feature vector F(z_i) obtained in step S3-1) and reduce F(z_i) by principal component analysis to obtain the reduced Gabor wavelet feature vector G(z_i); specifically:
step S3-2-1) divide the dimension d of the feature vector F(z_i) obtained in step S3-1) into n equal parts to determine the candidate values of the new dimension d':
the original feature vector F(z_i) has dimension d; dividing d into n equal parts gives the candidate set
{ ⌊d/n⌋, ⌊2d/n⌋, ..., ⌊(n-1)d/n⌋, d }
where ⌊·⌋ denotes rounding down to an integer; the dimension d' of the reduced feature G(z_i) takes each of these n values in turn;
step S3-2-2) let d' take each value in the set in turn, and compute the face fraud detection mean absolute error over all pictures in the training set, giving the set {MAE_m}:
for the L pictures in the training set, the mean absolute error when d' takes the m-th value in the set is
MAE_m = (1/L) Σ_{j=1}^{L} | l_j − l̂_j |
where j indexes the j-th picture in the training set, l_j is the category value of the j-th picture (0 represents a fraud image, 1 represents a real face image), and l̂_j is the estimated category value of the j-th picture; this finally yields the set of values {MAE_m}, m ∈ {1, 2, ..., n};
step S3-2-3) take the minimum MAE_min of the set {MAE_m} and use the d' corresponding to MAE_min as the final reduced dimension;
step S3-2-4) based on the d' obtained in step S3-2-3), reduce the feature vector F(z_i) by principal component analysis to obtain the reduced Gabor wavelet feature vector G(z_i).
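Steps S3-2-1) to S3-2-4) amount to a small grid search over candidate PCA dimensions. The sketch below uses SVD-based PCA and a placeholder error function where a real pipeline would train and evaluate the SVR; the toy data and `dummy_mae` are illustrative assumptions, not part of the patent.

```python
import numpy as np

def pca_project(X, d_new):
    # PCA via SVD of the centred data matrix: keep the top d_new components.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d_new].T

def pick_dimension(X, labels, mae_of, n=4):
    # Candidate dimensions floor(m*d/n), m = 1..n (step S3-2-1);
    # keep the one with the lowest mean absolute error (steps S3-2-2/3).
    d = X.shape[1]
    candidates = [max(1, (m * d) // n) for m in range(1, n + 1)]
    maes = [mae_of(pca_project(X, c), labels) for c in candidates]
    return candidates[int(np.argmin(maes))]

rng = np.random.default_rng(3)
X = rng.random((50, 20))
y = rng.integers(0, 2, 50).astype(float)

def dummy_mae(Z, y):
    # Stand-in error measure: a real pipeline would train and evaluate
    # the SVR on the reduced features here.
    pred = (Z[:, 0] > np.median(Z[:, 0])).astype(float)
    return np.mean(np.abs(pred - y))

best = pick_dimension(X, y, dummy_mae)
print(best in [5, 10, 15, 20])  # True
```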
Step S4) scale the normalized face image z_i to 8 × 8 and flatten the two-dimensional image into a one-dimensional pixel feature vector P(z_i);
Step S5) splice the three texture features extracted in steps S2), S3) and S4) into a final feature vector D(z_i) = (L(z_i), G(z_i), P(z_i));
Step S6) train on all feature vectors D(z_i), 1 ≤ i ≤ L, with the support vector machine regression algorithm to obtain the two-dimensional face fraud detection classifier; specifically:
step S6-1) construct an optimization problem based on the support vector machine regression algorithm;
assume the model training set samples are {x^{(i)}, y^{(i)}} (i = 1, 2, ..., L), where x^{(i)} is the feature vector D(z_i) of the normalized face image z_i and y^{(i)} is the category of the image: face image or fraud image. Assuming the sample dimension is N, x^{(i)} ∈ R^N. The objective of SVR is to solve for the two-dimensional face fraud detection classifier f(x) such that the difference between f(x^{(i)}) and y^{(i)} is no greater than ε, where ε is a very small number that controls the maximum error between the actual label value and the predicted estimate. f(x) is defined as:
f(x) = w · x + b   (1)
where "·" is the vector inner product and w and b are the parameters to solve for; the solved w should minimize ||w||^2; this hyperplane model is generally called ε-SVR. The optimization problem of ε-SVR can then be expressed as:
min_{w,b} (1/2)||w||^2
s.t. |w · x^{(i)} + b − y^{(i)}| ≤ ε, i = 1, 2, ..., L   (2)
ε-SVR introduces a penalty coefficient C and slack variables ξ_i, ξ_i* for adjustment:
min_{w,b,ξ,ξ*} (1/2)||w||^2 + C Σ_{i=1}^{L} (ξ_i + ξ_i*)
s.t. w · x^{(i)} + b − y^{(i)} ≤ ε + ξ_i   (3)
y^{(i)} − w · x^{(i)} − b ≤ ε + ξ_i*
ξ_i, ξ_i* ≥ 0, i = 1, 2, ..., L
where the ε-insensitive loss function is
|ξ|_ε = 0 if |ξ| ≤ ε, and |ξ| − ε otherwise;
in this embodiment, the penalty coefficient C in the SVR parameters is set to 128, the learning parameter g is set to 0.1, and an RBF (radial Gaussian basis) kernel function is used for training;
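Assuming an off-the-shelf ε-SVR is acceptable, the training of step S6) with the quoted parameters (RBF kernel, C = 128, g = 0.1) might look like the following scikit-learn sketch on toy stand-in features; the 0.5 decision threshold is an assumption, since the text only says the regression output indicates face image or fraud image.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
# Toy stand-ins for the spliced feature vectors D(z_i); labels 0 = spoof, 1 = live.
X = rng.random((80, 16))
y = (X[:, 0] + 0.05 * rng.standard_normal(80) > 0.5).astype(float)

# RBF-kernel epsilon-SVR with the parameters quoted in the text: C=128, gamma=0.1
clf = SVR(kernel="rbf", C=128, gamma=0.1)
clf.fit(X, y)

# The continuous regression output is thresholded to obtain the
# live/spoof decision; 0.5 is an assumed cut-off.
pred = (clf.predict(X) >= 0.5).astype(float)
print(float(np.mean(pred == y)))
```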
step S6-2) converting the optimization solving problem by using a Lagrange multiplier method into a dual problem for solving the dual problem to obtain an expression of a two-dimensional face fraud detection classifier f (x);
the following lagrange function was introduced:
whereinRepresentation αiAndrepresentation ηiAndand αiηiAndthe solution of equation (9) belongs to the category of convex quadratic programming problem, and by first solving the minimum value transformation of the L (w, α, b) function pair w, b, ξ, the 'saddle point' is solved, which satisfies that the partial derivatives of the L (w, α, b) function pair w, b, ξ are 0 respectively:
can obtainTaken back in equation (9) to obtain:
thus, solving the optimal solution problem of equation (9) translates into solving the following dual problem:
thus, the problem is transformed into an optimization problem comprising only one α parameter, and after obtaining the value of α, the corresponding w is found, and the final f (x) is:
according to the KKT conditions, the solution of the dual problem is equivalent to that of the original problem only when the dual problem of ε-SVR satisfies the following conditions:
α_i(ε + ξ_i − y^(i) + w·x^(i) + b) = 0
α_i*(ε + ξ_i* + y^(i) − w·x^(i) − b) = 0
(C − α_i)ξ_i = 0    (10)
(C − α_i*)ξ_i* = 0
it can be seen that when α_i = C, ξ_i ≠ 0; at this point the sample point falls outside the ε-band (i.e., it is an outlier); when 0 < α_i < C, however, ξ_i = 0, so the value of b satisfies:
b = y^(i) − w·x^(i) − ε (and similarly b = y^(i) − w·x^(i) + ε when 0 < α_i* < C)
it can also be seen from the KKT conditions that for sample points with |f(x^(i)) − y^(i)| = ε + ξ_i^(*), the corresponding α_i^(*) is not equal to 0; these sample points are the support vectors.
When a kernel function K(x^(i), x) is used, the detection function f(x) becomes:
f(x) = Σ_{i=1}^{m} (α_i − α_i*) K(x^(i), x) + b
after the SVR model is obtained through training, only the points corresponding to the support vectors determine the predicted value of regression.
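The remark above (only the support vectors determine the prediction) can be checked numerically: scikit-learn exposes the dual coefficients α_i − α_i* as `dual_coef_`, so the kernel expansion f(x) = Σ(α_i − α_i*)K(x^(i), x) + b can be rebuilt by hand and compared with `predict`. The data here is synthetic.

```python
# Verifying the support-vector expansion f(x) = sum_i (a_i - a_i*) K(x^(i), x) + b:
# scikit-learn stores (a_i - a_i*) in dual_coef_ and the x^(i) in support_vectors_.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 16))
y = rng.normal(size=80)

model = SVR(kernel="rbf", C=128, gamma=0.1, epsilon=0.1).fit(X, y)

x_new = rng.normal(size=(1, 16))
K = rbf_kernel(x_new, model.support_vectors_, gamma=0.1)    # K(x, x^(i))
manual = (K @ model.dual_coef_.ravel()) + model.intercept_  # kernel expansion
assert np.allclose(manual, model.predict(x_new))            # matches predict()
```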
The invention also provides a two-dimensional face fraud detection method based on the two-dimensional face fraud detection classifier obtained by the training of the method, and the method comprises the following steps:
step T1) preprocessing the human face picture to be detected, collected by the camera, to obtain a normalized face image z_0 of 64 × 64 pixel size;
step T2) extracting the LBP feature vector L(z_0) from the normalized face image z_0;
step T3) extracting the Gabor wavelet feature vector G(z_0) from the normalized face image z_0;
step T4) scaling the normalized face image z_0 to 8 × 8 size, and converting the two-dimensional image structure into a one-dimensional pixel feature vector P(z_0);
step T5) splicing the three texture features extracted in steps T2), T3) and T4) into the final feature vector D(z_0) = (L(z_0), G(z_0), P(z_0));
step T6) inputting the feature vector D(z_0) obtained in step T5) into the two-dimensional face fraud detection classifier f(x) obtained in step S6) to obtain the detection result: a real face image or a fraud image.
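The steps T1) to T6) above can be sketched as a small pipeline. The three extractors below are hypothetical placeholders (extract_lbp, extract_gabor and the d'=100 Gabor dimension are illustrative, not from the patent); only the splicing order and the 833-/64-dimension bookkeeping follow the text.

```python
# Skeleton of the detection pipeline T1)-T6); the extractors are stand-ins.
import numpy as np

def extract_lbp(img):                 # stand-in for step T2): 833-dim LBP histogram
    return np.zeros(833)

def extract_gabor(img, d_prime=100):  # stand-in for step T3); d'=100 is illustrative
    return np.zeros(d_prime)

def extract_pixels(img):              # step T4): 8x8 rescale flattened to 64 dims
    h, w = img.shape
    small = img[::h // 8, ::w // 8][:8, :8]   # crude nearest-neighbour shrink
    return small.ravel().astype(float)

def detect(img, classifier):
    # step T5): splice the three texture features into D(z_0)
    d = np.concatenate([extract_lbp(img), extract_gabor(img), extract_pixels(img)])
    score = classifier(d)                      # step T6): SVR regression output
    return "real face" if score >= 0.5 else "fraud"

z0 = np.zeros((64, 64))                        # normalized face image from T1)
result = detect(z0, classifier=lambda d: 1.0)  # dummy classifier for the sketch
```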
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A method of training a two-dimensional face fraud detection classifier, the method comprising: firstly, preprocessing all face pictures in a training set to obtain normalized face images; secondly, extracting an LBP feature vector, a Gabor wavelet feature vector and a one-dimensional pixel feature vector from each normalized face image; thirdly, splicing the three feature vectors to form a final feature vector; fourthly, training the spliced final feature vectors with a support vector machine to obtain the two-dimensional face fraud detection classifier.
2. The training method of the two-dimensional face fraud detection classifier according to claim 1, characterized in that the method specifically comprises:
step S1) preprocessing the i-th face picture in the training set, wherein 1 ≤ i ≤ L, to obtain a normalized face image z_i of 64 × 64 pixel size;
step S2) extracting the LBP feature vector L(z_i) from the normalized face image z_i;
step S3) extracting the Gabor wavelet feature vector G(z_i) from the normalized face image z_i;
step S4) scaling the normalized face image z_i to 8 × 8 size, and converting the two-dimensional image structure into a one-dimensional pixel feature vector P(z_i);
step S5) splicing the three texture features extracted in steps S2), S3) and S4) into the final feature vector D(z_i) = (L(z_i), G(z_i), P(z_i));
step S6) training all the feature vectors D(z_i), 1 ≤ i ≤ L, based on the support vector machine regression algorithm to obtain the two-dimensional face fraud detection classifier.
3. The training method of the two-dimensional face fraud detection classifier according to claim 2, wherein the step S1) specifically includes:
step S1-1), carrying out image graying processing on the face picture:
traversing the face picture, processing each pixel point to obtain the RGB value of each pixel, extracting the red, green and blue components respectively, and calculating the converted gray value of each pixel:
Grey = (9798R + 19235G + 3735B)/32768
wherein Grey represents the converted gray value, and R, G, B represent the red, green and blue components of each pixel point in the image respectively;
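The fixed-point weights in the formula above sum to exactly 32768 (9798 + 19235 + 3735 = 32768), so the integer division is an exact fixed-point version of the usual luma weights (≈0.299, 0.587, 0.114). A direct transcription:

```python
# The integer grayscale formula: fixed-point weights summing to 32768
# approximate the usual 0.299R + 0.587G + 0.114B luma conversion without floats.
def to_grey(r, g, b):
    return (9798 * r + 19235 * g + 3735 * b) // 32768

# A pure grey input maps to itself, since the weights sum to exactly 32768:
assert to_grey(100, 100, 100) == 100
```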
step S1-2) adjusting the size of the gray image to 64 x 64 by adopting a bilinear interpolation method;
step S1-3) enhancing the image after size adjustment:
modifying the image histogram by using its statistical data, and adjusting the pixel values of each gray level so that all gray levels occur with approximately equal probability (histogram equalization), thereby realizing image enhancement;
step S1-4) extracting the image pixel matrix from the enhanced image to obtain the normalized face image z_i.
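Step S1-3) describes histogram equalization. A minimal numpy sketch of that mapping follows; the low-contrast ramp test image is illustrative.

```python
# Minimal numpy sketch of the histogram equalization described in step S1-3):
# the cumulative histogram is used as a lookup table so that gray levels occur
# with roughly equal probability after remapping.
import numpy as np

def equalize(img):
    """img: 2-D uint8 array; returns the equalized uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first nonzero CDF value
    # classic equalization mapping, scaled back to [0, 255]
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast ramp spreads out to use the full range after equalization.
img = np.tile(np.arange(100, 140, dtype=np.uint8), (40, 1))
out = equalize(img)
```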
4. The training method of the two-dimensional face fraud detection classifier according to claim 3, wherein the step S2) specifically comprises:
step S2-1) applying the LBP_{8,1}^{u2} operator to the normalized face image z_i to obtain an LBP image, dividing the LBP image into 3 × 3 overlapped regions, extracting a 59-dimensional statistical histogram from each region respectively, and synthesizing a 531-dimensional statistical histogram feature vector;
step S2-2) applying the LBP_{8,2}^{u2} operator to the normalized face image z_i to extract a 59-dimensional statistical histogram feature vector;
step S2-3) applying the LBP_{16,2}^{u2} operator to the normalized face image z_i to extract a 243-dimensional statistical histogram feature vector;
step S2-4) synthesizing the feature vectors obtained in step S2-1), step S2-2) and step S2-3) into one feature vector L(z_i), the dimension of which is 59 × 9 + 59 + 243 = 833.
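The histogram dimensions in claim 4 are consistent with "uniform" LBP codes (at most two 0/1 transitions around the circle of P sampling points): P(P−1)+2 uniform patterns plus one bin for all non-uniform codes gives 59 bins for P=8 and 243 bins for P=16. A small check:

```python
# Counting "uniform" LBP codes: a code is uniform if its circular bit pattern
# has at most 2 transitions between 0 and 1. The bin counts 59 and 243 quoted
# in the claim follow from P*(P-1)+2 uniform patterns plus one non-uniform bin.
def uniform_count(P):
    def transitions(code):
        bits = [(code >> i) & 1 for i in range(P)]
        return sum(bits[i] != bits[(i + 1) % P] for i in range(P))
    return sum(1 for code in range(2 ** P) if transitions(code) <= 2)

bins_p8 = uniform_count(8) + 1     # 58 uniform + 1 non-uniform bin = 59
bins_p16 = uniform_count(16) + 1   # 242 uniform + 1 non-uniform bin = 243
total = bins_p8 * 9 + bins_p8 + bins_p16   # 59*9 + 59 + 243 = 833
```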
5. The training method of the two-dimensional face fraud detection classifier according to claim 3, wherein the step S3) specifically comprises:
step S3-1) scaling the normalized face image z_i to 32 × 32 size, and performing Gabor wavelet transform on the scaled image:
processing the zoomed image with Gabor filters of p different directions and q different scales, wherein each pixel point t_0 yields p × q Gabor amplitude features; the p × q Gabor amplitude features are cascaded into a "Jet", abbreviated as J; then the Jet of pixel point t_0 in the image is:
J(t_0) = (M_{0,0}(t_0), ..., M_{0,7}(t_0), ..., M_{4,0}(t_0), ..., M_{4,7}(t_0))
cascading the Gabor amplitude features of all pixel points yields the feature vector F(z_i) of the face image:
F(z_i) = {J(t_0) : t_0 ∈ z_i}
step S3-2) determining the reduced dimension of the feature vector F(z_i) obtained in step S3-1), and reducing the dimension of F(z_i) based on principal component analysis to obtain the reduced Gabor wavelet feature vector G(z_i).
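The Jet construction can be sketched with a bank of p=8 orientations and q=5 scales, which matches the index ranges M_{0..4,0..7} above and gives 32 × 32 × 40 = 40960 dimensions for F(z_i). The Gabor kernel parameters below are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the Jet construction in step S3-1): a bank of 5 scales x 8
# orientations applied to the 32x32 image; each pixel contributes 40 amplitude
# features, so F(z_i) has 32*32*40 = 40960 dimensions. Kernel parameters are
# illustrative.
import numpy as np

def gabor_kernel(scale, theta, size=9):
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    sigma, lam = 2.0 * 1.4 ** scale, 4.0 * 1.4 ** scale   # assumed parameters
    env = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) # Gaussian envelope
    return env * np.exp(1j * 2 * np.pi * xr / lam)        # complex carrier

def filter_image(img, kern):
    # FFT-based circular convolution keeps the output the same size as img
    f = np.fft.fft2(img)
    k = np.fft.fft2(kern, s=img.shape)
    return np.fft.ifft2(f * k)

img = np.random.default_rng(2).random((32, 32))           # stands in for z_i
jets = [np.abs(filter_image(img, gabor_kernel(s, o * np.pi / 8)))
        for s in range(5) for o in range(8)]              # amplitude features
F = np.stack(jets, axis=-1).ravel()                       # one 40-dim Jet per pixel
```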
6. The training method of the two-dimensional face fraud detection classifier according to claim 5, wherein the step S3-2) specifically comprises:
step S3-2-1) dividing the dimension d of the feature vector F(z_i) obtained in step S3-1) into n equal parts, and determining the value range of the new dimension d';
the dimension of the original feature vector F(z_i) is d; the value of d is divided into n equal parts, giving the value set:
{⌊d/n⌋, ⌊2d/n⌋, ..., ⌊(n−1)d/n⌋, d}
wherein ⌊·⌋ represents the integer rounding (floor) operation;
the feature obtained after dimension reduction is G(z_i), and its dimension d' takes these n values in turn;
step S3-2-2) d' sequentially takes each value in the set, and the face fraud detection mean absolute error set {MAE_m} corresponding to all pictures in the training set is calculated;
for the L pictures in the training set, when d' takes the k-th value in the set, the mean absolute error of face fraud detection over all pictures in the training set is calculated as:
MAE_k = (1/L) Σ_{j=1}^{L} |l_j − l̂_j|
wherein j represents the j-th picture in the training set, and k represents the k-th value in the sampling set of d'; l_j represents the category value corresponding to the j-th picture in the training set: 0 represents a fraud image, and 1 represents a real face image; l̂_j represents the category estimation value of the j-th picture in the training set; finally, the set of MAE values {MAE_m}, m ∈ {1, 2, ..., n}, is obtained;
step S3-2-3) taking the minimum value MAE_min of the set {MAE_m}, and using the d' corresponding to MAE_min as the final reduced dimension;
step S3-2-4) reducing the dimension of the feature vector F(z_i) by principal component analysis based on the d' obtained in step S3-2-3) to obtain the reduced Gabor wavelet feature vector G(z_i).
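The dimension search of steps S3-2-1) to S3-2-3) can be sketched with numpy alone: PCA via SVD, and an ordinary least-squares predictor standing in for the SVR when computing each MAE_k. The data, n=6 and the predictor are stand-ins, not the patent's configuration.

```python
# Sketch of the candidate-dimension search: try d' = floor(k*d/n), reduce with
# PCA (via SVD), score each d' by mean absolute error, keep the best one.
# A least-squares fit stands in for the SVR estimator.
import numpy as np

rng = np.random.default_rng(3)
L_pics, d, n = 120, 60, 6
F = rng.normal(size=(L_pics, d))               # stand-in Gabor features F(z_i)
labels = (F[:, 0] > 0).astype(float)           # 0 = fraud, 1 = real face

candidates = [max(1, (k * d) // n) for k in range(1, n + 1)]  # last value is d

# Center the data and get the principal axes once via SVD.
Fc = F - F.mean(axis=0)
_, _, Vt = np.linalg.svd(Fc, full_matrices=False)

maes = []
for dp in candidates:
    G = Fc @ Vt[:dp].T                         # PCA reduction to dp dimensions
    w, *_ = np.linalg.lstsq(G, labels - labels.mean(), rcond=None)
    est = G @ w + labels.mean()                # stand-in for the SVR estimate
    maes.append(np.abs(labels - est).mean())   # MAE_k over the training set

best_dim = candidates[int(np.argmin(maes))]    # d' with the smallest MAE
```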
7. The training method of the two-dimensional face fraud detection classifier according to claim 3, wherein the step S6) specifically comprises:
step S6-1) constructing an optimization problem based on a support vector machine regression algorithm;
assuming the model training set samples are {x^(i), y^(i)} (i = 1, 2, ..., L), x^(i) represents the feature vector D(z_i) of the normalized face image z_i, and y^(i) indicates the category corresponding to the image: a real face image or a fraud image; assuming the sample dimension is N, then x^(i) ∈ R^N; the objective of the support vector machine regression algorithm is to solve the two-dimensional face fraud detection classifier f(x) such that the difference between f(x^(i)) and y^(i) is not more than the threshold ε, thereby controlling the maximum error between the actual label value and the predicted estimation value; f(x) is then defined as follows:
f(x) = w·x + b
s.t. ∀x^(i), |f(x^(i)) − y^(i)| ≤ ε    (1)
wherein "·" represents the vector inner product, and w and b are the parameters to be solved;
and step S6-2) converting the optimization solving problem (1) into a dual problem by using the Lagrange multiplier method, and solving it to obtain the expression of the two-dimensional face fraud detection classifier f(x).
8. A face fraud detection method implemented on the basis of a two-dimensional face fraud detection classifier trained by the method of any one of claims 1 to 7, the method comprising:
step T1) preprocessing the face picture to be detected to obtain a normalized face image z_0 of 64 × 64 pixel size;
step T2) extracting the LBP feature vector L(z_0) from the normalized face image z_0;
step T3) extracting the Gabor wavelet feature vector G(z_0) from the normalized face image z_0;
step T4) scaling the normalized face image z_0 to 8 × 8 size, and converting the two-dimensional image structure into a one-dimensional pixel feature vector P(z_0);
step T5) splicing the three texture features extracted in steps T2), T3) and T4) into the final feature vector D(z_0) = (L(z_0), G(z_0), P(z_0));
step T6) inputting the feature vector D(z_0) obtained in step T5) into the two-dimensional face fraud detection classifier to obtain the detection result: a real face image or a fraud image.
CN201610098933.8A 2016-02-23 2016-02-23 The training of two-dimension human face fraud detection classifier and face fraud detection method Active CN107103266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610098933.8A CN107103266B (en) 2016-02-23 2016-02-23 The training of two-dimension human face fraud detection classifier and face fraud detection method


Publications (2)

Publication Number Publication Date
CN107103266A true CN107103266A (en) 2017-08-29
CN107103266B CN107103266B (en) 2019-08-20

Family

ID=59658429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610098933.8A Active CN107103266B (en) 2016-02-23 2016-02-23 The training of two-dimension human face fraud detection classifier and face fraud detection method

Country Status (1)

Country Link
CN (1) CN107103266B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021900A (en) * 2007-03-15 2007-08-22 上海交通大学 Method for making human face posture estimation utilizing dimension reduction method
CN100492399C (en) * 2007-03-15 2009-05-27 上海交通大学 Method for making human face posture estimation utilizing dimension reduction method
CN105095833A (en) * 2014-05-08 2015-11-25 中国科学院声学研究所 Network constructing method for human face identification, identification method and system
CN105117688A (en) * 2015-07-29 2015-12-02 重庆电子工程职业学院 Face identification method based on texture feature fusion and SVM

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
J. MAATTA et al.: "Face spoofing detection from single images using texture and local shape analysis", IET Biometrics *
J. MAATTA et al.: "Face Spoofing Detection From Single Images Using Micro-Texture Analysis", 2011 International Joint Conference on Biometrics *
WU Yangbo: "Research and Implementation of an Age Estimation Model Algorithm Based on Facial Image Feature Representation", China Master's Theses Full-text Database, Information Science and Technology *
LU Li: "Research on Gender Recognition and Age Estimation Based on Face Images", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704834A (en) * 2017-10-13 2018-02-16 上海壹账通金融科技有限公司 Householder method, device and storage medium are examined in micro- expression face
CN107704834B (en) * 2017-10-13 2021-03-30 深圳壹账通智能科技有限公司 Micro-surface examination assisting method, device and storage medium
CN108038413A (en) * 2017-11-02 2018-05-15 平安科技(深圳)有限公司 Cheat probability analysis method, apparatus and storage medium
WO2019085331A1 (en) * 2017-11-02 2019-05-09 平安科技(深圳)有限公司 Fraud possibility analysis method, device, and storage medium
CN107886070A (en) * 2017-11-10 2018-04-06 北京小米移动软件有限公司 Verification method, device and the equipment of facial image
CN108009531A (en) * 2017-12-28 2018-05-08 北京工业大学 A kind of face identification method of more tactful antifraud
CN108009531B (en) * 2017-12-28 2022-01-07 北京工业大学 Multi-strategy anti-fraud face recognition method
CN109558794A (en) * 2018-10-17 2019-04-02 平安科技(深圳)有限公司 Image-recognizing method, device, equipment and storage medium based on moire fringes
WO2020097834A1 (en) * 2018-11-14 2020-05-22 北京比特大陆科技有限公司 Feature processing method and apparatus, storage medium and program product
CN112868019A (en) * 2018-11-14 2021-05-28 北京比特大陆科技有限公司 Feature processing method and device, storage medium and program product
CN111428666A (en) * 2020-03-31 2020-07-17 齐鲁工业大学 Intelligent family accompanying robot system and method based on rapid face detection
CN113743365A (en) * 2021-09-17 2021-12-03 支付宝(杭州)信息技术有限公司 Method and device for detecting fraudulent behavior in face recognition process

Also Published As

Publication number Publication date
CN107103266B (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN107103266B (en) The training of two-dimension human face fraud detection classifier and face fraud detection method
Jourabloo et al. Face de-spoofing: Anti-spoofing via noise modeling
Peng et al. Face presentation attack detection using guided scale texture
Qiu et al. Finger vein presentation attack detection using total variation decomposition
Qureshi et al. A bibliography of pixel-based blind image forgery detection techniques
Chakraborty et al. An overview of face liveness detection
CN103632132B (en) Face detection and recognition method based on skin color segmentation and template matching
CN102332084B (en) Identity identification method based on palm print and human face feature extraction
Sun et al. A face spoofing detection method based on domain adaptation and lossless size adaptation
Yeh et al. Face liveness detection based on perceptual image quality assessment features with multi-scale analysis
Wang et al. Hand vein recognition based on multi-scale LBP and wavelet
Barni et al. Iris deidentification with high visual realism for privacy protection on websites and social networks
Juneja Multiple feature descriptors based model for individual identification in group photos
Alshaikhli et al. Face-Fake-Net: The Deep Learning Method for Image Face Anti-Spoofing Detection: Paper ID 45
Lian Pedestrian detection using quaternion histograms of oriented gradients
CN105740838A (en) Recognition method in allusion to facial images with different dimensions
Huang et al. Multi-Teacher Single-Student Visual Transformer with Multi-Level Attention for Face Spoofing Detection.
Mohamed et al. Automated face recogntion system: Multi-input databases
Chinchu et al. A novel method for real time face spoof recognition for single and multiple user authentication
CN111914750A (en) Face living body detection method for removing highlight features and directional gradient histograms
Sino et al. Face Recognition of Low-Resolution Video Using Gabor Filter & Adaptive Histogram Equalization
Nourmohammadi-Khiarak et al. An ear anti-spoofing database with various attacks
Majeed et al. A novel method to enhance color spatial feature extraction using evolutionary time-frequency decomposition for presentation-attack detection
Mousa Pasandi Face, Age and Gender Recognition Using Local Descriptors
Zahran et al. High performance face recognition using PCA and ZM on fused LWIR and VISIBLE images on the wavelet domain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220729

Address after: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee after: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES

Patentee after: NANHAI RESEARCH STATION, INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES

Address before: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee before: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES