CN108960088A - Facial liveness feature detection and recognition method for specific environments - Google Patents

Facial liveness feature detection and recognition method for specific environments Download PDF

Info

Publication number
CN108960088A
CN108960088A CN201810637801.7A
Authority
CN
China
Prior art keywords
image
formula
facial
pixel
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810637801.7A
Other languages
Chinese (zh)
Inventor
李素梅
秦龙斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201810637801.7A priority Critical patent/CN108960088A/en
Publication of CN108960088A publication Critical patent/CN108960088A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Abstract

The invention belongs to the field of image processing and proposes a face identification system based on a specific application environment (FASSA), suited to settings with few legitimate users and high security requirements. The liveness detection part of the proposed system meets the application requirements of a small training set, short detection time and high detection accuracy. In the liveness detection stage, the method first applies feature extraction based on local binary patterns (LBP), histograms of oriented gradients (HOG), the gray-level co-occurrence matrix (GLCM), or GLCM combined with the Haar wavelet transform; it then invokes a detection module, trained with a support vector machine (SVM) for each legitimate user, to classify the probe face image. In each liveness detection module the support vectors are generated from the real face images and photo face images of a single user. The invention applies mainly to image processing scenarios.

Description

Facial liveness feature detection and recognition method for specific environments
Technical field
The invention belongs to the field of image processing and relates to improving face identification systems in specific environments, in particular workplaces with few legitimate users and high accuracy requirements, for which it can provide a quasi-"customized" face identification system based on the actual application environment.
Background technique
In recent years, biometric identification has gradually permeated many aspects of social life; identification based on faces, fingerprints, irises and the like is widely applied and strengthens the reliability of authentication systems [1][2]. Facial features are easy to acquire and require little interaction, so they are favored by many users [3]. However, attacks by illegal users presenting fake faces [4] hinder the further development of face authentication systems, and detecting such spoofing attacks efficiently and accurately has become a research hotspot in the field. Common spoofing attacks include face image (photo) attacks, video replay attacks and 3D model attacks [5]. The face photo used in an image attack is easier to obtain than the other two: the attacker only needs to present a photo of a legitimate user to the system to deceive it. In a video replay attack, the illegal user deceives the system with a captured video of the legitimate user's face; this attack is more convincing [6] but harder to mount than a photo attack. A facial 3D model, usually a molded plastic replica, can imitate the stereoscopic features of a legitimate user more vividly, but this spoofing mode is technically demanding and difficult to realize [7]. The present invention mainly studies face authentication systems under photo attack.
A face authentication system (Face Authentication System, FAS) generally comprises two modules: a liveness detection module and a face recognition module. A conventional FAS first performs face liveness detection and then face recognition [8]. Current facial liveness detection methods largely follow this architecture; in practice their reliability is not high, their computational complexity is large, they lack specificity toward individual legitimate users, and they cannot resist errors caused by the application environment. Certain high-security venues that use face-based identity authentication (access control, device authentication and unlocking) tolerate no liveness detection errors (e.g. prisons, armories, classified departments, or personally controlled devices such as tablets and mobile phones). Although a successful face recognition attack is a small-probability event, once it succeeds it can cause inestimable damage to the country, society and individuals. The present invention therefore designs a face authentication system based on a specific application environment (Face Authentication System Based on Special Application, FASSA) and provides a face liveness detection model for that environment. The results show that the liveness detection method based on this system achieves high accuracy with few required samples and low computational complexity; it is especially suitable for the facial identity authentication systems of special departments, is highly targeted at the application scenario, and is specific to each legitimate user.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention proposes a face identification system based on a specific application environment (the FASSA system), suited to settings with few legitimate users and high security requirements. The liveness detection part of the proposed system meets the application requirements of a small training set, short detection time and high detection accuracy. To this end, the technical solution adopted by the invention is a facial liveness feature detection and recognition method for specific environments: in the liveness detection stage, feature extraction based on local binary patterns (LBP), histograms of oriented gradients (HOG), the gray-level co-occurrence matrix (GLCM), or GLCM combined with the Haar wavelet transform is applied first; then a detection module trained with a support vector machine (SVM) for each legitimate user classifies the probe face image. In each liveness detection module the support vectors are generated from the real face images and photo face images of a single user.
Specifically, for local binary patterns (LBP), the LBP operator is given by formula (1):
LBP_{P,R} = Σ_{p=0}^{P-1} s(g_p - g_c)·2^p, where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise (1)
where P is the number of surrounding pixels, R is the neighborhood radius, g_c is the gray value of the center pixel and g_p that of the p-th neighbor; (P, R) is set to (8, 2) with a 3*3 window.
Specifically, for histograms of oriented gradients (HOG), take any pixel (x, y) in the face image; the gradient is computed by formulas (2)~(5):
Gx(x, y) = H(x+1, y) - H(x-1, y) (2)
Gy(x, y) = H(x, y+1) - H(x, y-1) (3)
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2) (4)
α(x, y) = arctan(Gy(x, y) / Gx(x, y)) (5)
where Gx(x, y) is the horizontal gradient at pixel (x, y), Gy(x, y) the vertical gradient, H(x, y) the pixel value, G(x, y) the gradient magnitude and α(x, y) the gradient direction. The cell size used is (8, 8), the gradient direction is quantized into 9 bins, and the number of adjacent histograms is 4.
Specifically, for the gray-level co-occurrence matrix (GLCM): let a face image f(x, y) have size M*N and gray-level count G. Take any pixel P(x1, y1) with gray value i and any other point P'(x2, y2) with gray value j; then the gray-level co-occurrence matrix P(i, j, d, θ) for a given spacing and angle is given by formula (6):
P(i, j, d, θ) = #{(x1, y1), (x2, y2) ∈ M*N | f(x1, y1) = i, f(x2, y2) = j} (6)
In formula (6), d is the distance between pixels P and P' and θ is the generation direction of the co-occurrence matrix. The statistical features of the co-occurrence matrix describe the texture of the image intuitively, namely energy, correlation, contrast, inverse difference moment and entropy.
Energy: the sum of squares of all elements of the co-occurrence matrix, formula (7):
ASM = Σ_i Σ_j P(i, j)^2 (7)
Correlation: the degree of similarity of the co-occurrence matrix elements; when the element values are nearly equal the correlation is large, otherwise it is small, formula (8):
COR = Σ_i Σ_j (i - μx)(j - μy) P(i, j) / (σx σy) (8)
where:
μx = Σ_i i Σ_j P(i, j), μy = Σ_j j Σ_i P(i, j), σx^2 = Σ_i (i - μx)^2 Σ_j P(i, j), σy^2 = Σ_j (j - μy)^2 Σ_i P(i, j)
Contrast: reflects the clarity of the image and the depth of the texture grooves; deep grooves give large contrast and a clear visual effect, while shallow grooves give small contrast and a blurred effect, formula (9):
CON = Σ_i Σ_j (i - j)^2 P(i, j) (9)
Inverse difference moment: measures the amount of local variation in the image texture; a large value indicates little variation between different regions, i.e. local uniformity, formula (10):
IDM = Σ_i Σ_j P(i, j) / (1 + (i - j)^2) (10)
Entropy: a measure of the information content of the image; it is largest when the elements of the co-occurrence matrix have maximal randomness, i.e. when all values are almost equal and the elements are dispersed. It indicates the non-uniformity or complexity of the image texture, formula (11):
ENT = -Σ_i Σ_j P(i, j) log P(i, j) (11)
To reduce the computational cost of the image's GLCM transform, gray-level compression is applied first, reducing the number of gray levels of the image.
Specifically, for the Haar wavelet transform: after wavelet decomposition, the LL band concentrates the low-frequency component of the face image, while the LH, HL and HH bands respectively retain the horizontal-edge, vertical-edge and diagonal-edge details; L denotes the low-frequency part of the signal and H the high-frequency part. A face image f(x, y) of (m*n) pixels undergoes a two-dimensional discrete wavelet transform (2D-DWT) decomposition, formulas (12)~(15), where h is a low-pass filter, g is a high-pass filter, p is the decomposition level, and the sub-bands obtained after decomposition are components of identical size.
The wavelet decomposition of the image uses the Haar wavelet basis; the mean and standard deviation of the first-level HH1 and second-level HH2 sub-bands are extracted as features and combined with the 10 GLCM features, giving 14 features in total for training the SVM classifier.
The features and beneficial effects of the present invention are:
Within the structure of existing face identification systems, the present invention, according to the application-environment demands of special departments, is the first to propose face liveness detection performed after face recognition in a face identification system based on a specific application environment (FASSA).
In the liveness detection part of the proposed FASSA system, different texture-based data preprocessing methods are used; among them, preprocessing based on LBP, HOG and GLCM+Haar meets the requirements of the specific application environment, greatly reducing computational and time complexity while achieving high test accuracy, a short single-test time, and a small number of training pictures.
The proposed face recognition scheme supports application environments with few personnel changes and high security requirements; it extends the general applicability of the original face recognition system (Face Recognition System) into a system customized for the specific application environment, improving the security of the whole system while extending the scope of face liveness detection.
Brief description of the drawings:
Fig. 1. Face authentication systems: (a) conventional face authentication system; (b) face authentication system based on a specific application environment.
Fig. 2. The liveness detection module of the FASSA system.
Fig. 3. The face liveness detection framework.
Fig. 4. The LBP window.
Fig. 5. Live faces (first row) and photo faces (second row) chosen herein from files 0002-0006 of the NUAA database.
Specific embodiment
The invention belongs to the field of image processing and relates to improving face identification systems in specific environments, especially workplaces with few legitimate users and high accuracy requirements, for which a quasi-"customized" face identification system based on the actual application environment can be provided.
According to the demands of the specific application environment, the present invention proposes a face recognition scheme based on that environment (FASSA), as shown in Fig. 1(b). In the FASSA system face recognition is performed first; after the legitimate user's identity is determined, liveness detection is carried out separately for each legitimate user. Compared with the face authentication system (FAS) in Fig. 1(a), passing the probe face image through the face recognition module of the FASSA system first reduces the interference of non-face images and illegal users' face images with the next module.
For the proposed face authentication system based on a specific application environment (FASSA), the liveness detection part is studied emphatically, see Fig. 1(b). In the liveness detection stage, one of the four feature extraction methods LBP, HOG, GLCM or GLCM+Haar is applied first; then the detection module generated by SVM training for each legitimate user classifies the probe face image. In each liveness detection module the support vectors are generated from the real face images and photo face images of a single user. The liveness detection module of the FASSA system thus changes from the original many-to-many into one-to-one, achieving "dedication" and improving liveness detection accuracy. In a real application environment, a quasi-"customized" face liveness detection system can be provided, improving the ability of the face authentication system to cope with complex application environments.
The liveness detection module of the FASSA system comprises two parts: image preprocessing and automatic SVM classification. In the preprocessing stage, one of four methods (local binary patterns, histograms of oriented gradients, the gray-level co-occurrence matrix, and the Haar wavelet transform) extracts texture, gradient, edge and contour features, yielding representative features; in the classification stage a detection model is trained for each legitimate individual. To classify live and non-live faces more effectively, the present invention designs four liveness detection methods through the four different feature extraction methods, as shown in Fig. 3. The four models are briefly introduced below:
1 Local binary patterns (LBP)
Take a certain pixel as the center point and use its gray value g_c as the threshold: an adjacent pixel whose gray value g_p is less than g_c is marked 0, and one whose g_p is greater than or equal to g_c is marked 1. The binary number formed by the marks of the surrounding pixels is converted into a decimal number, giving the LBP value of the center pixel [9].
The LBP operator is given by formula (1):
LBP_{P,R} = Σ_{p=0}^{P-1} s(g_p - g_c)·2^p, where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise (1)
where P is the number of surrounding pixels and R is the neighborhood radius. T. Ojala et al. extended the LBP window to arbitrary neighborhoods; common (P, R) values are (8, 1), (16, 2) and (24, 3), and the classical LBP operator uses a 3*3 window.
The local binary pattern has small computational complexity and is invariant to rotation and to monotonic gray-scale changes [10]. During face acquisition in the FASSA system the subject therefore need not deliberately remain stationary, and facial features are acquired more conveniently and rapidly. Here (P, R) is set to (8, 2) with a 3*3 window, as shown in Fig. 4.
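As an illustration, formula (1) can be sketched in Python for a single 3*3 window (the classical P=8, R=1 case). The neighbour ordering below is one common convention, assumed here rather than specified by the patent:

```python
def lbp_value(window):
    """window: 3*3 list of grey values; returns the LBP code of the centre pixel."""
    gc = window[1][1]
    # clockwise neighbour order starting at the top-left corner; one common
    # convention, the patent does not fix the ordering
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for p, (r, c) in enumerate(coords):
        if window[r][c] >= gc:   # s(g_p - g_c) = 1 when g_p >= g_c
            code += 1 << p       # weight by 2^p, formula (1)
    return code

print(lbp_value([[6, 5, 2],
                 [7, 6, 1],
                 [9, 8, 7]]))    # -> 241
```

In a full detector the codes of all pixels would be accumulated into a histogram that serves as the texture feature vector.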
2 Histograms of oriented gradients (HOG)
The basic idea of HOG is that the appearance and shape of a local object can be well described by the distribution of gradient intensity over gradient directions [11]. The image is divided into cells of a certain size; in each cell a histogram of the gradients (or edge directions, since gradients are concentrated at edges) of its pixels is accumulated, and the histograms are combined to form the descriptor. The local histograms are contrast-normalized according to the density computed over blocks of N adjacent histograms.
Take any pixel (x, y) in the face image; the gradient is computed by formulas (2)~(5):
Gx(x, y) = H(x+1, y) - H(x-1, y) (2)
Gy(x, y) = H(x, y+1) - H(x, y-1) (3)
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2) (4)
α(x, y) = arctan(Gy(x, y) / Gx(x, y)) (5)
where Gx(x, y) is the horizontal gradient at pixel (x, y), Gy(x, y) the vertical gradient, H(x, y) the pixel value, G(x, y) the gradient magnitude and α(x, y) the gradient direction. The cell size used here is (8, 8), the gradient direction is quantized into 9 bins, and the number of adjacent histograms is 4.
HOG is invariant to geometric and photometric transformations, so slight movements of the subject do not affect the detection result [12]. Exploiting this property, the liveness detection module of the FASSA system does not require the subject's head to be held deliberately upright during face acquisition, shortening the human-computer interaction time and improving the efficiency of the image acquisition step.
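A minimal sketch of formulas (2)~(5), assuming unsigned gradient orientation folded into [0°, 180°) and accumulating one 9-bin cell histogram; the function names are illustrative:

```python
import math

def pixel_gradient(img, x, y):
    """Gradient per formulas (2)~(5) at an interior pixel (x, y) of img,
    where img is a list of rows indexed as img[y][x]."""
    gx = img[y][x + 1] - img[y][x - 1]            # formula (2)
    gy = img[y + 1][x] - img[y - 1][x]            # formula (3)
    mag = math.hypot(gx, gy)                      # formula (4)
    ang = math.degrees(math.atan2(gy, gx)) % 180  # formula (5), unsigned
    return mag, ang

def hog_cell_histogram(img, bins=9):
    """Magnitude-weighted 9-bin orientation histogram over the interior
    pixels of one cell."""
    hist = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            mag, ang = pixel_gradient(img, x, y)
            hist[int(ang // (180 / bins)) % bins] += mag
    return hist
```

A complete HOG descriptor would additionally normalize these histograms over blocks of adjacent cells, as described above.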
3 Gray-level co-occurrence matrix (GLCM)
In an image, the joint probability density of pixel pairs at two positions generates a new matrix, called the co-occurrence matrix, which reflects the positional distribution between pixels of the same or similar brightness. The GLCM method computes the co-occurrence matrix of a gray image and derives from it a set of feature values, each representing a certain texture property of the image.
Let a face image f(x, y) have size M*N and gray-level count G. Take any pixel P(x1, y1) with gray value i and any other point P'(x2, y2) with gray value j; then the gray-level co-occurrence matrix P(i, j, d, θ) for a given spacing and angle is given by formula (6):
P(i, j, d, θ) = #{(x1, y1), (x2, y2) ∈ M*N | f(x1, y1) = i, f(x2, y2) = j} (6)
In formula (6), d is the distance between pixels P and P' and θ is the generation direction of the co-occurrence matrix. The statistical features of the co-occurrence matrix describe the texture of the image intuitively, namely energy, correlation, contrast, inverse difference moment and entropy.
Energy: the sum of squares of all elements of the co-occurrence matrix, formula (7):
ASM = Σ_i Σ_j P(i, j)^2 (7)
Correlation: the degree of similarity of the co-occurrence matrix elements; when the element values are nearly equal the correlation is large, otherwise it is small, formula (8):
COR = Σ_i Σ_j (i - μx)(j - μy) P(i, j) / (σx σy) (8)
where:
μx = Σ_i i Σ_j P(i, j), μy = Σ_j j Σ_i P(i, j), σx^2 = Σ_i (i - μx)^2 Σ_j P(i, j), σy^2 = Σ_j (j - μy)^2 Σ_i P(i, j)
Contrast: reflects the clarity of the image and the depth of the texture grooves. Deep grooves give large contrast and a clear visual effect; shallow grooves give small contrast and a blurred effect, formula (9):
CON = Σ_i Σ_j (i - j)^2 P(i, j) (9)
Inverse difference moment [16]: measures the amount of local variation in the image texture. A large value indicates little variation between different regions, i.e. local uniformity, formula (10):
IDM = Σ_i Σ_j P(i, j) / (1 + (i - j)^2) (10)
Entropy: a measure of the information content of the image; it is largest when the elements of the co-occurrence matrix have maximal randomness, i.e. when all values are almost equal and the elements are dispersed. It indicates the non-uniformity or complexity of the image texture, formula (11):
ENT = -Σ_i Σ_j P(i, j) log P(i, j) (11)
The GLCM has good discriminating ability but relatively high computational complexity. The face image input to the liveness detection module of the FASSA system is therefore first gray-level compressed, reducing the number of gray levels and hence the cost of the GLCM transform [11].
In the present invention, d = 1 and the original image is compressed to 16 gray levels; on the co-occurrence matrices at 0°, 45°, 90° and 135°, five texture feature values (energy, correlation, contrast, homogeneity and entropy) are computed. The mean and variance of each feature over the 4 angles form the feature vector, 10 features in total.
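A minimal sketch of formula (6) and of the energy, contrast, inverse difference moment and entropy features of formulas (7) and (9)~(11), assuming the image has already been gray-level compressed so its values lie in [0, levels); correlation is omitted for brevity:

```python
import math

def glcm(img, dx=1, dy=0, levels=16):
    """Normalised co-occurrence matrix P(i, j) for the offset (dx, dy),
    formula (6); img values must already lie in [0, levels)."""
    P = [[0.0] * levels for _ in range(levels)]
    n = 0
    for y in range(len(img)):
        for x in range(len(img[0])):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < len(img[0]) and 0 <= y2 < len(img):
                P[img[y][x]][img[y2][x2]] += 1
                n += 1
    return [[v / n for v in row] for row in P]

def glcm_features(P):
    """Energy, contrast, inverse difference moment and entropy,
    formulas (7) and (9)~(11)."""
    L = len(P)
    energy = sum(v * v for row in P for v in row)
    contrast = sum((i - j) ** 2 * P[i][j] for i in range(L) for j in range(L))
    idm = sum(P[i][j] / (1 + (i - j) ** 2) for i in range(L) for j in range(L))
    entropy = -sum(v * math.log(v) for row in P for v in row if v > 0)
    return energy, contrast, idm, entropy
```

Repeating this over the four offsets corresponding to 0°, 45°, 90° and 135° and taking per-feature means and variances would yield the 10-dimensional vector described above.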
4 Haar wavelet transform
Compared with the Fourier and Gabor transforms, wavelet analysis has good localization in both the time and frequency domains [14]. After wavelet decomposition, the LL band concentrates the low-frequency component of the face image, while the LH, HL and HH bands respectively retain its horizontal-edge, vertical-edge and diagonal-edge details [13]. A face image f(x, y) of (m*n) pixels undergoes a two-dimensional discrete wavelet transform (2D-DWT) decomposition, formulas (12)~(15), where h is a low-pass filter, g is a high-pass filter, p is the decomposition level, and the sub-bands obtained are components of identical size.
The Haar wavelet transform is sensitive to image edges and line segments, overcoming the disconnect between statistical and structural methods and the human visual mechanism. Because a recaptured face image exhibits specular reflection, its edges are blurrier; after the Haar transform, the diagonal-edge detail (HH) features of real and recaptured face images therefore differ considerably. Here the wavelet decomposition uses the Haar wavelet basis, and the mean and standard deviation of the first-level HH1 and second-level HH2 sub-bands are extracted as features and combined with the 10 GLCM features, giving 14 features in total for training the SVM classifier.
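A one-level 2D Haar decomposition can be sketched as follows; the averaging normalisation and the LH/HL naming are assumptions (conventions vary), and mean_std mirrors the HH-statistics features described above:

```python
import math

def haar_dwt2(img):
    """One level of the 2D Haar transform (averaging form): returns the
    LL, LH, HL, HH sub-bands of an image with even width and height.
    Here LH holds left-right differences and HL top-bottom differences."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]
    LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]
    HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 4.0  # low-pass in both directions
            LH[i][j] = (a - b + c - d) / 4.0  # horizontal difference
            HL[i][j] = (a + b - c - d) / 4.0  # vertical difference
            HH[i][j] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

def mean_std(band):
    """Mean and population standard deviation of a sub-band, as used for
    the HH1/HH2 features."""
    vals = [v for row in band for v in row]
    m = sum(vals) / len(vals)
    return m, math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
```

Applying haar_dwt2 again to LL would produce the second-level sub-bands, including HH2.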
Table 1 gives the test accuracy of the liveness detection module of the present invention; the accuracy essentially reaches 100%, meeting the specific-environment liveness detection requirements of the FASSA system designed herein.
Table 1. Test accuracy and texture-feature dimensionality of the FASSA system designed herein
Table 2 compares the liveness detection accuracy of the present invention with that of documents [10] and [13]. As can be seen from Table 2, the GLCM-based method herein improves the detection result by nearly 4.8%; with the LBP and GLCM+Haar feature extraction methods the detection result reaches 100%, improvements of 6.13% and 3.03% respectively. According to the experimental results, the liveness detection module of the FASSA system achieves good detection results.
Table 2. Detection accuracy of related literature based on different preprocessing methods
To further verify the quality of the FASSA system's liveness detection, Table 3 compares the FRR, FAR and HTER values of documents [10], [13], [15] and the method designed herein. FRR is the percentage of live face samples judged as photo face samples, and FAR is the percentage of photo face samples judged as live face samples; HTER [15] is computed by formula (16):
HTER = (FAR + FRR) / 2 (16)
Table 3 reports one randomly selected experimental run for individual 0004. Since training and test sets are chosen by K-fold cross-validation, different splits lead to slight variations in the detection results.
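The FRR/FAR definitions and formula (16) can be sketched with a hypothetical helper (not from the patent) that takes per-sample detector decisions:

```python
def error_rates(live_labels, photo_labels):
    """FRR, FAR and HTER (formula (16)) from per-sample decisions:
    live_labels  - detector outputs (True = judged live) on real-face samples,
    photo_labels - detector outputs on photo-face samples."""
    frr = sum(1 for v in live_labels if not v) / len(live_labels)   # live -> photo
    far = sum(1 for v in photo_labels if v) / len(photo_labels)     # photo -> live
    return frr, far, (frr + far) / 2                                # HTER
```

For example, one miss in four live samples and one false accept in four photo samples give FRR = FAR = HTER = 25%.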
Table 3. Performance parameters of related literature using different methods
With the LBP, HOG and GLCM+Haar data preprocessing methods, the FRR, FAR and HTER of the FASSA liveness detection module are all 0%. Compared with the FAS liveness detection modules used in documents [10] and [15], the present invention is strongly user-specific and its number of detection errors is significantly reduced.
Tables 4 and 5 illustrate the reliability and validity of the invention, comparing the number of image samples used and the single-test time of the present invention with documents [10] and [13]. As Table 4 shows, for a given legitimate user the liveness detection module of the FASSA system uses at least one third fewer training images than a conventional FAS. Compared with document [17], since the present method is user-specific, even the simple LBP feature extraction method reaches 100% detection accuracy. This greatly reduces the training complexity of each individual classification model; moreover, for different legitimate users the liveness detection modules can be trained in parallel on a multi-core CPU, shortening the system's training time and reducing computational complexity, as shown in Table 5. In terms of the single-test time, the four liveness detection methods herein all show large reductions. The single-test time Tt is computed by formula (17):
Tt = Ft + Ct (17)
where Ft is the single feature extraction time and Ct is the classification time of one pass of the SVM classifier. Although the present invention and document [13] both use the GLCM+Haar feature extraction method, the liveness detection computation time of the present invention is an order of magnitude lower, which has high practical value.
Table 4. Number of NUAA-database samples used for training and testing the FASSA liveness detection module
Table 5. Single-test time (s) of liveness detection in related literature
The present invention is tested on the NUAA database of Nanjing University of Aeronautics and Astronautics. The positive (live) face samples were acquired with a computer camera and are divided into 15 folders; the negative samples were obtained by recapturing the real face samples under different illumination intensities with rotations, side views, etc., also divided into 15 folders. After face detection, eye localization and cropping, 64*64 gray face photos are obtained. To test the detection performance of the FASSA system, the experiments do not follow the NUAA database description document; instead, the face photos of folders 0002~0006 are selected from the live and photo face sample sets. Training and test sets are selected by cross-validation (K-fold).
The experimental environment is an Intel Core i5 750 CPU, 8 GB DDR3 memory and an NVIDIA GT 240 graphics card. The main implementation steps in engineering applications are as follows:
One, in the installation environment of the face authentication device, acquire face pictures of the legitimate users under different natural lighting conditions (against the same background), and print them;
Two, recapture the printed face pictures under different conditions (rotation, side view, different illumination);
Three, convert the directly acquired face pictures and the recaptured photos to gray images, and preprocess the gray images with one of LBP, HOG, GLCM or GLCM+Haar;
Four, in the relevant software development tools, generate a corresponding Model for each legitimate user with Lib_SVM;
Five, port each legitimate user's Model to the face authentication device based on the FASSA system.
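The per-user dispatch of steps three to five can be sketched as follows; a nearest-centroid rule stands in for the Lib_SVM classifier of step four, and the class name is illustrative:

```python
class PerUserLivenessDetector:
    """Sketch of the FASSA one-detector-per-user scheme: face recognition
    selects the user id, then that user's dedicated model decides live vs
    photo.  A nearest-centroid rule stands in for the Lib_SVM classifier;
    feature vectors would come from LBP/HOG/GLCM(+Haar) preprocessing."""

    def __init__(self):
        self.models = {}  # user id -> (live centroid, photo centroid)

    @staticmethod
    def _centroid(feats):
        return [sum(col) / len(feats) for col in zip(*feats)]

    def train(self, user, live_feats, photo_feats):
        # one model per legitimate user: "one-to-one" instead of many-to-many
        self.models[user] = (self._centroid(live_feats),
                             self._centroid(photo_feats))

    def is_live(self, user, feat):
        live_c, photo_c = self.models[user]  # dispatch to this user's model
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return dist(feat, live_c) <= dist(feat, photo_c)
```

Because each user's model is independent, the training of step four can run in parallel across users, matching the multi-core training described above.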
Bibliography
[1] Sun Lin. Research on liveness detection technology in face recognition [D]. Zhejiang University, 2010.
[2] Ruan Jinxin. Research on key technologies of multi-pose face detection and expression recognition [D]. South China University of Technology, 2010.
[3] Pan Hailing. Liveness detection technology in face authentication systems [J]. Information, 2015, (10): 226.
[4] Yang Jianda. Research on face liveness detection methods for face recognition [D]. Beijing University of Posts and Telecommunications, 2014.
[5] Chingovska I, Yang J, Lei Z, et al. The 2nd competition on counter measures to 2D face spoofing attacks [C]// International Conference on Biometrics. IEEE, 2013: 1-6.
[6] Ren Yuqiang. Research on lip-reading recognition algorithms in high-security face recognition identity authentication systems [D]. Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, 2016.
[7] Xie Zhe. Research on face liveness detection against secondary recapture [D]. Ningbo University, 2015.
[8] Li Bing. Research on liveness detection technology in face authentication systems [D]. Tianjin University, 2016.
[9] Määttä J, Hadid A, Pietikäinen M. Face spoofing detection from single images using texture and local shape analysis [J]. IET Biometrics, 2012, 1(1): 3-10.
[10] Kose N, Dugelay J L. Classification of captured and recaptured images to detect photograph spoofing [J]. IEEE, 2012, 36(1): 1027-1032.
[11] Liu Li, Kuang Gangyao. Overview of image texture feature extraction methods [J]. Journal of Image and Graphics, 2009, (04): 622-635.
[12] Xu Chao. Bus passenger flow calculation algorithm based on HOG and SVM. Key Laboratory of Intelligent Computing & Signal Processing, Ministry of Education, Anhui University, 2015.
[13] Cao Yu, Tu Ling, Wu Lifang. Face liveness detection algorithm in identity authentication based on the gray-level co-occurrence matrix and wavelet analysis [J]. Journal of Signal Processing, 2014, (07): 830-835.
[14] Liu Wei. Fast Algorithm for Fingerprint Identification Based on Haar-wavelet Transform. University of Science and Technology of China, 2007.
[15] Alotaibi A, Mahmood A. Deep face liveness detection based on nonlinear diffusion using convolution neural network [J]. Signal Image & Video Processing, 2016, 11(4): 1-8.
[16] Gao Sen. Research on automatic generation and recognition systems for anti-counterfeiting seals [D]. Hefei University of Technology, 2012.
[17] Wu L, Xu Y, Xu X, et al. A Face Liveness Detection Scheme to Combining Static and Dynamic Features [M]// Biometric Recognition. Springer International Publishing, 2016.

Claims (6)

1. A facial living-body feature detection and recognition method for a specific environment, characterized in that, in the live-body detection stage, feature extraction methods based on local binary patterns LBP, histogram of oriented gradients HOG, gray-level co-occurrence matrix GLCM, and gray-level co-occurrence matrix GLCM + Haar wavelet transform are used first; then the detection module generated for each legitimate user through support vector machine SVM training is invoked to classify the facial image under test; in each live-body detection module, the support vectors are composed of the real face images and photo face images of a single user.
2. The facial living-body feature detection and recognition method for a specific environment as claimed in claim 1, characterized in that, specifically, the local binary pattern LBP operator is given by formula (1):
LBP(P, R) = Σ_{p=0}^{P-1} s(g_p - g_c) * 2^p, with s(x) = 1 if x >= 0 and s(x) = 0 otherwise (1)
where g_c is the gray value of the center pixel, g_p are the gray values of the surrounding pixels, P denotes the number of surrounding pixels, and R denotes the neighborhood radius; the value of (P, R) is (8, 2), and the window size is 3*3.
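As a worked illustration of formula (1), the LBP code of one 3*3 neighbourhood can be computed directly; the pixel values and the clockwise neighbour ordering below are assumptions for the example, not taken from the patent.

```python
import numpy as np

# Hypothetical 3x3 neighbourhood; centre gray value g_c = 90.
patch = np.array([[80, 95, 100],
                  [70, 90, 120],
                  [60, 50, 110]])
gc = patch[1, 1]
# Neighbours taken clockwise from the top-left corner (one common convention).
neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
              patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
s = [1 if gp >= gc else 0 for gp in neighbours]   # s(g_p - g_c)
lbp = sum(bit << p for p, bit in enumerate(s))    # sum of s * 2^p
print(s, lbp)                                     # [0,1,1,1,1,0,0,0] -> 30
```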
3. The facial living-body feature detection and recognition method for a specific environment as claimed in claim 1, characterized in that, specifically, for the histogram of oriented gradients HOG: take any pixel (x, y) in the facial image; the gradient calculation formulas are formulas (2)~(5)
Gx(x, y) = H(x+1, y) - H(x-1, y) (2)
Gy(x, y) = H(x, y+1) - H(x, y-1) (3)
G(x, y) = sqrt(Gx(x, y)^2 + Gy(x, y)^2) (4)
α(x, y) = arctan(Gy(x, y) / Gx(x, y)) (5)
where Gx(x, y) is the horizontal gradient at pixel (x, y), Gy(x, y) is the vertical gradient at pixel (x, y), H(x, y) is the pixel value at pixel (x, y), G(x, y) is the gradient magnitude at pixel (x, y), and α(x, y) is the gradient direction at pixel (x, y); the cell unit size used is (8, 8), the gradient direction is quantized into 9 bins, and the number of adjacent histograms is 4.
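A minimal sketch of formulas (2)~(5); the horizontal-ramp test image is an assumption for illustration, and NumPy's [row, column] indexing is used, so x runs along columns.

```python
import numpy as np

def gradients(H):
    """Central-difference gradients of formulas (2)-(5), interior pixels only."""
    Gx = H[1:-1, 2:] - H[1:-1, :-2]   # H(x+1, y) - H(x-1, y), horizontal
    Gy = H[2:, 1:-1] - H[:-2, 1:-1]   # H(x, y+1) - H(x, y-1), vertical
    G = np.hypot(Gx, Gy)              # gradient magnitude, formula (4)
    alpha = np.arctan2(Gy, Gx)        # gradient direction, formula (5)
    return Gx, Gy, G, alpha

# A horizontal ramp: intensity increases by 2 per column.
H = np.tile(np.arange(0, 10, 2, dtype=float), (5, 1))
Gx, Gy, G, alpha = gradients(H)
# Quantise the direction into 9 unsigned orientation bins of 20 degrees each.
bins = ((np.degrees(alpha) % 180) // 20).astype(int)
print(Gx[0, 0], Gy[0, 0], G[0, 0], bins[0, 0])
```

For the ramp, every interior pixel has Gx = 4, Gy = 0, magnitude 4, and falls into orientation bin 0, as expected for a purely horizontal gradient.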
4. The facial living-body feature detection and recognition method for a specific environment as claimed in claim 1, characterized in that, specifically, for the gray-level co-occurrence matrix GLCM: for a facial image f(x, y) of size M*N with G gray levels, take any pixel P(x1, y1) in the image with gray level i, and arbitrarily take another point P′(x2, y2) with gray level j; then the gray-level co-occurrence matrix P(i, j, d, θ) is computed for various distances and angles, its expression being formula (6)
P(i, j, d, θ) = #{ ((x1, y1), (x2, y2)) ∈ M*N | f(x1, y1) = i, f(x2, y2) = j } (6)
In formula (6), d is the distance between pixels P and P′, and θ is the generation direction of the gray-level co-occurrence matrix. The statistical features of the co-occurrence matrix can intuitively describe the texture information of the image, namely energy, correlation, contrast, inverse difference moment, and entropy.
Energy: the sum of squares of all elements of the gray-level co-occurrence matrix, as in formula (7):
ASM = Σ_i Σ_j P(i, j)^2 (7)
Correlation: the degree of similarity of the gray-level co-occurrence matrix elements; when the element values are equal, the correlation value is large, otherwise the correlation value is small, as in formula (8)
COR = Σ_i Σ_j (i - μx)(j - μy) P(i, j) / (σx σy) (8)
wherein: μx = Σ_i i Σ_j P(i, j), μy = Σ_j j Σ_i P(i, j), σx^2 = Σ_i (i - μx)^2 Σ_j P(i, j), σy^2 = Σ_j (j - μy)^2 Σ_i P(i, j)
Contrast: reflects the clarity of the image and the depth of the texture grooves; deep texture grooves give a large contrast and a clear visual effect, whereas shallow grooves give a small contrast and a blurred effect, as in formula (9)
CON = Σ_i Σ_j (i - j)^2 P(i, j) (9)
Inverse difference moment: measures the amount of local variation of the image texture; a large value indicates little change between different regions of the image texture and local uniformity, as in formula (10)
IDM = Σ_i Σ_j P(i, j) / (1 + (i - j)^2) (10)
Entropy: a measure of the information content of the image; entropy is largest when the elements of the co-occurrence matrix have maximum randomness, that is, when all values in the spatial co-occurrence matrix are almost equal and the elements are dispersed; it describes the non-uniformity or complexity of the texture in the image, as in formula (11)
ENT = -Σ_i Σ_j P(i, j) log P(i, j) (11)
To reduce the computation of the GLCM transform of the image, gray-level compression is first applied to the image to reduce its number of gray levels.
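The matrix of formula (6) and the statistics of formulas (7)~(11) can be sketched as follows; the tiny 4-level test image and the horizontal offset (d = 1, θ = 0°) are assumptions chosen for illustration.

```python
import numpy as np

def glcm(img, d=1, theta=(0, 1), levels=4):
    """Normalised co-occurrence matrix P(i, j) for offset d*(dy, dx), per formula (6)."""
    dy, dx = d * theta[0], d * theta[1]
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()

def glcm_stats(P):
    i, j = np.indices(P.shape)
    energy = (P ** 2).sum()                               # formula (7)
    mu_x, mu_y = (i * P).sum(), (j * P).sum()
    sig_x = np.sqrt(((i - mu_x) ** 2 * P).sum())
    sig_y = np.sqrt(((j - mu_y) ** 2 * P).sum())
    corr = (((i - mu_x) * (j - mu_y) * P).sum()
            / (sig_x * sig_y))                            # formula (8)
    contrast = ((i - j) ** 2 * P).sum()                   # formula (9)
    idm = (P / (1 + (i - j) ** 2)).sum()                  # formula (10)
    ent = -(P[P > 0] * np.log2(P[P > 0])).sum()           # formula (11)
    return energy, corr, contrast, idm, ent

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img)  # horizontal neighbours: d = 1, theta = 0 degrees
energy, corr, contrast, idm, ent = glcm_stats(P)
print(energy, contrast, idm)
```

Note that this sketch counts pairs in one direction only; symmetric variants also add the transposed counts, and the gray-level compression mentioned above would shrink `levels` before building P.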
5. The facial living-body feature detection and recognition method for a specific environment as claimed in claim 1, characterized in that, specifically, for the Haar wavelet transform: after wavelet decomposition, the LL direction of the image concentrates the low-frequency components of the facial image, while the three directions LH, HL, and HH retain the horizontal edge details, vertical edge details, and diagonal edge details of the image respectively; L denotes the low-frequency part of the signal and H denotes the high-frequency part. A two-dimensional discrete wavelet transform (2D-DWT) decomposition is applied to the facial image f of size (m*n) pixels, as in formulas (12)~(15)
f_LL^p(x, y) = Σ_k Σ_l h(k - 2x) h(l - 2y) f^{p-1}(k, l) (12)
f_LH^p(x, y) = Σ_k Σ_l h(k - 2x) g(l - 2y) f^{p-1}(k, l) (13)
f_HL^p(x, y) = Σ_k Σ_l g(k - 2x) h(l - 2y) f^{p-1}(k, l) (14)
f_HH^p(x, y) = Σ_k Σ_l g(k - 2x) g(l - 2y) f^{p-1}(k, l) (15)
where h is the low-pass filter, g is the high-pass filter, p is the level of the wavelet decomposition, and f_LL^p, f_LH^p, f_HL^p, f_HH^p are the components of identical size obtained after decomposition.
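A one-level sketch of the separable Haar decomposition of formulas (12)~(15), using the Haar filters h = [1/2, 1/2] and g = [1/2, -1/2]; the normalisation and the LH/HL naming convention are assumptions, since conventions vary between libraries.

```python
import numpy as np

def haar_dwt2(f):
    """One level of the separable 2-D Haar DWT (formulas (12)-(15))."""
    a, b = f[:, 0::2], f[:, 1::2]          # adjacent column pairs
    lo, hi = (a + b) / 2, (a - b) / 2      # filter and downsample along x
    c, d = lo[0::2, :], lo[1::2, :]
    e, g_ = hi[0::2, :], hi[1::2, :]
    LL, LH = (c + d) / 2, (c - d) / 2      # then filter along y
    HL, HH = (e + g_) / 2, (e - g_) / 2
    return LL, LH, HL, HH

f = np.full((8, 8), 100.0)                 # flat image: all detail bands vanish
LL, LH, HL, HH = haar_dwt2(f)
print(LL.shape, float(LL[0, 0]), float(HH[0, 0]))
```

On a constant image the LL band reproduces the constant and the three detail bands are identically zero, which is a quick sanity check that the filters separate low and high frequencies as claimed.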
6. The facial living-body feature detection and recognition method for a specific environment as claimed in claim 1, characterized in that the Haar wavelet basis is used to perform the wavelet decomposition of the image; the mean and standard deviation of the first decomposition HH1 and the second decomposition HH2 are extracted as feature vectors, which together with the 10 GLCM feature vectors give a total of 14 feature vectors for training the SVM classifier.
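The 14-dimensional feature vector of claim 6 (mean and standard deviation of HH1 and HH2, plus 10 GLCM features) can be assembled as in this sketch; the random test image and the zero placeholder standing in for the 10 GLCM features are assumptions for illustration.

```python
import numpy as np

def haar_hh(f):
    """Return the LL band (to recurse on) and HH band of one Haar DWT level."""
    a, b = f[:, 0::2], f[:, 1::2]
    lo, hi = (a + b) / 2, (a - b) / 2
    LL = (lo[0::2, :] + lo[1::2, :]) / 2
    HH = (hi[0::2, :] - hi[1::2, :]) / 2
    return LL, HH

rng = np.random.default_rng(1)
img = rng.random((64, 64))                 # stand-in for a grayscale face image
LL1, HH1 = haar_hh(img)                    # first decomposition  -> HH1
_, HH2 = haar_hh(LL1)                      # second decomposition -> HH2
wavelet_feats = [HH1.mean(), HH1.std(), HH2.mean(), HH2.std()]
glcm_feats = np.zeros(10)                  # placeholder for the 10 GLCM features
feature_vector = np.concatenate([wavelet_feats, glcm_feats])
print(feature_vector.shape)                # the 14-dim vector fed to the SVM
```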
CN201810637801.7A 2018-06-20 2018-06-20 The detection of facial living body characteristics, the recognition methods of specific environment Pending CN108960088A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810637801.7A CN108960088A (en) 2018-06-20 2018-06-20 The detection of facial living body characteristics, the recognition methods of specific environment

Publications (1)

Publication Number Publication Date
CN108960088A true CN108960088A (en) 2018-12-07

Family

ID=64490601


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829434A (en) * 2019-01-31 2019-05-31 杭州创匠信息科技有限公司 Method for anti-counterfeit and device based on living body texture
CN110781913A (en) * 2019-09-11 2020-02-11 西安电子科技大学 Zipper cloth belt defect detection method
CN110837802A (en) * 2019-11-06 2020-02-25 齐鲁工业大学 Facial image feature extraction method based on gray level co-occurrence matrix
CN111259792A (en) * 2020-01-15 2020-06-09 暨南大学 Face living body detection method based on DWT-LBP-DCT characteristics
CN111429304A (en) * 2020-02-28 2020-07-17 鄂尔多斯市斯创网络科技有限责任公司 Food safety supervision platform
CN111582197A (en) * 2020-05-07 2020-08-25 贵州省邮电规划设计院有限公司 Living body based on near infrared and 3D camera shooting technology and face recognition system
CN111914750A (en) * 2020-07-31 2020-11-10 天津大学 Face living body detection method for removing highlight features and directional gradient histograms
CN113689661A (en) * 2020-05-19 2021-11-23 深圳市中兴系统集成技术有限公司 Hooking child behavior early warning system based on video analysis


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101159016A (en) * 2007-11-26 2008-04-09 清华大学 Living body detecting method and system based on human face physiologic moving
US20090135188A1 (en) * 2007-11-26 2009-05-28 Tsinghua University Method and system of live detection based on physiological motion on human face
CN103605958A (en) * 2013-11-12 2014-02-26 北京工业大学 Living body human face detection method based on gray scale symbiosis matrixes and wavelet analysis
CN103886301A (en) * 2014-03-28 2014-06-25 中国科学院自动化研究所 Human face living detection method
CN105046224A (en) * 2015-07-16 2015-11-11 东华大学 Block self-adaptive weighted histogram of orientation gradient feature based face recognition method
CN106599870A (en) * 2016-12-22 2017-04-26 山东大学 Face recognition method based on adaptive weighting and local characteristic fusion
CN107392187A (en) * 2017-08-30 2017-11-24 西安建筑科技大学 A kind of human face in-vivo detection method based on gradient orientation histogram


Similar Documents

Publication Publication Date Title
CN108960088A (en) The detection of facial living body characteristics, the recognition methods of specific environment
CN106778586B (en) Off-line handwritten signature identification method and system
Debiasi et al. PRNU-based detection of morphed face images
Chen et al. Segmentation of fingerprint images using linear classifier
Chakraborty et al. An overview of face liveness detection
Tome et al. The 1st competition on counter measures to finger vein spoofing attacks
Ghiani et al. Fingerprint liveness detection by local phase quantization
Qiu et al. Finger vein presentation attack detection using total variation decomposition
Chugh et al. Fingerprint spoof detection using minutiae-based local patches
CN103605958A (en) Living body human face detection method based on gray scale symbiosis matrixes and wavelet analysis
Sequeira et al. Iris liveness detection methods in mobile applications
Zhang et al. Face anti-spoofing detection based on DWT-LBP-DCT features
Bhilare et al. A study on vulnerability and presentation attack detection in palmprint verification system
Yuan et al. Fingerprint liveness detection using histogram of oriented gradient based texture feature
Marasco et al. Fingerphoto presentation attack detection: Generalization in smartphones
CN110222660B (en) Signature authentication method and system based on dynamic and static feature fusion
CN115797970B (en) Dense pedestrian target detection method and system based on YOLOv5 model
Qiu et al. Finger vein presentation attack detection using convolutional neural networks
CN106845500A (en) A kind of human face light invariant feature extraction method based on Sobel operators
Kumar Signature verification using neural network
Zuo et al. A model based, anatomy based method for synthesizing iris images
Zhen-Yan Chinese character recognition method based on image processing and hidden markov model
Gupta et al. Energy deviation measure: a technique for digital image forensics
Hajare et al. Face Anti-Spoofing Techniques and Challenges: A short survey
Majidpour et al. Unreadable offline handwriting signature verification based on generative adversarial network using lightweight deep learning architectures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181207