CN103942540A - False fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification - Google Patents

False fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification

Info

Publication number
CN103942540A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410143345.2A
Other languages
Chinese (zh)
Inventor
张永良
方珊珊
谢瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HANGZHOU JINGLIANWEN TECHNOLOGY Co Ltd
Original Assignee
HANGZHOU JINGLIANWEN TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HANGZHOU JINGLIANWEN TECHNOLOGY Co Ltd
Priority to CN201410143345.2A
Publication of CN103942540A
Legal status: Pending

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a false fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification, comprising the following steps: curvelet transformation; curvelet reconstruction; curvelet coefficient feature extraction, including coefficient energy, entropy and similar measures as well as first-order coefficient statistics; texture feature extraction, including first-order statistics, the gray-level co-occurrence matrix and MRF features; classifier training; classifier performance evaluation; and false fingerprint detection, in which samples are tested with the trained classifiers. The algorithm is suitable for various fingerprint acquisition instruments with a resolution of 500 dpi; it analyzes the high-frequency noise information of the image and the texture information after denoising separately, quantifying the noise and texture differences between true and false fingerprints.

Description

False fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification
Technical Field
The invention relates to the fields of image processing, pattern recognition and related technologies, and in particular to a false fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification, which aims to examine fingerprints so as to distinguish true fingerprints from false ones.
Background
A false fingerprint generally refers to a fingerprint made of materials such as silica gel, latex or plasticine that imitates the texture of a human fingerprint. The algorithm mainly comprises the key stages of image processing, feature extraction, classifier training and image classification.
With economic and social development, biometric identification technology, especially fingerprint technology, is being widely used. However, the appearance of false fingerprints made of cheap materials, and improvements in the techniques for making them, pose security risks to fingerprint identification. Research shows that a traditional fingerprint acquisition instrument can be deceived by a fake fingerprint film fabricated from a user's latent fingerprint out of cheap materials such as silica gel, gelatin or plasticine. An attempt to gain unauthorized access using a false fingerprint carrying the identity information of an authorized user is known as spoofing. Such attacks on the acquisition instrument are often employed when someone wishes to conceal their true identity or to obtain the privileges of an authorized user. To resist false-fingerprint attacks on fingerprint identification systems, false fingerprint detection technology has been developed. False fingerprint detection judges whether a fingerprint sample comes from a live finger, and detection methods fall into two types. The first uses characteristics such as finger temperature, skin conductivity and pulse oximetry, which can be detected by adding extra hardware to the fingerprint acquisition instrument; since this increases the cost of the instrument, it is called hardware-based false fingerprint detection. The second performs additional processing on the fingerprint sample image, such as gray-level analysis and texture feature analysis, to detect liveness information in the fingerprint image, and is called the software-based approach. Software-based methods include static and dynamic feature detection: static features are extracted from one or more images (the finger is placed on the acquisition instrument one or more times), while dynamic features are extracted from multiple fingerprint image frames (the finger rests on the instrument for a period of time so that a sequence of images can be acquired and analyzed). The software-based approach is low cost, less intrusive to the user, and can be used with existing fingerprint acquisition instruments. Research on software-based false fingerprint detection therefore has great practical value and significance for wider adoption.
Moon proposed a false fingerprint detection method that analyzes the surface roughness of high-resolution fingertip images. The method is only effective for high-resolution acquisition instruments (1000 dpi and above) and has low applicability to the common 500 dpi instruments popular in the market. Owing to sweat pores and perspiration, the gray level along the ridges of a real fingerprint changes markedly, whereas the ridge gray level of a fake fingerprint made of artificial materials such as gelatin or silica gel changes slowly. Based on this idea, Nikam and Agarwal proposed a texture-based method that analyzes the liveness of a fingerprint image using the gray levels associated with fingerprint pixels. The method performs well when the center point (core point) is accurately located, but existing core-point detection algorithms do not perform well on low-quality images or on fingerprints that are too dry or too wet. Abhyankar and Schuckers proposed a method based on multi-resolution texture analysis and texture frequency analysis that uses different texture characteristics to quantify how the gray-level distribution changes as the physical structure changes. However, the calculation of local ridge frequency is influenced by weather and depends on skin condition, so this method is limited in practical application.
At present, texture analysis is the mainstream method for false fingerprint detection; filtering and denoising are generally performed on the fingerprint image before texture analysis, and common texture features include first-order statistics, the Local Binary Pattern (LBP), and so on. However, the LBP operator is limited by its small spatial support area, and performing pure texture analysis while discarding analysis of the noise differences in the acquired images, which arise from the different materials of true and false fingerprints, is not advisable.
Disclosure of Invention
The invention aims to solve the above problems and provides a false fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification. The algorithm is suitable for various fingerprint acquisition instruments with a resolution of 500 dpi; it analyzes the high-frequency noise information of the image and the denoised texture information separately, quantifying the noise and texture differences between true and false fingerprints, and compared with the LBP algorithm it better expresses the tiny deviation between the noise distributions of true and false fingerprints caused by the materials used to make false fingerprints.
The technical conception of the invention is as follows. Currently the mainstream method of false fingerprint detection is texture analysis; for example, following T. Ojala's "A comparative study of texture measures with classification based on featured distributions", false fingerprint detection has been analyzed using Local Binary Pattern (LBP) features. LBP is an effective texture description operator that measures and extracts local texture information of an image and is invariant to illumination changes. The textures of true and false fingerprints show no obvious difference, but their forms of expression differ slightly, so fingerprint images can be quantitatively analyzed by extracting LBP features to distinguish true from false fingerprints; the test results of that algorithm showed an average false acceptance rate of 14.85%. However, the LBP operator is limited by its small spatial support region, and the method performs simple texture analysis while abandoning analysis of the noise differences in the acquired images caused by the different materials of true and false fingerprints, which is also inadvisable. Because the material of a false fingerprint differs from human skin, the noise distribution of the acquired fingerprint image usually differs considerably; the invention therefore provides a false fingerprint detection algorithm that performs SVM-KNN classification using curvelet coefficient features and the texture features of the curvelet-reconstructed image. First, the fingerprint image is curvelet-transformed and coefficient features of each scale and direction domain are extracted. The curvelet coefficients can be divided into several scale layers: the innermost, first layer is called the Coarse scale layer and is a matrix of low-frequency coefficients; the outermost layer is called the Fine scale layer and is a matrix of high-frequency coefficients; the middle layers are called Detail scale layers, each a matrix of medium-high-frequency coefficients. The high-frequency and medium-high-frequency coefficients in the Fine and Detail layers contain the noise information of the image. The fingerprint image is then reconstructed, and texture features such as first-order statistics, the gray-level co-occurrence matrix (GLCM) and Markov random field (MRF) features are extracted together with the coefficient features to form a feature vector. Training is then performed with an SVM (support vector machine), and because the SVM is unstable when classifying samples near the hyperplane, SVM-KNN (support vector machine - K nearest neighbor) classification is introduced to detect false fingerprints.
In order to achieve the purpose, the invention adopts the following technical scheme:
The false fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification comprises the following steps:
S1, curvelet transformation
Discrete curvelet transformation is applied to the target image using the wrapping algorithm to obtain the curvelet coefficients c(j, l, k);
S2, curvelet reconstruction
Denoising is performed first, and curvelet reconstruction then yields a reconstructed image f;
S3, coefficient feature extraction
Coefficient features are extracted from the curvelet coefficients c(j, l, k) as follows:
S3.1, the curvelet coefficients are divided into several scale layers: the innermost, first layer is called the Coarse scale layer and is a matrix of low-frequency coefficients; the outermost layer is called the Fine scale layer and is a matrix of high-frequency coefficients; the middle layers are called Detail scale layers, each layer being a matrix of medium-high-frequency coefficients. The high-frequency and medium-high-frequency coefficients in the Fine and Detail layers contain the noise information of the image, so coefficient features are extracted from the Fine and Detail layers: the energy E, maximum MAX, minimum MIN, Mean and variance Var of each layer's coefficients are computed, the sum of squared absolute values of each layer's coefficients being taken as its energy;
S3.2, first-order statistical features are extracted from each layer of coefficients; the first-order measures are:
energy: $e=\sum_{n=0}^{N-1}H(n)^2$,
entropy: $s=-\sum_{n=0}^{N-1}H(n)\log H(n)$,
variance: $\sigma^2=\sum_{n=0}^{N-1}(n-\mu)^2H(n)$,
skewness: $\gamma_1=\frac{1}{\sigma^3}\sum_{n=0}^{N-1}(n-\mu)^3H(n)$,
kurtosis: $\gamma_2=\frac{1}{\sigma^4}\sum_{n=0}^{N-1}(n-\mu)^4H(n)$;
S4, texture feature extraction
Three types of texture features are extracted from the reconstructed image f, namely first-order statistics, the gray-level co-occurrence matrix, and MRF (Markov random field) features, as follows:
S4.1, the degree of variation between pixels is computed through the histogram and first-order statistics are extracted, quantifying the change in the gray-level distribution as the physical structure of the image changes; this measures the probability of a given gray value occurring at a random position in the image, and the correlation between pixels can indicate whether a fingerprint is genuine;
S4.2, for a pixel located at (i, k) with gray level x, count the number of occurrences p(x, y, d, θ) of a pixel with gray level y at distance d in direction θ, i.e. at position $(i+D_i, k+D_k)$:

$$p(x,y,d,\theta)=\#\bigl\{[(i,k),(i+D_i,k+D_k)]\ \big|\ f(i,k)=x,\ f(i+D_i,k+D_k)=y\bigr\}$$

where x, y = 1, 2, …, L denote the gray levels in the image;
i, k = 1, 2, …, K denote pixel coordinates;
d is the step length for generating the gray-level co-occurrence matrix;
$D_i$, $D_k$ are the position offsets;
the generation direction θ can be any direction, so co-occurrence matrices in different directions are generated; the gray-level co-occurrence matrix is normalized:

$$G(x,y)=\frac{p(x,y)}{\sum_{x=1}^{L}\sum_{y=1}^{L}p(x,y)}$$

The distance d is taken as 1 to 4 and θ as the four directions 0°, 45°, 90° and 135°, giving 16 gray-level co-occurrence matrices in total; the corresponding energy, contrast, entropy, local stability, autocorrelation and dissimilarity are then calculated:
energy:

$$\mathrm{ASM}=\sum_{x=1}^{L}\sum_{y=1}^{L}\bigl(G(x,y)\bigr)^2$$

contrast:

$$\mathrm{COM}=\sum_{n=0}^{L-1}n^2\Bigl\{\sum_{|x-y|=n}G(x,y)\Bigr\}$$

entropy:

$$\mathrm{ENT}=-\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)\log G(x,y)$$

local stability:

$$\mathrm{IDM}=\sum_{x=1}^{L}\sum_{y=1}^{L}\frac{G(x,y)}{1+(x-y)^2}$$

autocorrelation:

$$\mathrm{COR}=\sum_{x=1}^{L}\sum_{y=1}^{L}\frac{xy\,G(x,y)-\mu_x\mu_y}{s_x s_y}$$

dissimilarity:

$$\mathrm{DIS}=\sum_{x=1}^{L}\sum_{y=1}^{L}|x-y|\,G(x,y)$$

where

$$\mu_x=\sum_{x=1}^{L}\sum_{y=1}^{L}x\,G(x,y),\qquad \mu_y=\sum_{x=1}^{L}\sum_{y=1}^{L}y\,G(x,y)$$

$$s_x^2=\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)(x-\mu_x)^2,\qquad s_y^2=\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)(y-\mu_y)^2$$
These features effectively describe the texture of the fingerprint image and have good discriminative capability; the gray-level co-occurrence matrix reflects comprehensive information about the image gray levels concerning variation amplitude, direction and adjacent intervals, and is the basis for analyzing the local patterns and arrangement rules of the image;
S4.3, from the structural-analysis perspective, MRF texture features seek the interdependencies between texture primitives; these primitive relationships can be represented by a conditional probability model. The MRF is a two-dimensional lattice in which each point can be described by a probability model; the MRF assumption is that the pixel value of each point in the lattice depends only on the pixel values in its neighborhood, so the Markov random field can be described by the following local conditional probability density (PDF):

$$p(z(s)\mid z(m),\ m=1,2,\dots,N\times M,\ s\neq m)=p(z(s)\mid z(m),\ m\in N(s))$$

where z(s) is the pixel value of point s in the N×M lattice and N(s) is the neighborhood set of pixels centered at s, so the value of p(z(s)) is affected only by the z(m) with m ∈ N(s). If the PDF obeys a Gaussian distribution, the Markov random field is a Gauss-Markov random field (GMRF), and the neighborhood information is used to estimate the gray value of a pixel through the symmetric difference equation:

$$z(s)=\sum_{m}\beta_{s,m}\bigl[z(s+m)+z(s-m)\bigr]+e_s$$

where $\beta_{s,m}$ is the weight with which each neighborhood pixel contributes to the gray value of the center pixel, $e_s$ is Gaussian noise with mean 0, and m is the offset from the center point s; in matrix notation this becomes:

$$z(s)=\beta^{T}Q_s+e_s$$

where β is the vector composed of the $\beta_{s,m}$, and $Q_s$ is defined as follows:

$$Q_s=\begin{bmatrix}z(s+m_1)+z(s-m_1)\\ z(s+m_2)+z(s-m_2)\\ z(s+m_3)+z(s-m_3)\\ \vdots\end{bmatrix}$$

The texture feature quantity is calculated by the least-squares method:

$$\beta=\Bigl[\sum_{s\in f}Q_sQ_s^{T}\Bigr]^{-1}\Bigl[\sum_{s\in f}Q_sz(s)\Bigr]$$

GMRF is used to make a second-order parameter estimate on the fingerprint image, any 3 × 3 window in the image being the sampling template range, which determines $Q_s$; f denotes the fingerprint image, and each 3 × 3 window yields a characteristic value β. The MRF characteristic value is insensitive to gray-level changes; the MRF reflects different aggregations of texture elements, textures of different aggregations correspond to different statistical parameters, and this mathematical MRF model describes the random characteristics of texture well;
S5, SVM-KNN classifier formation
For a sample α to be identified, the distances from α to the two classes' support-vector representative points $\alpha^{+}$ and $\alpha^{-}$ are calculated; when the distance difference is smaller than a given threshold, i.e. α is close to the separating interface, the SVM, which classifies by computing only the distance to one representative point of each class, easily misclassifies, so the test sample is classified by KNN, and the classification result is obtained by calculating the distance between the sample to be identified and each support vector;
S6, classifier training
The relevant parameters of the classification model are estimated by analyzing a training set with known classes, and the class of the test data is deduced from them; training is performed with LIBSVM, a training model A is obtained from the feature vectors, and through the training model the test samples are classified and the results scored by SVM-KNN;
S7, classifier performance evaluation
The fingerprint image database is divided into two parts, 50% for training and 50% for testing; testing shows that the best ε value differs somewhat between databases, and ε = 0.2 is used in the invention; training and classifying with the SVM-KNN classifier improves the classification performance to a certain extent over a plain SVM classifier;
S8, false fingerprint detection
The image to be detected is processed through steps S1, S2 and S3, and the resulting feature vector is classified by the classifier.
The invention has the following beneficial effects: the texture and noise information of the image are analyzed from multiple aspects, the SVM classifier is fused with KNN to improve on the performance of a single SVM classifier, and the classification results are highly reliable.
Drawings
FIG. 1 is a flow chart of the false fingerprint image detection algorithm in the false fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification.
FIG. 2 is a schematic diagram of SVM-KNN classification in the false fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further explained below with reference to the drawings.
The false fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification comprises the following steps (see FIG. 1):
S1, curvelet transformation: discrete curvelet transformation is applied to the target image using the wrapping algorithm to obtain the curvelet coefficients c(j, l, k);
where j denotes the scale, l denotes the angle, and k denotes the kth coefficient.
S2, curvelet reconstruction: the curvelet reconstruction process is the inverse curvelet transform; denoising is carried out first, and curvelet reconstruction then yields a reconstructed image f.
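As an illustration of steps S1 and S2, the following minimal Python sketch treats the forward and inverse curvelet transforms as caller-supplied functions; fdct2 and ifdct2 are assumed wrappers around a wrapping-based FDCT implementation (e.g., CurveLab bindings), and the hard-threshold denoising rule is a common choice rather than one fixed by the patent:

```python
import numpy as np

def curvelet_denoise_reconstruct(image, fdct2, ifdct2, k=3.0):
    """Hard-threshold the Fine/Detail curvelet coefficients, then invert.

    fdct2(image) is assumed to return nested lists c with c[j][l] a 2-D
    array of coefficients at scale j and angle l (the c(j, l, k) of S1);
    ifdct2(c) is the inverse transform producing the reconstructed image f.
    """
    c = fdct2(image)
    for j in range(1, len(c)):           # j = 0 is the Coarse (low-pass) layer
        for l in range(len(c[j])):
            band = c[j][l]
            sigma = np.median(np.abs(band)) / 0.6745   # robust noise estimate
            band[np.abs(band) < k * sigma] = 0.0       # hard threshold
    return ifdct2(c)                     # reconstructed image f
```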
S3, coefficient feature extraction: coefficient features are extracted from the curvelet coefficients c(j, l, k) as follows:
S3.1, the curvelet coefficients are divided into several scale layers: the innermost, first layer is called the Coarse scale layer and is a matrix of low-frequency coefficients; the outermost layer is called the Fine scale layer and is a matrix of high-frequency coefficients; the middle layers are called Detail scale layers, each layer being a matrix of medium-high-frequency coefficients. The high-frequency and medium-high-frequency coefficients in the Fine and Detail layers contain the noise information of the image, so coefficient features are extracted from the Fine and Detail layers: the energy E, maximum MAX, minimum MIN, Mean and variance Var of each layer's coefficients are computed, the sum of squared absolute values of each layer's coefficients being taken as its energy.
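A companion sketch for S3.1, reusing the nested coefficient structure c[j][l] assumed above; absolute values are used throughout, matching the definition of energy as the sum of squared absolute values:

```python
import numpy as np

def layer_statistics(c):
    """Energy E, MAX, MIN, Mean and Var for each Detail/Fine scale layer."""
    feats = []
    for j in range(1, len(c)):           # skip the Coarse layer (j = 0)
        coeffs = np.concatenate(
            [np.abs(c[j][l]).ravel() for l in range(len(c[j]))])
        feats += [np.sum(coeffs ** 2),   # energy: sum of squared |coefficients|
                  coeffs.max(),          # MAX
                  coeffs.min(),          # MIN
                  coeffs.mean(),         # Mean
                  coeffs.var()]          # Var
    return np.array(feats)
```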
S3.2, first-order statistical features are extracted from each layer of coefficients; the first-order measures are:
energy: $e=\sum_{n=0}^{N-1}H(n)^2$,
entropy: $s=-\sum_{n=0}^{N-1}H(n)\log H(n)$,
variance: $\sigma^2=\sum_{n=0}^{N-1}(n-\mu)^2H(n)$,
skewness: $\gamma_1=\frac{1}{\sigma^3}\sum_{n=0}^{N-1}(n-\mu)^3H(n)$,
kurtosis: $\gamma_2=\frac{1}{\sigma^4}\sum_{n=0}^{N-1}(n-\mu)^4H(n)$;
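The five first-order measures above translate directly to NumPy; a minimal sketch over a normalized histogram H(n), assuming the $1/\sigma^4$ kurtosis normalization (the printed formula is garbled) and skipping empty bins in the entropy to avoid log 0:

```python
import numpy as np

def first_order_stats(values, n_bins=256):
    """Energy, entropy, variance, skewness and kurtosis of the histogram
    of `values` (e.g., the coefficients of one scale layer, flattened)."""
    H, _ = np.histogram(values, bins=n_bins)
    H = H / H.sum()                      # normalize so that sum H(n) = 1
    n = np.arange(n_bins)
    mu = np.sum(n * H)
    var = np.sum((n - mu) ** 2 * H)
    sigma = np.sqrt(var)
    nz = H > 0                           # skip empty bins in the entropy
    return np.array([
        np.sum(H ** 2),                            # energy e
        -np.sum(H[nz] * np.log(H[nz])),            # entropy s
        var,                                       # variance sigma^2
        np.sum((n - mu) ** 3 * H) / sigma ** 3,    # skewness gamma_1
        np.sum((n - mu) ** 4 * H) / sigma ** 4,    # kurtosis gamma_2 (assumed 1/sigma^4)
    ])
```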
S4, texture feature extraction: three types of texture features are extracted from the reconstructed image f, namely first-order statistics, the gray-level co-occurrence matrix, and MRF (Markov random field) features, as follows:
S4.1, the degree of variation between pixels is computed through the histogram and first-order statistics are extracted, the aim being to quantify the change in the gray-level distribution as the physical structure of the image changes; first-order statistics can be applied to general one-dimensional data as well as to two-dimensional data matrices such as images, measuring the probability of a given gray value occurring at a random position in the image, and the correlation between pixels can indicate whether a fingerprint is genuine.
S4.2, for a pixel located at (i, k) with gray level x, count the number of occurrences p(x, y, d, θ) of a pixel with gray level y at distance d in direction θ, i.e. at position $(i+D_i, k+D_k)$:

$$p(x,y,d,\theta)=\#\bigl\{[(i,k),(i+D_i,k+D_k)]\ \big|\ f(i,k)=x,\ f(i+D_i,k+D_k)=y\bigr\}$$

where x, y = 1, 2, …, L denote the gray levels in the image;
i, k = 1, 2, …, K denote pixel coordinates;
d is the step length for generating the gray-level co-occurrence matrix;
$D_i$, $D_k$ are the position offsets;
the generation direction θ can be any direction, so co-occurrence matrices in different directions are generated; the gray-level co-occurrence matrix is normalized:

$$G(x,y)=\frac{p(x,y)}{\sum_{x=1}^{L}\sum_{y=1}^{L}p(x,y)}$$

The distance d is taken as 1 to 4 and θ as the four directions 0°, 45°, 90° and 135°, giving 16 gray-level co-occurrence matrices in total; the corresponding energy, contrast, entropy, local stability, autocorrelation and dissimilarity are then calculated:
energy:

$$\mathrm{ASM}=\sum_{x=1}^{L}\sum_{y=1}^{L}\bigl(G(x,y)\bigr)^2$$

contrast:

$$\mathrm{COM}=\sum_{n=0}^{L-1}n^2\Bigl\{\sum_{|x-y|=n}G(x,y)\Bigr\}$$

entropy:

$$\mathrm{ENT}=-\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)\log G(x,y)$$

local stability:

$$\mathrm{IDM}=\sum_{x=1}^{L}\sum_{y=1}^{L}\frac{G(x,y)}{1+(x-y)^2}$$

autocorrelation:

$$\mathrm{COR}=\sum_{x=1}^{L}\sum_{y=1}^{L}\frac{xy\,G(x,y)-\mu_x\mu_y}{s_x s_y}$$

dissimilarity:

$$\mathrm{DIS}=\sum_{x=1}^{L}\sum_{y=1}^{L}|x-y|\,G(x,y)$$

where

$$\mu_x=\sum_{x=1}^{L}\sum_{y=1}^{L}x\,G(x,y),\qquad \mu_y=\sum_{x=1}^{L}\sum_{y=1}^{L}y\,G(x,y)$$

$$s_x^2=\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)(x-\mu_x)^2,\qquad s_y^2=\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)(y-\mu_y)^2$$
These features effectively describe the texture of the fingerprint image and have good discriminative capability; the gray-level co-occurrence matrix reflects comprehensive information about the image gray levels concerning variation amplitude, direction and adjacent intervals, and is the basis for analyzing the local patterns and arrangement rules of the image;
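A minimal sketch of this GLCM step using scikit-image; graycomatrix/graycoprops are the scikit-image (>= 0.19) names, 'homogeneity' stands in for the local-stability (IDM) measure, entropy (not provided by graycoprops) is computed directly, and the symmetric=True choice is an assumption, not fixed by the patent:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(f):
    """96-dim GLCM feature vector from the reconstructed image f (uint8):
    16 normalized matrices (d = 1..4; theta = 0, 45, 90, 135 degrees),
    six measures per matrix."""
    distances = [1, 2, 3, 4]
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    G = graycomatrix(f, distances, angles, levels=256,
                     symmetric=True, normed=True)   # shape (256, 256, 4, 4)
    feats = [graycoprops(G, p).ravel()              # 16 values per measure
             for p in ("ASM", "contrast", "homogeneity",
                       "correlation", "dissimilarity")]
    logG = np.zeros_like(G)
    np.log(G, out=logG, where=G > 0)                # log only nonzero entries
    feats.append(-np.sum(G * logG, axis=(0, 1)).ravel())   # ENT per matrix
    return np.concatenate(feats)
```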
S4.3, from the structural-analysis perspective, MRF texture features seek the interdependencies between texture primitives; these primitive relationships can be represented by a conditional probability model. The MRF is a two-dimensional lattice in which each point can be described by a probability model; the MRF assumption is that the pixel value of each point in the lattice depends only on the pixel values in its neighborhood, so the Markov random field can be described by the following local conditional probability density (PDF):

$$p(z(s)\mid z(m),\ m=1,2,\dots,N\times M,\ s\neq m)=p(z(s)\mid z(m),\ m\in N(s))$$

where z(s) is the pixel value of point s in the N×M lattice and N(s) is the neighborhood set of pixels centered at s, so the value of p(z(s)) is affected only by the z(m) with m ∈ N(s). If the PDF obeys a Gaussian distribution, the Markov random field is a Gauss-Markov random field (GMRF), and the neighborhood information is used to estimate the gray value of a pixel through the symmetric difference equation:

$$z(s)=\sum_{m}\beta_{s,m}\bigl[z(s+m)+z(s-m)\bigr]+e_s$$

where $\beta_{s,m}$ is the weight with which each neighborhood pixel contributes to the gray value of the center pixel, $e_s$ is Gaussian noise with mean 0, and m is the offset from the center point s; in matrix notation this becomes:

$$z(s)=\beta^{T}Q_s+e_s$$

where β is the vector composed of the $\beta_{s,m}$, and $Q_s$ is defined as follows:

$$Q_s=\begin{bmatrix}z(s+m_1)+z(s-m_1)\\ z(s+m_2)+z(s-m_2)\\ z(s+m_3)+z(s-m_3)\\ \vdots\end{bmatrix}$$

The texture feature quantity is calculated by the least-squares method:

$$\beta=\Bigl[\sum_{s\in f}Q_sQ_s^{T}\Bigr]^{-1}\Bigl[\sum_{s\in f}Q_sz(s)\Bigr]$$

GMRF is used to make a second-order parameter estimate on the fingerprint image, any 3 × 3 window in the image being the sampling template range, which determines $Q_s$; f denotes the fingerprint image, and each 3 × 3 window yields a characteristic value β. The MRF characteristic value is insensitive to gray-level changes; the MRF reflects different aggregations of texture elements, textures of different aggregations correspond to different statistical parameters, and this mathematical MRF model describes the random characteristics of texture well.
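A sketch of the least-squares GMRF estimate; the patent fixes the 3 × 3 sampling template but not the ordering of the offsets $m_k$, so the four symmetric neighbor pairs used below are an assumed layout, and a single global β is estimated over the whole image f:

```python
import numpy as np

def gmrf_features(f):
    """Estimate beta = [sum Q_s Q_s^T]^(-1) [sum Q_s z(s)] over all 3x3
    windows of the grayscale image f (2-D float array)."""
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]   # m_k; the mirror -m_k is implicit
    H, W = f.shape
    A = np.zeros((4, 4))                          # accumulates Q_s Q_s^T
    b = np.zeros(4)                               # accumulates Q_s z(s)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            q = np.array([f[i + di, j + dj] + f[i - di, j - dj]
                          for di, dj in offsets])  # Q_s for center s = (i, j)
            A += np.outer(q, q)
            b += q * f[i, j]
    return np.linalg.solve(A, b)                  # beta, the texture feature
```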
S5, forming the SVM-KNN classifier: the traditional SVM classification algorithm has a defect: when a sample's distance to the classification hyperplane is smaller than a given threshold ε, classification accuracy drops. In SVM classification, the accuracy of the SVM depends on how accurately the representative points of each class of support vectors are determined, so samples prone to error are classified by KNN to improve accuracy. Specifically, for a sample α to be identified, the distances from α to the two classes' support-vector representative points $\alpha^{+}$ and $\alpha^{-}$ are calculated; when the distance difference is smaller than the given threshold, i.e. α is close to the separating interface, the SVM, which classifies by computing only the distance to one representative point of each class, easily misclassifies.
S6, classifier training: the relevant parameters of the classification model are estimated by analyzing a training set with known classes, and the class of the test data is deduced from them; training is performed with LIBSVM, a training model A is obtained from the feature vectors, and through the training model the test samples are classified and the results scored by SVM-KNN.
S7, evaluating the performance of the classifier: the fingerprint image database is divided into two parts, 50% for training and 50% for testing. Testing shows that the best ε value differs somewhat between databases; here ε = 0.2. Training and classifying with the SVM-KNN classifier improves the classification performance to a certain extent over a plain SVM classifier.
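A sketch of this 50/50 evaluation protocol; the feature matrix X and labels y (1 = live, 0 = fake) are assumed to come from the feature pipeline above, and `model` is any classifier with fit/predict (such as the SVM-KNN sketch below):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def evaluate(model, X, y, seed=0):
    """Train on 50% of the database, test on the other 50%, and report
    the false acceptance rate (fakes accepted) and false rejection rate
    (live fingers rejected), as in Tables 1-3."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, random_state=seed, stratify=y)
    pred = model.fit(X_tr, y_tr).predict(X_te)
    far = np.mean(pred[y_te == 0] == 1)   # fake classified as live
    frr = np.mean(pred[y_te == 1] == 0)   # live classified as fake
    return far, frr
```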
Referring to FIG. 2, the SVM-KNN classification principle is as follows:
the traditional SVM classification algorithm has a defect: when a sample's distance to the classification hyperplane is smaller than a given threshold ε, classification accuracy drops. In SVM classification, the accuracy of the SVM depends on how accurately the representative points of each class of support vectors are determined, so samples prone to error are classified by KNN to improve accuracy. Specifically, for a sample α to be identified, the distances from α to the two classes' support-vector representative points $\alpha^{+}$ and $\alpha^{-}$ are calculated; when the distance difference is smaller than the given threshold, i.e. α is close to the separating interface and falls into region III (as shown in FIG. 2), the SVM, which classifies by computing only the distance to one representative point of each class, easily misclassifies; KNN is therefore used to classify the test sample, and the classification result is obtained by calculating the distance between the sample to be identified and each support vector.
S8, false fingerprint detection
The image to be detected is processed through steps S1, S2 and S3, and the resulting feature vector is classified by the trained classifier.
Tables 1 to 3 give the results of this test on the LivDet2011 competition image library.
TABLE 1: false acceptance rate of the algorithm for each false fingerprint dataset
TABLE 2: false rejection rate of the algorithm for true fingerprint data
TABLE 3: average false acceptance rate of the algorithm for false fingerprints
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions under the principle of the present invention belong to its protection scope. Those skilled in the art will readily see modifications to the disclosed embodiments, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention.

Claims (1)

1. A false fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification, characterized by comprising the following steps:
S1, curvelet transformation
Discrete curvelet transformation is applied to the target image using the wrapping algorithm to obtain the curvelet coefficients c(j, l, k);
S2, curvelet reconstruction
Denoising is performed first, and curvelet reconstruction then yields a reconstructed image f;
S3, coefficient feature extraction
Coefficient features are extracted from the curvelet coefficients c(j, l, k) as follows:
S3.1, the curvelet coefficients are divided into several scale layers: the innermost, first layer is called the Coarse scale layer and is a matrix of low-frequency coefficients; the outermost layer is called the Fine scale layer and is a matrix of high-frequency coefficients; the middle layers are called Detail scale layers, each layer being a matrix of medium-high-frequency coefficients. The high-frequency and medium-high-frequency coefficients in the Fine and Detail layers contain the noise information of the image, so coefficient features are extracted from the Fine and Detail layers: the energy E, maximum MAX, minimum MIN, Mean and variance Var of each layer's coefficients are computed, the sum of squared absolute values of each layer's coefficients being taken as its energy;
S3.2, first-order statistical features are extracted from each layer of coefficients; the first-order measures are:
energy: $e=\sum_{n=0}^{N-1}H(n)^2$,
entropy: $s=-\sum_{n=0}^{N-1}H(n)\log H(n)$,
variance: $\sigma^2=\sum_{n=0}^{N-1}(n-\mu)^2H(n)$,
skewness: $\gamma_1=\frac{1}{\sigma^3}\sum_{n=0}^{N-1}(n-\mu)^3H(n)$,
kurtosis: $\gamma_2=\frac{1}{\sigma^4}\sum_{n=0}^{N-1}(n-\mu)^4H(n)$;
S4, texture feature extraction
Three types of texture features are extracted from the reconstructed image f, namely first-order statistics, the gray-level co-occurrence matrix, and MRF (Markov random field) features, as follows:
S4.1, the degree of variation between pixels is computed through the histogram and first-order statistics are extracted, quantifying the change in the gray-level distribution as the physical structure of the image changes; this measures the probability of a given gray value occurring at a random position in the image, and the correlation between pixels can indicate whether a fingerprint is genuine;
S4.2, for a pixel located at (i, k) with gray level x, count the number of occurrences p(x, y, d, θ) of a pixel with gray level y at distance d in direction θ, i.e. at position $(i+D_i, k+D_k)$:

$$p(x,y,d,\theta)=\#\bigl\{[(i,k),(i+D_i,k+D_k)]\ \big|\ f(i,k)=x,\ f(i+D_i,k+D_k)=y\bigr\}$$

where x, y = 1, 2, …, L denote the gray levels in the image;
i, k = 1, 2, …, K denote pixel coordinates;
d is the step length for generating the gray-level co-occurrence matrix;
$D_i$, $D_k$ are the position offsets;
the generation direction θ can be any direction, so co-occurrence matrices in different directions are generated; the gray-level co-occurrence matrix is normalized:

$$G(x,y)=\frac{p(x,y)}{\sum_{x=1}^{L}\sum_{y=1}^{L}p(x,y)}$$

The distance d is taken as 1 to 4 and θ as the four directions 0°, 45°, 90° and 135°, giving 16 gray-level co-occurrence matrices in total; the corresponding energy, contrast, entropy, local stability, autocorrelation and dissimilarity are then calculated:
energy:

$$\mathrm{ASM}=\sum_{x=1}^{L}\sum_{y=1}^{L}\bigl(G(x,y)\bigr)^2$$

contrast:

$$\mathrm{COM}=\sum_{n=0}^{L-1}n^2\Bigl\{\sum_{|x-y|=n}G(x,y)\Bigr\}$$

entropy:

$$\mathrm{ENT}=-\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)\log G(x,y)$$

local stability:

$$\mathrm{IDM}=\sum_{x=1}^{L}\sum_{y=1}^{L}\frac{G(x,y)}{1+(x-y)^2}$$

autocorrelation:

$$\mathrm{COR}=\sum_{x=1}^{L}\sum_{y=1}^{L}\frac{xy\,G(x,y)-\mu_x\mu_y}{s_x s_y}$$

dissimilarity:

$$\mathrm{DIS}=\sum_{x=1}^{L}\sum_{y=1}^{L}|x-y|\,G(x,y)$$

where

$$\mu_x=\sum_{x=1}^{L}\sum_{y=1}^{L}x\,G(x,y),\qquad \mu_y=\sum_{x=1}^{L}\sum_{y=1}^{L}y\,G(x,y)$$

$$s_x^2=\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)(x-\mu_x)^2,\qquad s_y^2=\sum_{x=1}^{L}\sum_{y=1}^{L}G(x,y)(y-\mu_y)^2$$
These texture features effectively describe the texture of the fingerprint image and have good discriminating power; the gray-level co-occurrence matrix of the image reflects comprehensive information about the amplitude, direction and adjacent interval of gray-level variation, and is the basis for analyzing the local patterns of the image and their arrangement rules, as the sketch below illustrates.
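For concreteness, the following is a minimal NumPy sketch of the four statistics above, assuming G is an L×L gray-level co-occurrence matrix normalized so that its entries sum to 1; the function name and the normalization convention are illustrative assumptions, not taken from the patent.

    import numpy as np

    def glcm_features(G):
        # G: L x L gray-level co-occurrence matrix, normalized to sum to 1
        L = G.shape[0]
        x, y = np.meshgrid(np.arange(1, L + 1), np.arange(1, L + 1),
                           indexing="ij")          # gray-level index grids
        eps = 1e-12                                # guard log(0) on empty cells
        ent = -np.sum(G * np.log(G + eps))         # entropy ENT
        idm = np.sum(G / (1.0 + (x - y) ** 2))     # homogeneity IDM
        mu_x, mu_y = np.sum(x * G), np.sum(y * G)  # marginal means
        s_x = np.sqrt(np.sum(G * (x - mu_x) ** 2)) # marginal standard deviations
        s_y = np.sqrt(np.sum(G * (y - mu_y) ** 2))
        cor = (np.sum(x * y * G) - mu_x * mu_y) / (s_x * s_y)  # correlation COR
        dis = np.sum(np.abs(x - y) * G)            # dissimilarity DIS
        return ent, idm, cor, dis

With a normalized G the four quantities follow directly from the formulas, so the sketch is a line-by-line transcription of the equations above.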
S4.3. From the perspective of structural analysis, MRF texture features seek the interdependencies among texture primitives, and these primitive relationships can be represented by a conditional probability model. The MRF is defined on a two-dimensional lattice in which every point is described by a probability model; the MRF assumption is that the pixel value at each lattice point depends only on the pixel values in its neighborhood, so the Markov random field is described by the following local conditional probability density function (PDF):

p(z(s) | z(m), m = 1, 2, …, N×M, s ≠ m) = p(z(s) | z(m), m ∈ N(s))

where z(s) is the pixel value at point s of the N×M lattice and N(s) is the neighborhood set of pixel points centered at s, so the value of p(z(s)) is affected only by the z(m) in that neighborhood. If the PDF obeys a Gaussian distribution, the Markov random field is a Gauss-Markov random field (GMRF), and the neighborhood information estimates the gray value of a pixel point through the symmetric difference equation:
z(s) = \sum_{m} \beta_{s,m} [z(s+m) + z(s-m)] + e_s

where \beta_{s,m} is the weight with which each symmetric pair of neighborhood pixels contributes to the gray value of the center pixel, e_s is Gaussian noise with mean 0, and m is the offset from the center point s; in matrix notation this becomes

z(s) = \beta^{T} Q_s + e_s

where \beta is the vector composed of the \beta_{s,m} and Q_s is defined as follows:

Q_s = [\, z(s+m_1)+z(s-m_1), \; z(s+m_2)+z(s-m_2), \; z(s+m_3)+z(s-m_3), \; \dots \,]^{T}

The texture feature vector is then computed by the least-squares method:

\beta = \Big[ \sum_{s \in f} Q_s Q_s^{T} \Big]^{-1} \Big[ \sum_{s \in f} Q_s \, z(s) \Big]
GMRF is used to perform second-order parameter estimation on the fingerprint image: every 3×3 window in the image is the sampling template range, which determines Q_s; f denotes the fingerprint image, and the β of each 3×3 window is one feature value. The MRF feature value is insensitive to gray-level changes; the MRF reflects different aggregations of texture primitives, and the textures of different aggregations correspond to different statistical parameters, so the stochastic character of the texture is well described by the MRF model;
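A minimal sketch of this estimation follows, assuming a second-order neighborhood in which the four offsets (0,1), (1,0), (1,1) and (1,−1) stand for the symmetric pairs (m, −m) of the 3×3 window; the offset ordering and function name are assumptions for illustration.

    import numpy as np

    # Symmetric 3x3 offsets: each (di, dj) stands for the pair (m, -m),
    # giving a 4-dimensional beta for the second-order GMRF.
    OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

    def gmrf_beta(f):
        # f: 2-D float array (a fingerprint image or one of its patches)
        f = np.asarray(f, dtype=np.float64)
        H, W = f.shape
        A = np.zeros((4, 4))                 # accumulates sum_s Q_s Q_s^T
        b = np.zeros(4)                      # accumulates sum_s Q_s z(s)
        for i in range(1, H - 1):            # keep the 3x3 window inside f
            for j in range(1, W - 1):
                Q = np.array([f[i + di, j + dj] + f[i - di, j - dj]
                              for di, dj in OFFSETS])
                A += np.outer(Q, Q)
                b += Q * f[i, j]
        return np.linalg.pinv(A) @ b         # beta; pinv guards flat patches

Applied per patch, this yields one β per window as the step describes; the pseudo-inverse is an illustrative safeguard so that flat regions do not make the normal matrix singular.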
S5. SVM-KNN classifier construction
For a sample α to be identified, the distances between α and the representative points α+ and α− of the two classes of support vectors are calculated. When the difference between these distances exceeds a given threshold ε, α is far from the separating interface and is easy to classify, so the SVM classifies it using only the distances to the two representative points; when the difference is smaller than ε, i.e. α is close to the separating interface, the test sample is classified by KNN, and the classification result is obtained by calculating the distances between the sample to be identified and each support vector;
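The decision rule can be sketched as follows, where the representative points α+ and α− are taken as the means of each class's support vectors; that choice of representative point, together with the function names, is an assumption for illustration rather than a detail fixed by the patent.

    import numpy as np

    def svm_knn_predict(x, svs, sv_labels, svm_decide, eps=0.2, k=5):
        # svs: support vectors; sv_labels: their +1/-1 labels;
        # svm_decide: callable returning the plain SVM label for x
        rep_pos = svs[sv_labels == 1].mean(axis=0)    # representative point a+
        rep_neg = svs[sv_labels == -1].mean(axis=0)   # representative point a-
        d_pos = np.linalg.norm(x - rep_pos)
        d_neg = np.linalg.norm(x - rep_neg)
        if abs(d_pos - d_neg) > eps:                  # far from the boundary:
            return svm_decide(x)                      # easy sample, plain SVM
        dists = np.linalg.norm(svs - x, axis=1)       # near the boundary:
        nearest = sv_labels[np.argsort(dists)[:k]]    # KNN over support vectors
        return 1 if nearest.sum() > 0 else -1

An odd k avoids ties in the KNN vote; only the ambiguous samples near the separating interface pay the cost of the distance computations over all support vectors.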
S6. Classifier training
The relevant parameters of the classification model are estimated by analyzing a training set with known classes, and the classes of the test data are then inferred from these parameters. Training is carried out with LIBSVM: the feature vectors produce a training model A, and the test samples are classified and scored with SVM-KNN through this training model;
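The patent names LIBSVM for training; the sketch below uses scikit-learn's SVC, which wraps LIBSVM internally, and the RBF kernel and its parameters are assumptions since the step does not fix them.

    from sklearn.svm import SVC

    def train_model(X, y):
        # X: feature vectors from the earlier steps; y: labels (+1 live, -1 fake)
        model = SVC(kernel="rbf", C=1.0, gamma="scale")  # SVC wraps LIBSVM
        model.fit(X, y)
        return model          # "training model A"

The fitted model's support_vectors_, support_ indices and predict output are exactly what the SVM-KNN rule sketched in S5 consumes.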
S7. Classifier performance evaluation
The fingerprint image database is divided into two parts, 50% for training and 50% for testing. Testing shows that the best value of ε differs somewhat from database to database; in the invention ε = 0.2. Training and classifying with the SVM-KNN classifier improves the classification performance of a plain SVM classifier to a certain extent;
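Continuing the sketches above, a 50/50 evaluation might look as follows, assuming X and y hold the feature vectors and ±1 labels for the whole database; the stratified split and fixed random seed are illustrative choices, not specified by the patent.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # X, y: features and +1/-1 labels for the whole database;
    # train_model and svm_knn_predict are the sketches above
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)   # 50/50 split
    model = train_model(X_tr, y_tr)
    svs = model.support_vectors_
    sv_y = np.asarray(y_tr)[model.support_]                # support-vector labels
    pred = np.array([svm_knn_predict(v, svs, sv_y,
                                     lambda u: model.predict([u])[0], eps=0.2)
                     for v in X_te])
    print("accuracy:", np.mean(pred == np.asarray(y_te)))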
S8. False fingerprint detection
The image to be detected is subjected to the operations of steps S1, S2 and S3, and the resulting feature vectors are classified by the trained classifier.
CN201410143345.2A 2014-04-10 2014-04-10 False fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification Pending CN103942540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410143345.2A CN103942540A (en) 2014-04-10 2014-04-10 False fingerprint detection algorithm based on curvelet texture analysis and SVM-KNN classification

Publications (1)

Publication Number Publication Date
CN103942540A true CN103942540A (en) 2014-07-23

Family

ID=51190204

Country Status (1)

Country Link
CN (1) CN103942540A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110026804A1 (en) * 2009-08-03 2011-02-03 Sina Jahanbin Detection of Textural Defects Using a One Class Support Vector Machine
CN101930549A (en) * 2010-08-20 2010-12-29 西安电子科技大学 Second generation curvelet transform-based static human detection method
CN103116744A (en) * 2013-02-05 2013-05-22 浙江工业大学 Fake fingerprint detection method based on markov random field (MRF) and support vector machine-k nearest neighbor (SVM-KNN) classification
CN103324944A (en) * 2013-06-26 2013-09-25 电子科技大学 Fake fingerprint detecting method based on SVM and sparse representation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Yang: "Application of Curvelet Transform in Face Recognition", China Excellent Master's Theses Full-text Database, Information Science and Technology series *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376320A (en) * 2014-11-13 2015-02-25 南京信息工程大学 Feature extraction method for detection of artificial fingerprints
CN106355190A (en) * 2015-07-13 2017-01-25 西北工业大学 Method for detecting spatial selective attention on basis of grey theories
CN106355190B (en) * 2015-07-13 2020-04-10 西北工业大学 Space selectivity attention detection method based on grey theory
CN105528591B (en) * 2016-01-14 2019-04-16 电子科技大学 Living body finger print recognition methods based on more quadrants coding
CN105528591A (en) * 2016-01-14 2016-04-27 电子科技大学 Living fingerprint identification method based on multi-quadrant coding
WO2017143571A1 (en) * 2016-02-25 2017-08-31 深圳市汇顶科技股份有限公司 Fingerprint identification method, device, and terminal
CN106104574A (en) * 2016-02-25 2016-11-09 深圳市汇顶科技股份有限公司 Fingerprint identification method, device and terminal
CN106104574B (en) * 2016-02-25 2018-06-12 深圳市汇顶科技股份有限公司 Fingerprint identification method, device and terminal
CN106295555A (en) * 2016-08-08 2017-01-04 深圳芯启航科技有限公司 A kind of detection method of vital fingerprint image
CN106951922A (en) * 2017-03-16 2017-07-14 太原理工大学 A kind of real-time screening system of astronomic graph picture based on SVMs
CN111201537B (en) * 2017-10-18 2023-11-17 指纹卡安娜卡敦知识产权有限公司 Differentiating live fingers from spoof fingers by machine learning in fingerprint analysis
CN111201537A (en) * 2017-10-18 2020-05-26 指纹卡有限公司 Distinguishing live fingers from spoofed fingers by machine learning in fingerprint analysis
CN107992800A (en) * 2017-11-10 2018-05-04 杭州晟元数据安全技术股份有限公司 A kind of fingerprint image quality determination methods based on SVM and random forest
CN109063572A (en) * 2018-07-04 2018-12-21 南京信息工程大学 It is a kind of based on multiple dimensioned and multireel lamination Fusion Features fingerprint activity test methods
CN109086718A (en) * 2018-08-02 2018-12-25 深圳市华付信息技术有限公司 Biopsy method, device, computer equipment and storage medium
CN109255318A (en) * 2018-08-31 2019-01-22 南京信息工程大学 Based on multiple dimensioned and multireel lamination Fusion Features fingerprint activity test methods
CN109472216A (en) * 2018-10-18 2019-03-15 中国人民解放军91977部队 Radiation source feature extraction and individual discrimination method based on signal non-Gaussian feature
CN109472216B (en) * 2018-10-18 2021-10-15 中国人民解放军91977部队 Radiation source feature extraction and individual identification method based on signal non-Gaussian characteristics
CN109617843B (en) * 2018-12-28 2021-08-10 上海铿诚智能科技有限公司 KNN-based elastic optical network modulation format identification method
CN109617843A (en) * 2018-12-28 2019-04-12 上海铿诚智能科技有限公司 A kind of elastic optical network modulation format recognition methods based on KNN
CN109886189A (en) * 2019-02-20 2019-06-14 Oppo广东移动通信有限公司 Fingerprint template acquisition methods and relevant apparatus
CN109886189B (en) * 2019-02-20 2021-06-04 Oppo广东移动通信有限公司 Fingerprint template acquisition method and related device
WO2021249520A1 (en) * 2020-06-12 2021-12-16 华为技术有限公司 Image processing method and apparatus, and storage medium
CN111724376A (en) * 2020-06-22 2020-09-29 陕西科技大学 Paper defect detection method based on texture feature analysis
CN111724376B (en) * 2020-06-22 2024-02-13 陕西科技大学 Paper disease detection method based on texture feature analysis
CN118136050A (en) * 2024-02-26 2024-06-04 安徽中科昊音智能科技有限公司 Equipment fault voice recognition method, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140723