CN101303728A - Method for identifying fingerprint facing image quality - Google Patents


Info

Publication number
CN101303728A
Authority
CN
China
Prior art keywords
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008101381170A
Other languages
Chinese (zh)
Other versions
CN100592323C (en)
Inventor
尹义龙
杨公平
骆功庆
张宇
詹小四
任春晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN200810138117A priority Critical patent/CN100592323C/en
Publication of CN101303728A publication Critical patent/CN101303728A/en
Application granted granted Critical
Publication of CN100592323C publication Critical patent/CN100592323C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an image-quality-oriented fingerprint identification method, which remedies the poor compatibility of existing fingerprint identification methods with ideal and non-ideal fingerprint images alike. The method comprises: (1) reading the captured fingerprint image g(x, y), where g(x, y) is the gray value of the pixel (x, y); (2) extracting quality features from the fingerprint image, namely the gradient consistency $Q_T$, the spectral feature $Q_F$, and the gray-level standard deviation $Q_S$; (3) learning and classifying the fingerprint image quality with an SVM classifier, assigning each image to one of two defined quality classes; (4) completing identification by applying a minutiae-based matching algorithm to better-quality fingerprints and a texture-based matching algorithm to poorer-quality ones.

Description

Fingerprint identification method facing image quality
Technical Field
The invention relates to a fingerprint identification method, and in particular to an image-quality-oriented fingerprint identification method.
Background
In automatic fingerprint identification, image quality is an important factor affecting recognition performance. Existing fingerprint identification methods usually extract the minutiae of the fingerprint image for matching, but their performance drops markedly on poor-quality images. Other recognition methods, such as ridge-based and texture-based recognition, do handle poor-quality images to some extent, but on good-quality images they yield little performance gain while consuming considerably more resources and time. A fingerprint identification method that distinguishes image quality is therefore urgently needed, one that both guarantees a certain recognition accuracy and saves resources to a greater extent.
Disclosure of Invention
The invention aims to provide an image-quality-oriented fingerprint identification method that overcomes the poor compatibility of prior fingerprint identification methods with ideal and non-ideal fingerprint images. The method is based on image-quality assessment: it divides fingerprint images into two classes, better quality and poorer quality, and then applies a different identification algorithm to each class.
To achieve this purpose, the invention adopts the following technical scheme:
(1) reading a collected fingerprint image g (x, y), wherein g (x, y) is the gray value of a pixel point (x, y);
(2) extracting quality features from the fingerprint image, namely the gradient consistency $Q_T$, the spectral feature $Q_F$, and the gray-level standard deviation $Q_S$, three features in total;
(3) learning and classifying the fingerprint image quality with an SVM (support vector machine) classifier, and assigning each image to one of two defined quality classes;
(4) completing identification by applying a minutiae-based matching algorithm to the better-quality fingerprints and a texture-based matching algorithm to the poorer-quality ones.
In step (2), the three features reflect quality from different aspects; they are computed as follows.
$$Q_T = \frac{1}{r}\sum_{i=1}^{r}\tilde{k}_i,$$

where $r$ is the total number of foreground blocks and $\tilde{k}_i$ is the gradient consistency of the $i$-th block of the block image, computed by the formula

$$\tilde{k} = \frac{(j_{11}-j_{22})^2 + 4j_{12}^2}{(j_{11}+j_{22})^2},$$

where $j_{11}$, $j_{12}$, $j_{21}$, $j_{22}$ are the elements of the gradient-vector covariance matrix $J$. If the block size is $b \times b$, the covariance matrix of the gradient vectors of all $b^2$ points in the block is

$$J = \frac{1}{b^2}\sum_{s\in B} g_s g_s^{T} \equiv \begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix},$$

where $b^2$ is the size of the block, $s$ is a point in the block, $B$ is the set of all pixels in the block, $g_s$ is the gradient vector at point $s$, and $g_s^{T}$ is its transpose. The final quality feature of the whole image is the mean of the gradient consistency over all blocks.
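As an illustration, the gradient-consistency feature above can be sketched in Python with NumPy. This is a minimal sketch rather than the patented implementation: the function names, the use of `np.gradient` for the per-pixel gradient vectors, and the treatment of flat blocks (returning 0) are my assumptions, and foreground segmentation is omitted, so every block is treated as foreground.

```python
import numpy as np

def gradient_consistency(block):
    """k~ = ((j11 - j22)^2 + 4*j12^2) / (j11 + j22)^2 for one block,
    where the j's are entries of the gradient-vector covariance matrix J."""
    gy, gx = np.gradient(block.astype(float))   # per-pixel gradient vectors
    j11 = np.mean(gx * gx)                      # J = (1/b^2) * sum g_s g_s^T
    j22 = np.mean(gy * gy)
    j12 = np.mean(gx * gy)
    denom = (j11 + j22) ** 2
    if denom == 0.0:                            # flat block: no orientation info
        return 0.0
    return ((j11 - j22) ** 2 + 4.0 * j12 ** 2) / denom

def quality_QT(image, b=8):
    """Q_T: mean gradient consistency over all b x b blocks
    (foreground segmentation omitted in this sketch)."""
    h, w = image.shape
    ks = [gradient_consistency(image[i:i + b, j:j + b])
          for i in range(0, h - b + 1, b)
          for j in range(0, w - b + 1, b)]
    return float(np.mean(ks))
```

A block of parallel ridges has one dominant gradient direction, so its consistency approaches 1; noisy or smudged blocks score lower, and the value always lies in [0, 1] because $J$ is positive semidefinite.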
The calculation formula of $Q_F$ is

$$Q_F = \frac{1}{9}\sum_{r=r_0-4}^{r_0+4} Q(r), \qquad Q(r) = \frac{1}{\#C_r}\sum_{(u,v)\in C_r} \left|G(u,v)\right|,$$

where $Q(r)$ is the mean energy intensity on the circle $C_r$ of radius $r$, $\#C_r$ is the number of frequency-domain points on that circle, the nine circles with $r_0-4 \le r \le r_0+4$ form a ring band, and

$$\left|G(u,v)\right| = \sqrt{\left(\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\cos\frac{-2\pi(xu+yv)}{N}\right)^2 + \left(\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\sin\frac{-2\pi(xu+yv)}{N}\right)^2}.$$

$|G(u,v)|$ reflects the energy intensity at the point $(u,v)$ of the frequency-domain image after the Fourier transform, and the values $|G(u,v)|$ constitute the intensity spectrum of the frequency domain. Let $g(x,y)$ denote the gray value of the pixel with coordinates $(x,y)$ in a digital image of size $N \times N$; the discrete Fourier transform (DFT) $G(u,v)$ of $g(x,y)$ is defined as

$$G(u,v) = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\, e^{-2\pi j (xu+yv)/N} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\left(\cos\frac{-2\pi(xu+yv)}{N} + j\sin\frac{-2\pi(xu+yv)}{N}\right).$$

$Q_F$ computes the energy of the bright ring band in the spectrum image and represents quality by the magnitude of that energy.
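The ring-energy computation can be sketched in Python/NumPy as follows. The patent does not spell out how the ring centre $r_0$ is chosen (it corresponds to the dominant ridge frequency), so here $r_0$ is simply a parameter; the FFT shift and the rounding of radii to integers are my assumptions.

```python
import numpy as np

def quality_QF(image, r0, band=4):
    """Q_F = (1/9) * sum_{r = r0-4 .. r0+4} Q(r), where
    Q(r) is the mean |G(u, v)| over the circle C_r of radius r."""
    G = np.fft.fftshift(np.fft.fft2(image))       # put the DC term at the centre
    mag = np.abs(G)                               # intensity spectrum |G(u, v)|
    cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
    yy, xx = np.indices(mag.shape)
    rad = np.rint(np.hypot(yy - cy, xx - cx)).astype(int)
    q = 0.0
    for r in range(r0 - band, r0 + band + 1):
        on_circle = mag[rad == r]                 # points of C_r
        if on_circle.size:
            q += on_circle.mean()                 # Q(r)
    return q / (2 * band + 1)
```

For a clear fingerprint the quasi-periodic ridge pattern concentrates spectral energy in a bright ring, so $Q_F$ is large when $r_0$ matches the ridge frequency and small otherwise.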
The calculation formula of $Q_S$ is

$$Q_S = \frac{1}{N}\sum_{k=1}^{N} S_k, \qquad S_k = \sqrt{\frac{1}{w^2}\sum_{x=1}^{w}\sum_{y=1}^{w}\left(g(x,y)-\bar{g}(k)\right)^2},$$

where $S_k$ is the gray-level standard deviation of the $k$-th block of the block image, $N$ is the number of blocks, $g(x,y)$ is the gray value of the pixel $(x,y)$, $\bar{g}(k)$ is the mean gray value of the $k$-th block, and $w$ is the side length of a block. $Q_S$ represents the quality of the whole image by the mean of the standard deviations of all blocks.
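The block standard-deviation feature is straightforward; a NumPy sketch follows (the function name and the omission of foreground segmentation are my assumptions). Note that NumPy's population standard deviation (`ddof=0`) matches the $1/w^2$ normalisation in the formula.

```python
import numpy as np

def quality_QS(image, w=8):
    """Q_S: mean of the per-block gray-level standard deviations S_k,
    computed over non-overlapping w x w blocks (the embodiment uses w = 8)."""
    h, wid = image.shape
    stds = [image[i:i + w, j:j + w].std()   # S_k: population std of block k
            for i in range(0, h - w + 1, w)
            for j in range(0, wid - w + 1, w)]
    return float(np.mean(stds))
```

A well-contrasted ridge/valley pattern yields a high per-block standard deviation, while washed-out or saturated regions pull $Q_S$ down.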
In the step (4), both the minutiae-based matching algorithm and the texture-based matching algorithm are classic algorithms.
The block size of the block image in this processing method is 8 × 8.
The beneficial effects of the invention are as follows: the processing method combines features from several aspects of the fingerprint image, including the spatial domain and the frequency domain, so that fingerprint image quality can be distinguished reliably, and the minutiae-based and texture-based matching algorithms are each applied to the images they suit best.
Drawings
FIG. 1 is a flow chart of the identification method of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
As shown in FIG. 1, the image-quality-oriented fingerprint identification method includes the following steps:
(1) reading a collected fingerprint image g (x, y), wherein g (x, y) is the gray value of a pixel point (x, y);
(2) extracting quality features from the fingerprint image, namely the gradient consistency $Q_T$, the spectral feature $Q_F$, and the gray-level standard deviation $Q_S$, three features in total;
(3) forming a three-dimensional feature vector from the three feature indices extracted in step (2) and using it as the input vector of a support vector machine; learning and classifying the fingerprint image quality with an SVM classifier, and assigning each image to one of two defined quality classes;
(4) completing identification by applying a minutiae-based matching algorithm to the better-quality fingerprints and a texture-based matching algorithm to the poorer-quality ones.
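The classification and dispatch in steps (3) and (4) can be sketched as follows. The patent specifies only "an SVM classifier"; to keep this sketch self-contained, a minimal linear SVM is trained by sub-gradient descent on the hinge loss instead of calling a library (in practice a library SVM such as scikit-learn's `SVC` would be the natural choice). The matcher names returned by `route_matcher` are placeholders, since the patent references the minutiae and texture matchers as classical algorithms without defining them.

```python
import numpy as np

def train_linear_svm(X, y, epochs=300, lr=0.1, lam=0.01):
    """Fit w, b for sign(w.x + b) by sub-gradient descent on the hinge loss.
    X: (n, 3) feature vectors (Q_T, Q_F, Q_S); y: labels in {-1, +1}."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1.0:       # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                             # inside the margin: shrink only
                w -= lr * lam * w
    return w, b

def route_matcher(w, b, q):
    """Step (4): pick the matching algorithm from the predicted quality class."""
    good = float(np.dot(w, q) + b) >= 0.0     # +1 = better quality
    return "minutiae-based" if good else "texture-based"
```

Training on labelled $(Q_T, Q_F, Q_S)$ vectors of known good and poor images then routes each probe fingerprint to the matcher suited to its quality class.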
In step (2), the three features reflect quality from different aspects; they are computed as follows.
$$Q_T = \frac{1}{r}\sum_{i=1}^{r}\tilde{k}_i,$$

where $r$ is the total number of foreground blocks (here $r = 40$) and $\tilde{k}_i$ is the gradient consistency of the $i$-th block of the block image, computed by the formula

$$\tilde{k} = \frac{(j_{11}-j_{22})^2 + 4j_{12}^2}{(j_{11}+j_{22})^2},$$

where $j_{11}$, $j_{12}$, $j_{21}$, $j_{22}$ are the elements of the gradient-vector covariance matrix $J$. The covariance matrix of the gradient vectors of all points in a block is

$$J = \frac{1}{b^2}\sum_{s\in B} g_s g_s^{T} \equiv \begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix},$$

where $b^2 = 64$ is the size of the block image (i.e. the blocks are $8 \times 8$), $s$ is a point in the block, $B$ is the set of all pixels in the block, $g_s$ is the gradient vector at point $s$, and $g_s^{T}$ is its transpose. The final quality feature of the whole image is the mean of the gradient consistency over all blocks.
The calculation formula of $Q_F$ is

$$Q_F = \frac{1}{9}\sum_{r=r_0-4}^{r_0+4} Q(r), \qquad Q(r) = \frac{1}{\#C_r}\sum_{(u,v)\in C_r} \left|G(u,v)\right|,$$

where $Q(r)$ is the mean energy intensity on the circle $C_r$ of radius $r$, $\#C_r$ is the number of frequency-domain points on that circle, the nine circles with $r_0-4 \le r \le r_0+4$ form a ring band, and

$$\left|G(u,v)\right| = \sqrt{\left(\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\cos\frac{-2\pi(xu+yv)}{N}\right)^2 + \left(\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\sin\frac{-2\pi(xu+yv)}{N}\right)^2}.$$

$|G(u,v)|$ reflects the energy intensity at the point $(u,v)$ of the frequency-domain image after the Fourier transform, and the values $|G(u,v)|$ constitute the intensity spectrum of the frequency domain. Let $g(x,y)$ denote the gray value of the pixel with coordinates $(x,y)$ in a digital image of size $N \times N$; the discrete Fourier transform (DFT) $G(u,v)$ of $g(x,y)$ is defined as

$$G(u,v) = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\, e^{-2\pi j (xu+yv)/N} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\left(\cos\frac{-2\pi(xu+yv)}{N} + j\sin\frac{-2\pi(xu+yv)}{N}\right).$$

$Q_F$ computes the energy of the bright ring band in the spectrum image and represents quality by the magnitude of that energy.
The calculation formula of $Q_S$ is

$$Q_S = \frac{1}{N}\sum_{k=1}^{N} S_k, \qquad S_k = \sqrt{\frac{1}{w^2}\sum_{x=1}^{w}\sum_{y=1}^{w}\left(g(x,y)-\bar{g}(k)\right)^2},$$

where $S_k$ is the gray-level standard deviation of the $k$-th block of the block image, $N$ is the number of blocks, $g(x,y)$ is the gray value of the pixel $(x,y)$, $\bar{g}(k)$ is the mean gray value of the $k$-th block, and $w$ is the side length of a block (here $w = 8$). $Q_S$ represents the quality of the whole image by the mean of the standard deviations of all blocks.
In the step (4), both the minutiae-based matching algorithm and the texture-based matching algorithm are classic algorithms.

Claims (3)

1. An image-quality-oriented fingerprint identification method, characterized in that the method comprises the following steps:
(1) reading an acquired fingerprint image g (x, y), wherein g (x, y) is the gray value of a pixel point (x, y);
(2) extracting quality features from the fingerprint image, namely the gradient consistency $Q_T$, the spectral feature $Q_F$, and the gray-level standard deviation $Q_S$, three features in total;
(3) learning and classifying the fingerprint image quality with an SVM (support vector machine) classifier, and assigning the image to one of two quality classes, better quality or poorer quality;
(4) completing identification by applying a minutiae-based matching algorithm to the better-quality fingerprints and a texture-based matching algorithm to the poorer-quality ones.
2. The image-quality-oriented fingerprint identification method according to claim 1, characterized in that in step (2) the three features reflect quality from different aspects and are computed as follows:
$$Q_T = \frac{1}{r}\sum_{i=1}^{r}\tilde{k}_i,$$

where $r$ is the total number of foreground blocks and $\tilde{k}_i$ is the gradient consistency of the $i$-th block of the block image, computed by the formula

$$\tilde{k} = \frac{(j_{11}-j_{22})^2 + 4j_{12}^2}{(j_{11}+j_{22})^2},$$

where $j_{11}$, $j_{12}$, $j_{21}$, $j_{22}$ are the elements of the gradient-vector covariance matrix $J$;

if the block size is $b \times b$, the covariance matrix of the gradient vectors of all $b^2$ points in the block is

$$J = \frac{1}{b^2}\sum_{s\in B} g_s g_s^{T} \equiv \begin{bmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{bmatrix},$$

where $b^2$ is the size of the block, $s$ is a point in the block, $B$ is the set of all pixels in the block, $g_s$ is the gradient vector at point $s$, and $g_s^{T}$ is its transpose; the final quality feature of the whole image is the mean of the gradient consistency over all blocks;
the calculation formula of $Q_F$ is

$$Q_F = \frac{1}{9}\sum_{r=r_0-4}^{r_0+4} Q(r), \qquad Q(r) = \frac{1}{\#C_r}\sum_{(u,v)\in C_r} \left|G(u,v)\right|,$$

where $Q(r)$ is the mean energy intensity on the circle $C_r$ of radius $r$, $\#C_r$ is the number of frequency-domain points on that circle, the nine circles with $r_0-4 \le r \le r_0+4$ form a ring band, and

$$\left|G(u,v)\right| = \sqrt{\left(\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\cos\frac{-2\pi(xu+yv)}{N}\right)^2 + \left(\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\sin\frac{-2\pi(xu+yv)}{N}\right)^2};$$

$|G(u,v)|$ reflects the energy intensity at the point $(u,v)$ of the frequency-domain image after the Fourier transform, and the values $|G(u,v)|$ constitute the intensity spectrum of the frequency domain; let $g(x,y)$ denote the gray value of the pixel with coordinates $(x,y)$ in a digital image of size $N \times N$; the discrete Fourier transform $G(u,v)$ of $g(x,y)$ is defined as

$$G(u,v) = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\, e^{-2\pi j (xu+yv)/N} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} g(x,y)\left(\cos\frac{-2\pi(xu+yv)}{N} + j\sin\frac{-2\pi(xu+yv)}{N}\right);$$

$Q_F$ computes the energy of the bright ring band in the spectrum image and represents quality by the magnitude of that energy;
the calculation formula of $Q_S$ is

$$Q_S = \frac{1}{N}\sum_{k=1}^{N} S_k, \qquad S_k = \sqrt{\frac{1}{w^2}\sum_{x=1}^{w}\sum_{y=1}^{w}\left(g(x,y)-\bar{g}(k)\right)^2},$$

where $S_k$ is the gray-level standard deviation of the $k$-th block of the block image, $N$ is the number of blocks, $g(x,y)$ is the gray value of the pixel $(x,y)$, $\bar{g}(k)$ is the mean gray value of the $k$-th block, and $w$ is the side length of a block; $Q_S$ represents the quality of the whole image by the mean of the standard deviations of all blocks.
3. The image quality-oriented fingerprint recognition method according to claim 1, wherein in the step (4), the minutiae-based matching algorithm and the texture-based matching algorithm are both classical algorithms.
CN200810138117A 2008-07-01 2008-07-01 Method for identifying fingerprint facing image quality Expired - Fee Related CN100592323C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810138117A CN100592323C (en) 2008-07-01 2008-07-01 Method for identifying fingerprint facing image quality

Publications (2)

Publication Number Publication Date
CN101303728A true CN101303728A (en) 2008-11-12
CN100592323C CN100592323C (en) 2010-02-24

Family

ID=40113627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810138117A Expired - Fee Related CN100592323C (en) 2008-07-01 2008-07-01 Method for identifying fingerprint facing image quality

Country Status (1)

Country Link
CN (1) CN100592323C (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7136515B2 (en) * 2001-09-13 2006-11-14 Intel Corporation Method and apparatus for providing a binary fingerprint image
CN1327387C (en) * 2004-07-13 2007-07-18 清华大学 Method for identifying multi-characteristic of fingerprint
CN100347719C (en) * 2004-07-15 2007-11-07 清华大学 Fingerprint identification method based on density chart model
CN100370472C (en) * 2006-09-18 2008-02-20 山东大学 Irrelevant technique method of image pickup device in fingerprint recognition algorithm
CN101178773B (en) * 2007-12-13 2010-08-11 北京中星微电子有限公司 Image recognition system and method based on characteristic extracting and categorizer

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567993B (en) * 2011-12-15 2014-06-11 中国科学院自动化研究所 Fingerprint image quality evaluation method based on main component analysis
CN102567993A (en) * 2011-12-15 2012-07-11 中国科学院自动化研究所 Fingerprint image quality evaluation method based on main component analysis
CN104268529A (en) * 2014-09-28 2015-01-07 深圳市汇顶科技股份有限公司 Judgment method and device for quality of fingerprint images
CN104268587A (en) * 2014-10-22 2015-01-07 武汉大学 False fingerprint detection method based on finger wave conversion and SVM
CN104268587B (en) * 2014-10-22 2017-05-24 武汉大学 False fingerprint detection method based on finger wave conversion and SVM
US10268862B2 (en) 2015-04-16 2019-04-23 Huawei Technologies Co., Ltd. Fingerprint collection method, fingerprint collector, and terminal
CN106258009A (en) * 2015-04-16 2016-12-28 华为技术有限公司 A kind of gather the method for fingerprint, fingerprint capturer and terminal
CN106258009B (en) * 2015-04-16 2019-06-21 华为技术有限公司 A kind of method, fingerprint capturer and terminal acquiring fingerprint
CN106709396B (en) * 2015-07-27 2020-04-24 联想(北京)有限公司 Fingerprint image registration method and fingerprint image registration device
CN106709396A (en) * 2015-07-27 2017-05-24 联想(北京)有限公司 Fingerprint image registration method and registration position
CN106682567A (en) * 2015-11-11 2017-05-17 方正国际软件(北京)有限公司 Acquisition processing method of fingerprint images and device
CN105631863A (en) * 2015-12-23 2016-06-01 苏州汇莱斯信息科技有限公司 Quality assessment method for fingerprint image
CN107016324A (en) * 2016-01-28 2017-08-04 厦门中控生物识别信息技术有限公司 A kind of fingerprint image processing method and fingerprint detection equipment
CN107016324B (en) * 2016-01-28 2020-03-20 厦门中控智慧信息技术有限公司 Fingerprint image processing method and fingerprint detection equipment
CN105809117B (en) * 2016-03-01 2019-05-21 Oppo广东移动通信有限公司 A kind of information cuing method and user terminal
CN105809117A (en) * 2016-03-01 2016-07-27 广东欧珀移动通信有限公司 Information prompt method and user terminal
CN107992800A (en) * 2017-11-10 2018-05-04 杭州晟元数据安全技术股份有限公司 A kind of fingerprint image quality determination methods based on SVM and random forest
CN108681714A (en) * 2018-05-18 2018-10-19 济南浪潮高新科技投资发展有限公司 A kind of finger vein recognition system and method based on individualized learning
CN110059649A (en) * 2019-04-24 2019-07-26 济南浪潮高新科技投资发展有限公司 Level complementation convolutional neural networks model, robust fingerprint recognition methods and system


Similar Documents

Publication Publication Date Title
CN100592323C (en) Method for identifying fingerprint facing image quality
CN103034838B (en) A kind of special vehicle instrument type identification based on characteristics of image and scaling method
CN107610114B (en) optical satellite remote sensing image cloud and snow fog detection method based on support vector machine
CN103116763B (en) A kind of living body faces detection method based on hsv color Spatial Statistical Character
CN101329726B (en) Method for reinforcing fingerprint image based on one-dimensional filtering
CN102176208B (en) Robust video fingerprint method based on three-dimensional space-time characteristics
CN103456013B (en) A kind of method representing similarity between super-pixel and tolerance super-pixel
CN104103082A (en) Image saliency detection method based on region description and priori knowledge
CN103839042B (en) Face identification method and face identification system
CN105005565B (en) Live soles spoor decorative pattern image search method
CN102103690A (en) Method for automatically portioning hair area
CN101526994B (en) Fingerprint image segmentation method irrelevant to collecting device
CN103955496B (en) A kind of quick live tire trace decorative pattern searching algorithm
CN104680541A (en) Remote sensing image quality evaluation method based on phase congruency
CN104834938A (en) Hyper-spectral information extraction method based on main component and cluster analysis
CN104680158A (en) Face recognition method based on multi-scale block partial multi-valued mode
Wang et al. Feature-based analysis of cell nuclei structure for classification of histopathological images
CN104331877A (en) Color image edge detection method based on fusion color gradient
CN118197610B (en) Auxiliary identification method and system for herpetic neuralgia based on structural magnetic resonance
CN101840513A (en) Method for extracting image shape characteristics
CN105513060A (en) Visual perception enlightening high-resolution remote-sensing image segmentation method
CN113361407B (en) PCANet-based spatial spectrum feature combined hyperspectral sea ice image classification method
CN103473546A (en) Fingerprint direction field obtaining method based on structure tensor
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN103268494A (en) Parasite egg identifying method based on sparse representation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100224

Termination date: 20130701