CN105138974B - A finger multi-modal feature fusion method based on Gabor coding - Google Patents
- Publication number: CN105138974B
- Application number: CN201510496609.7A
- Authority
- CN
- China
- Prior art keywords
- finger
- gray
- gabor
- images
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
- G06V40/1359—Extracting features related to ridge properties; Determining the fingerprint type, e.g. whorl or loop
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
Abstract
A finger multi-modal feature fusion method based on Gabor coding. Gabor filters are applied to the ROI images of the fingerprint, finger vein and finger-knuckle print to obtain Gabor directional feature images; the angle values of these images are arranged from small to large and encoded to form Gabor directional feature coded images; the coded images are divided into blocks; the pixels in each block image are treated as feature points and their gray features extracted, forming gray feature vectors; the gray feature vectors are accumulated into a finger tri-modal gray feature histogram; whether two finger ROI images match is judged by computing the intersection coefficient of the tri-modal gray feature histograms of the two images to be matched. The method effectively handles the variability of finger posture during image acquisition, and the finger multi-modal recognition is fast with a high recognition rate.
Description
Technical Field
The invention belongs to the technical field of image detection, and particularly relates to a finger multi-modal feature fusion method based on Gabor coding.
Background
At present, single-modal biometric recognition has inherent limitations and cannot meet the demand for high-precision identity recognition; to fuse the three finger modalities effectively, robust feature analysis has become a key research problem. However, most methods for extracting robust finger features depend on the position and direction information of feature points and lack rotation invariance, so they cannot effectively cope with the finger posture changes that readily occur during image acquisition.
Disclosure of Invention
In order to solve the above problems, the present invention aims to provide a finger multimodal feature fusion method based on Gabor coding.
In order to achieve the above purpose, the finger multi-modal feature fusion method based on Gabor coding provided by the invention comprises the following steps in sequence:
1) gabor filtering is carried out on ROI images of fingerprints, finger veins and finger joint prints of different postures by utilizing Gabor filters with different scale parameters, and three-mode Gabor direction characteristic images of the finger in 8 directions, namely 0 degree, 22.5 degrees, 45 degrees, 67.5 degrees, 90 degrees, 112.5 degrees, 135 degrees and 157.5 degrees are obtained respectively;
2) arranging the angle values of the finger three-mode Gabor direction characteristic images in the 8 directions from small to large respectively, and coding the angle values to form finger three-mode Gabor direction characteristic coded images in the 8 directions;
3) blocking the finger three-mode Gabor direction feature coding image in the 8 directions to form a blocked image;
4) all pixel points in the finger tri-modal Gabor directional feature coded block images in the 8 directions are regarded as feature points and their gray features are extracted, forming gray feature vectors, as follows:
the first step: gray grouping: first, the gray values of the pixel points in each block image are sorted from small to large to form a pixel sequence; the sequence is then divided into k gray groups according to the total number of pixel points, forming k gray-group images; the boundary point of each gray group is then determined by rounding, and its gray value is obtained;
the second step: calculating the gray feature vector of each pixel point: taking each pixel point in each gray-group image as the center, the gray values of its symmetric neighbour points are compared; if the gray value of one point of a symmetric pair is greater than that of its symmetric neighbour, the corresponding bit is 1, otherwise 0, thereby forming a 4-bit binary-coded gray feature vector, which is then converted into a 16-bit binary-coded gray feature vector;
5) superposing the gray characteristic vectors of each pixel point in each gray grouping image to form a gray characteristic histogram of each gray grouping image, connecting the gray characteristic histograms of each gray grouping image in series to form a gray characteristic histogram of a blocking image, fusing the gray characteristic histograms of all blocking images of three single modes of the finger in series to form three single-mode gray characteristic histograms, and finally connecting the gray characteristic histograms of the three single-mode images in series to fuse the three single-mode gray characteristic histograms to form the finger three-mode gray characteristic histogram;
6) judging whether the two finger ROI images match by calculating the intersection coefficient of the tri-modal gray feature histograms of the two finger ROI images to be matched.
In step 1), the expression of the Gabor filter is:
wherein σ represents the scale of the Gabor filter, with σ = 4, 5, 6, and θ_k represents the angle value of the k-th direction.
In step 2), the method for forming the finger three-modality Gabor directional feature coded image in 8 directions by arranging the angle values of the finger three-modality Gabor directional feature image in 8 directions from small to large and coding the angle values comprises: firstly, the angle values of the finger three-mode Gabor direction characteristic images in the 8 directions are respectively arranged from small to large, then the gray values of pixel points at the same positions in the images are respectively compared, the direction of the finger three-mode Gabor direction characteristic image corresponding to the maximum gray value is respectively used as the direction characteristic of the pixel point, and the encoding is carried out according to the following encoding method: 0 ° code is 0, 22.5 ° code is 1, 45 ° code is 2, 67.5 ° code is 3, 90 ° code is 4, 112.5 ° code is 5, 135 ° code is 6, 157.5 ° code is 7; thereby forming 8-direction finger three-mode Gabor direction characteristic coding images.
In step 4), the formula for obtaining the gray value of the boundary point is as follows:
wherein the term on the left represents the boundary point of each group, t_i represents the boundary value of the i-th gray group, and I_min and I_max represent the minimum and maximum gray values of the image pixel points, respectively.
In step 4), the formula for converting the 4-bit binary-coded gray feature vector into the 16-bit binary-coded gray feature vector is as follows:
wherein i represents the i-th pixel point, and m represents the index of the symmetric pair of nearest-neighbour points of the pixel point.
In step 6), the method for judging whether two finger ROI images match by calculating the intersection coefficient of the tri-modal gray feature histograms of the two finger ROI images to be matched is as follows: first, the intersection coefficient of the tri-modal gray feature histograms of the two finger ROI images to be matched is calculated using the intersection-coefficient expression below; if the calculated intersection coefficient is greater than a similarity decision threshold T, the two finger ROI images are similar, i.e. they match; if the intersection coefficient is less than or equal to T, the two finger ROI images are judged not to match. The similarity decision threshold T is the threshold point at which the false rejection rate of the finger ROI image matching results is 0 and the false acceptance rate is lowest.
The expression of the intersection coefficient is:
in the formula, m_1 and m_2 respectively represent the two finger ROI images to be matched, H_m1(i) and H_m2(i) respectively represent their tri-modal gray feature histograms, and L represents the dimension of the finger tri-modal histogram.
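The choice of T described above (the threshold point where the false rejection rate is 0 and the false acceptance rate is lowest) can be sketched from lists of genuine and impostor intersection coefficients. The helper below is hypothetical, not from the patent text:

```python
def choose_threshold(genuine, impostor):
    """Pick the decision threshold T: the largest T with a false rejection
    rate of 0 (every genuine score still exceeds T), which therefore gives
    the lowest false acceptance rate achievable at FRR = 0.
    A match is declared when score > T. Hypothetical helper."""
    T = min(genuine) - 1e-12  # FRR = 0: every genuine pair still matches
    far = sum(s > T for s in impostor) / len(impostor)
    return T, far

# toy scores: genuine pairs score high, impostor pairs mostly low
T, far = choose_threshold([0.9, 0.85, 0.95], [0.3, 0.5, 0.88])
```

With these toy scores T sits just below 0.85 and exactly one impostor score (0.88) is still accepted.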
The finger multi-modal feature fusion method based on Gabor coding effectively solves the problem that the gesture of the finger is easy to change in the finger image acquisition process, and the finger multi-modal recognition has high operation speed and high recognition rate.
Drawings
FIG. 1 is a characteristic diagram of three-mode Gabor directions of a finger in 8 directions; wherein (a) is a fingerprint; (b) is a finger vein; (c) is a knuckle print;
FIG. 2 is a coded diagram of three-mode Gabor directional features of fingers in 8 directions; wherein (a) is a fingerprint; (b) is a finger vein; (c) is a knuckle print;
FIG. 3 is a block image of a finger tri-modal Gabor directional feature code pattern in 8 directions; wherein (a) is a fingerprint; (b) is a finger vein; (c) is a knuckle print;
fig. 4 is a schematic diagram of 8 nearest neighbors of a certain pixel point.
FIG. 5 is a gray level feature histogram of a finger tri-modal Gabor directional feature coding block image in 8 directions; wherein (a) is a fingerprint, (b) is a finger vein, and (c) is a knuckle print;
fig. 6 is a comparison of the recognition performance of different gray scale groupings of 8 by 8 segmented images;
fig. 7 is a comparison of the recognition performance of different block images.
Fig. 8 is a finger vein ROI image of different poses.
Fig. 9 compares the recognition performances of the three feature extraction methods.
Detailed Description
The finger multi-modal feature fusion method based on Gabor coding provided by the invention is explained in detail below with reference to the accompanying drawings and specific embodiments.
The finger multi-modal feature fusion method based on Gabor coding provided by the invention comprises the following steps in sequence:
1) Gabor filtering is carried out on the ROI images of fingerprints, finger veins and finger-knuckle prints in different postures by using Gabor filters with different scale parameters, and finger tri-modal Gabor directional feature images in 8 directions are obtained;
Because the fingerprint, finger vein and finger-knuckle print images have ridge, vessel and crease-line structures respectively, and their texture information is rich, this step extracts the directional features of the finger tri-modal texture by Gabor filtering. Because the textures of the three finger modalities differ, Gabor filters with different scale parameters (σ = 4, 5, 6) are used to filter the ROI (region of interest) images of fingerprints, finger veins and finger-knuckle prints in different postures. The expression of the Gabor filter is shown in formula (1), and finger tri-modal Gabor directional feature images in the 8 directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135° and 157.5°) are obtained, as shown in FIG. 1.
where σ represents the scale of the Gabor filter and θ_k represents the angle value of the k-th direction.
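As an illustration of step 1), the filter bank can be sketched in Python. The patent's exact Gabor expression (formula (1)) is not reproduced above, so this sketch assumes a standard even-symmetric Gabor form with the wavelength tied to the scale; both are assumptions:

```python
import numpy as np

def gabor_kernel(sigma, theta, size=31, wavelength=None):
    """Even-symmetric Gabor kernel; wavelength defaults to 2*sigma
    (assumption, not the patent's formula)."""
    if wavelength is None:
        wavelength = 2.0 * sigma
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates to orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

# 8 orientations (0 .. 157.5 deg) x 3 scales (sigma = 4, 5, 6), as in step 1)
thetas = [np.deg2rad(22.5 * k) for k in range(8)]
bank = [gabor_kernel(s, t) for s in (4, 5, 6) for t in thetas]
```

Convolving an ROI image with each of the 24 kernels yields the directional response maps used in the following steps.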
2) Arranging the angle values of the finger tri-modal Gabor directional feature images in the 8 directions from small to large, and coding them to form the finger tri-modal Gabor directional feature coded images in the 8 directions;
First, the angle values of the finger tri-modal Gabor directional feature images in the 8 directions are arranged from small to large; then the gray values of the pixel points at the same position across the images are compared, and the direction of the Gabor directional feature image with the maximum gray value is taken as the directional feature of that pixel point, coded as follows: 0° is coded 0, 22.5° is coded 1, 45° is coded 2, 67.5° is coded 3, 90° is coded 4, 112.5° is coded 5, 135° is coded 6, and 157.5° is coded 7. For example, if, among the 8 single-modality directional feature images, the gray value of the pixel at a given position is largest on the 45° image, that pixel is coded as 2 in the single-modality Gabor directional feature coded image. The finger tri-modal Gabor directional feature coded images in the 8 directions are thus formed, as shown in FIG. 2.
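The coding rule just described amounts to a per-pixel argmax over the 8 directional response maps. A minimal sketch (the array layout is an assumption):

```python
import numpy as np

def direction_code_image(response_stack):
    """response_stack: array of shape (8, H, W) holding the Gabor response
    maps ordered by angle 0, 22.5, ..., 157.5 degrees. Returns the per-pixel
    direction code 0..7 (index of the strongest response)."""
    return np.argmax(response_stack, axis=0).astype(np.uint8)

# toy example: 8 constant response maps, with the 45-degree map (index 2)
# strongest everywhere, so every pixel should be coded 2
responses = np.zeros((8, 4, 4))
responses[2] = 1.0
codes = direction_code_image(responses)
```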
3) Dividing the finger tri-modal Gabor directional feature coded images in the 8 directions into blocks to form block images;
Since the MRRID (multi-support region rotation and intensity monotonic invariant descriptor) is only suited to describing local images, this step divides the finger tri-modal Gabor directional feature coded images in the 8 directions into blocks; in the present invention they are divided into 8 × 8 blocks, as shown in FIG. 3.
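The 8 × 8 blocking of step 3) can be sketched as follows (the image size and the handling of non-divisible dimensions are assumptions):

```python
import numpy as np

def block_image(img, blocks=8):
    """Split a coded image into blocks x blocks non-overlapping tiles.
    Trailing rows/columns are cropped if the size is not divisible."""
    h, w = img.shape
    bh, bw = h // blocks, w // blocks
    img = img[:bh * blocks, :bw * blocks]
    return (img.reshape(blocks, bh, blocks, bw)
               .swapaxes(1, 2)
               .reshape(blocks * blocks, bh, bw))

tiles = block_image(np.arange(64 * 64).reshape(64, 64))
```

The tiles come out in row-major order: tile 0 is the top-left block, tile 1 the block to its right, and so on.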
4) All pixel points in the finger tri-modal Gabor directional feature coded block images in the 8 directions are regarded as feature points and their gray features are extracted, forming gray feature vectors;
Because the block images of the finger tri-modal Gabor directional feature coded images in the 8 directions are small, searching for feature points would lose detail information. This step therefore improves the MRRID: every pixel point in each block image is regarded as a feature point and its gray features are extracted, forming gray feature vectors, as follows:
The first step: gray grouping. First, the gray values of the pixel points in each block image are sorted from small to large to form a pixel sequence; the sequence is then divided into k gray groups according to the total number of pixel points, forming k gray-group images; the boundary point of each gray group is then determined by rounding, and its gray value is obtained, as shown in formula (2):
wherein the term on the left represents the boundary point of each group, t_i represents the boundary value of the i-th gray group, and I_min and I_max represent the minimum and maximum gray values of the image pixel points, respectively.
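The quantile-style grouping just described can be sketched as follows; the exact rounding convention of formula (2) is not reproduced above, so the rank rounding used here is an assumption:

```python
import numpy as np

def gray_group_boundaries(block, k):
    """Sort the block's pixels ascending and split them into k
    equally-populated groups; boundary gray values are read off at
    rounded rank positions (rounding convention is an assumption)."""
    seq = np.sort(block.ravel())
    n = seq.size
    # rank of the i-th boundary, rounded to the nearest pixel index
    idx = [min(n - 1, int(round(i * n / k)) - 1) for i in range(1, k + 1)]
    return seq[idx]

# toy block whose gray values are simply 0..99: k = 5 equal groups
bounds = gray_group_boundaries(np.arange(100).reshape(10, 10), 5)
```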
The second step: calculating the gray feature vector of each pixel point. Each pixel point has 8 nearest-neighbour points, so the invention takes each pixel point in each gray-group image as the center and compares the gray values of its symmetric neighbour pairs. For example, the neighbours labeled 1 and 5 form a symmetric pair of pixel point i, as shown in FIG. 4; if the gray value of the point labeled 1 is greater than that of the point labeled 5, the corresponding bit is 1, otherwise 0. This forms a 4-bit binary-coded gray feature vector, which is then converted into a 16-bit binary-coded gray feature vector using formula (3).
wherein i represents the i-th pixel point, and m represents the index of the symmetric pair of nearest-neighbour points of the pixel point.
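The symmetric-pair comparison can be sketched for a single pixel as follows; the neighbour ordering and bit weighting are assumptions, but the result is a 4-bit code taking one of 16 values, matching the 16 bins used in the histograms below:

```python
def pair_code(neighbors):
    """neighbors: gray values of the 8 nearest neighbours of one pixel,
    listed as labels 1..8 so that labels (1,5), (2,6), (3,7), (4,8) are
    the symmetric pairs. Bit m is 1 when the first point of pair m is
    brighter than its opposite; the 4 bits give a value in 0..15."""
    bits = [1 if neighbors[m] > neighbors[m + 4] else 0 for m in range(4)]
    return sum(b << m for m, b in enumerate(bits))

# pairs (9,2) -> 1, (1,3) -> 0, (5,5) -> 0, (7,6) -> 1  =>  bits 1001
code = pair_code([9, 1, 5, 7, 2, 3, 5, 6])
```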
5) The gray feature vectors of the pixel points in each gray-group image are accumulated to form the gray feature histogram of that gray-group image; the gray feature histograms of the gray-group images are concatenated to form the gray feature histogram of a block image; the gray feature histograms of all block images of each of the three single finger modalities are concatenated to form three single-modality gray feature histograms, which represent the GLGF features of the single-modality images. If the number of block images of each single-modality Gabor directional feature coded image in the 8 directions is N, the dimension of a single-modality gray feature histogram is N × k × 16. Here, taking N = 8 × 8 and k = 5, the gray feature histograms of the corresponding block images in the first row and first column of the three single modalities are shown in FIG. 5. Finally, the gray feature histograms of the three single-modality images are concatenated to form the finger tri-modal gray feature histogram.
In addition, according to formula (2), the value of the gray-group number k is related to the size of the block images: block images containing different numbers of pixels have different optimal values of k. The optimal gray-group number k and block number N are therefore determined from ROC (receiver operating characteristic) curves so that the matching accuracy of the finger tri-modal gray feature histogram is highest. First, with N fixed at 8 × 8, FIG. 6 shows that the matching accuracy is highest when k = 7; then, with k fixed at 7, FIG. 7 shows that the matching accuracy is highest when N = 8 × 8. The recognition performance of the method of the present invention is therefore best when N = 8 × 8 and k = 7.
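The dimension bookkeeping implied above (N blocks × k groups × 16 bins per modality, three modalities concatenated) can be checked with a small sketch:

```python
import numpy as np

# dimension bookkeeping for the fused histogram (sketch):
# each block contributes k groups x 16 bins; one modality with N blocks
# gives N * k * 16 bins; the three modalities are concatenated
N, k = 8 * 8, 7          # best-performing values reported above
per_modality = N * k * 16
tri_modal_dim = 3 * per_modality

def fuse_histograms(modality_hists):
    """Concatenate the per-modality histograms into one feature vector."""
    return np.concatenate(modality_hists)

fused = fuse_histograms([np.zeros(per_modality)] * 3)
```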
6) Judging whether the two finger ROI images match by calculating the intersection coefficient of the tri-modal gray feature histograms of the two finger ROI images to be matched;
Whether two finger ROI images match is judged by calculating the intersection coefficient of their tri-modal gray feature histograms; the larger the intersection coefficient of the histograms, the higher the probability of a match.
In the formula, m_1 and m_2 respectively represent the two finger ROI images to be matched, H_m1(i) and H_m2(i) respectively represent their tri-modal gray feature histograms, and L represents the dimension of the finger tri-modal gray feature histogram.
In the image matching process, the intersection coefficient of the tri-modal gray feature histograms of the two finger ROI images to be matched is calculated first. If the calculated intersection coefficient is greater than the similarity decision threshold T, the two finger ROI images are similar, i.e. they match; if the intersection coefficient is less than or equal to T, they are judged not to match. The similarity decision threshold T is the threshold point at which the false rejection rate of the finger ROI image matching results is 0 and the false acceptance rate is lowest.
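A minimal sketch of the matching rule, assuming the histograms are normalized to unit sum and using the classic bin-wise-minimum form of histogram intersection (the patent's exact expression is not reproduced above):

```python
import numpy as np

def intersection_coefficient(h1, h2):
    """Histogram intersection: sum of bin-wise minima. Assumes both
    histograms are normalized to unit sum (a common convention; the
    patent formula itself is not reproduced here)."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return float(np.minimum(h1, h2).sum())

def is_match(h1, h2, T):
    """Declare a match when the coefficient exceeds threshold T."""
    return intersection_coefficient(h1, h2) > T
```

Identical histograms give a coefficient of 1.0; completely disjoint ones give 0.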
The present inventors conducted two sets of experiments based on the above method. In both sets, the finger tri-modal database was collected by a self-made system. The database contains 300 different individuals, each contributing 10 fingerprint ROI images, 10 finger vein ROI images and 10 finger-knuckle ROI images, for a total of 9000 finger tri-modal ROI images; the finger posture of the single-modality images differs across acquisitions for each individual. Because the resolutions of the single-modality finger images in the database differ, the finger vein and finger-knuckle images in the self-made database were resized to 152 × 152 and 88 × 200, respectively. The experiments were run on a PC under Matlab R2010a.
In the first set of experiments, four images of finger vein ROI were selected from the homemade database, as shown in fig. 8. The four finger vein ROI images all belong to the same person, and the postures of the four finger vein ROI images are different.
In this experiment, we used the following three feature extraction methods represented by histograms to verify the rotation invariant property of the GLGF feature extraction method proposed by the present invention.
1. Gabor directional feature coding: first, following steps 1 and 2, the Gabor directional feature coded images of the 4 finger veins are formed; then, following step 3, each of the 4 coded images is divided into 8 × 8 blocks, and each coded block image is described by the conventional gray-histogram representation, i.e. the number of pixel points at each gray value from 0 to 255 of the coded image is counted and accumulated to form the gray histogram of each coded block image of the 4 finger veins; finally, the gray histograms of the coded block images of each of the 4 finger veins are concatenated to form the histograms of the Gabor directional feature coded images of the 4 finger veins.
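The conventional gray-histogram representation used in this baseline can be sketched as follows (a minimal illustration; the 256-value range follows the text):

```python
import numpy as np

def gray_histogram(coded_block):
    """Count the pixels at each gray value 0..255 of one coded block image."""
    return np.bincount(coded_block.ravel().astype(np.uint8), minlength=256)

def image_histogram(blocks):
    """Concatenate the 256-bin histograms of all block images."""
    return np.concatenate([gray_histogram(b) for b in blocks])

h = gray_histogram(np.array([[0, 0], [1, 255]]))
```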
2. Improved MRRID feature histogram: first, following step 3, the 4 finger vein ROI images are divided into 8 × 8 blocks; then, following step 4, each finger vein ROI block image is described by the improved MRRID, forming its improved MRRID feature histogram; finally, the improved MRRID feature histograms of the block images of each finger vein ROI are concatenated to form the improved MRRID feature histogram of the finger vein ROI image.
3. GLGF feature histogram: first, following steps 1 and 2, the Gabor directional feature coded images of the 4 finger veins are formed; then, following step 3, each coded image is divided into 8 × 8 blocks, and each coded block image is described by the improved MRRID described in step 4, forming its improved MRRID feature histogram; finally, the improved MRRID feature histograms of all coded block images are concatenated to form the improved MRRID feature histogram of the Gabor directional feature coded image, namely the GLGF feature histogram proposed by the invention.
According to the above-mentioned feature histogram formation process, feature histograms of four finger vein ROI images are respectively matched, and their intersection coefficients are compared, as shown in table 1. As can be seen from the data in Table 1, the GLGF feature extraction method has better rotation invariant characteristic, and solves the problem of variable finger postures to a certain extent.
TABLE 1 histogram similarity coefficient
In a second set of experiments, we compared the recognition performance of the GLGF feature extraction method, the Gabor directional feature coding method described in steps 1 and 2, and the improved MRRID feature extraction method described in step 4; the ROC (receiver operating characteristic) curves are shown in FIG. 9. Table 2 gives the matching times and corresponding EERs (equal error rate, the point at which the false acceptance rate equals the false rejection rate) of the three feature extraction methods. The matching times in Table 2 show that the GLGF method matches quickly. The results in FIG. 9 and Table 2 show that the GLGF feature extraction method not only alleviates the problem of variable finger posture to a certain extent but also matches well, improves matching efficiency, and is feasible in practice.
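The EER reported in Table 2 can be computed from genuine and impostor score lists roughly as follows (a simplified threshold sweep; the sweep granularity is an assumption):

```python
def eer(genuine, impostor):
    """Sweep the decision threshold over all observed scores and return the
    error rate where the false acceptance rate (impostor scores above the
    threshold) is closest to the false rejection rate (genuine scores at or
    below it). A match is declared when score > threshold."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s > t for s in impostor) / len(impostor)
        frr = sum(s <= t for s in genuine) / len(genuine)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

rate = eer([0.9, 0.8, 0.7], [0.2, 0.3, 0.75])
```

At threshold 0.7 both error rates equal 1/3 in this toy example, so the sketch returns 1/3.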
Table 2 identification capabilities of different descriptors
Claims (4)
1. A finger multi-modal feature fusion method based on Gabor coding is characterized in that: the finger multi-modal feature fusion method based on Gabor coding comprises the following steps in sequence:
1) gabor filtering is carried out on ROI images of fingerprints, finger veins and finger joint prints of different postures by utilizing Gabor filters with different scale parameters, and three-mode Gabor direction characteristic images of the finger in 8 directions, namely 0 degree, 22.5 degrees, 45 degrees, 67.5 degrees, 90 degrees, 112.5 degrees, 135 degrees and 157.5 degrees are obtained respectively;
2) arranging the angle values of the finger three-mode Gabor direction characteristic images in the 8 directions from small to large respectively, and coding the angle values to form finger three-mode Gabor direction characteristic coded images in the 8 directions;
3) blocking the finger three-mode Gabor direction feature coding image in the 8 directions to form a blocked image;
4) all pixel points in the finger tri-modal Gabor directional feature coded block images in the 8 directions are regarded as feature points and their gray features are extracted, forming gray feature vectors, as follows:
the first step: gray grouping: first, the gray values of the pixel points in each block image are sorted from small to large to form a pixel sequence; the sequence is then divided into k gray groups according to the total number of pixel points, forming k gray-group images; the boundary point of each gray group is then determined by rounding, and its gray value is obtained;
the second step: calculating the gray feature vector of each pixel point: taking each pixel point in each gray-group image as the center, the gray values of its symmetric neighbour points are compared; if the gray value of one point of a symmetric pair is greater than that of its symmetric neighbour, the corresponding bit is 1, otherwise 0, thereby forming a 4-bit binary-coded gray feature vector, which is then converted into a 16-bit binary-coded gray feature vector;
5) superposing the gray characteristic vectors of each pixel point in each gray grouping image to form a gray characteristic histogram of each gray grouping image, connecting the gray characteristic histograms of each gray grouping image in series to form a gray characteristic histogram of a blocking image, fusing the gray characteristic histograms of all blocking images of three single modes of the finger in series to form three single-mode gray characteristic histograms, and finally connecting the gray characteristic histograms of the three single-mode images in series to fuse the three single-mode gray characteristic histograms to form the finger three-mode gray characteristic histogram;
6) judging whether the two finger ROI images match by calculating the intersection coefficient of the tri-modal gray feature histograms of the two finger ROI images to be matched.
2. The Gabor-coding-based finger multimodal feature fusion method according to claim 1, wherein: in step 2), the method for forming the finger three-modality Gabor directional feature coded image in 8 directions by arranging the angle values of the finger three-modality Gabor directional feature image in 8 directions from small to large and coding the angle values comprises: firstly, the angle values of the finger three-mode Gabor direction characteristic images in the 8 directions are respectively arranged from small to large, then the gray values of pixel points at the same positions in the images are respectively compared, the direction of the finger three-mode Gabor direction characteristic image corresponding to the maximum gray value is respectively used as the direction characteristic of the pixel point, and the encoding is carried out according to the following encoding method: 0 ° code is 0, 22.5 ° code is 1, 45 ° code is 2, 67.5 ° code is 3, 90 ° code is 4, 112.5 ° code is 5, 135 ° code is 6, 157.5 ° code is 7; thereby forming 8-direction finger three-mode Gabor direction characteristic coding images.
3. The Gabor-coding-based finger multimodal feature fusion method according to claim 1, wherein: in step 4), the formula for converting the 4-bit binary-coded gray feature vector into a hexadecimal gray feature vector is as follows:
wherein i denotes the i-th pixel point, and m denotes the number of pairs of nearest-neighbor points of the pixel point.
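The conversion formula itself appears only as an image in the source. Interpreting claim 3 as packing each 4-bit binary code into a single base-16 value, a sketch under that assumption (the bit order and power-of-two weighting are assumptions, not taken from the patent) is:

```python
def binary4_to_hex_value(bits):
    """Convert a 4-bit binary code (most significant bit first) into
    one hexadecimal digit value in 0..15. Assumed reading of the
    patent's binary-to-hexadecimal conversion; the original formula
    image is not reproduced in the text."""
    assert len(bits) == 4 and all(b in (0, 1) for b in bits)
    return sum(b << (3 - k) for k, b in enumerate(bits))

print(binary4_to_hex_value([1, 0, 1, 1]))  # 11 (hex B)
```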
4. The Gabor-coding-based finger multimodal feature fusion method according to claim 1, wherein: in step 6), the method for determining whether two finger ROI images match by calculating the intersection coefficient of the tri-modal gray feature histograms of the two finger ROI images to be matched is: first, the intersection coefficient of the tri-modal gray feature histograms of the two finger ROI images to be matched is calculated with the intersection coefficient expression below; if the calculated intersection coefficient is greater than a similarity decision threshold T, the two finger ROI images are similar, i.e. they match; if the intersection coefficient is less than or equal to T, the two finger ROI images are judged not to match; the similarity decision threshold T is the threshold point at which the false rejection rate in the finger ROI image matching results is 0 and the false acceptance rate is lowest;
the expression of the intersection coefficient is:

P(m1, m2) = ∑_{l=1}^{L} min( H_{m1}(l), H_{m2}(l) )

in the formula: m1 and m2 respectively denote the two finger ROI images to be matched, H_{m1}(l) and H_{m2}(l) denote their finger tri-modal gray feature histograms, and L denotes the dimension of the finger tri-modal image histogram.
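The decision rule of claim 4 reduces to a standard histogram intersection followed by a threshold test; a hedged sketch (function names and the toy histograms are illustrative, and T would in practice be chosen from the matching results as the claim describes):

```python
import numpy as np

def intersection_coefficient(h1, h2):
    """Histogram intersection of two tri-modal gray feature
    histograms h1 and h2 of equal dimension L."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    assert h1.shape == h2.shape
    return np.minimum(h1, h2).sum()

def is_match(h1, h2, T):
    """Claim 4 decision rule: the two finger ROI images match iff
    the intersection coefficient exceeds the similarity threshold T."""
    return intersection_coefficient(h1, h2) > T

h = np.array([0.2, 0.3, 0.5])
print(intersection_coefficient(h, h))                      # 1.0
print(is_match(h, np.array([0.5, 0.3, 0.2]), T=0.8))       # False
```

A normalized histogram intersected with itself gives 1.0, so the coefficient acts as a similarity score in [0, 1] when the histograms are normalized.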
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510496609.7A CN105138974B (en) | 2015-08-12 | 2015-08-12 | A kind of multi-modal Feature fusion of finger based on Gabor coding |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105138974A CN105138974A (en) | 2015-12-09 |
CN105138974B true CN105138974B (en) | 2019-01-11 |
Family
ID=54724318
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510496609.7A Expired - Fee Related CN105138974B (en) | 2015-08-12 | 2015-08-12 | A kind of multi-modal Feature fusion of finger based on Gabor coding |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105138974B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105975951A (en) * | 2016-05-27 | 2016-09-28 | 国创科视科技股份有限公司 | Finger vein and fingerprint fusion identification method of middle part of finger |
CN109409179B (en) * | 2018-03-30 | 2021-10-12 | 中国科学院半导体研究所 | Vein identification method based on neighborhood class coding learning |
CN109598205A (en) * | 2018-11-09 | 2019-04-09 | 国网山东省电力公司淄博供电公司 | The method of Finger print characteristic abstract and compressed encoding based on Gabor transformation |
CN109543580A (en) * | 2018-11-15 | 2019-03-29 | 北京智慧眼科技股份有限公司 | Refer to vena characteristic extracting method, comparison method, storage medium and processor |
CN109902585B (en) * | 2019-01-29 | 2023-04-07 | 中国民航大学 | Finger three-mode fusion recognition method based on graph model |
CN110516595B (en) * | 2019-08-27 | 2023-04-07 | 中国民航大学 | Finger multi-mode feature fusion recognition method based on convolutional neural network |
CN113033491A (en) * | 2021-04-26 | 2021-06-25 | 广东工业大学 | Identification method based on single-hand multi-mode knuckle print feature fusion |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070172114A1 (en) * | 2006-01-20 | 2007-07-26 | The Johns Hopkins University | Fusing Multimodal Biometrics with Quality Estimates via a Bayesian Belief Network |
CN102521575A (en) * | 2011-12-16 | 2012-06-27 | 北京天诚盛业科技有限公司 | Iris identification method based on multidirectional Gabor and Adaboost |
CN102629320A (en) * | 2012-03-27 | 2012-08-08 | 中国科学院自动化研究所 | Ordinal measurement statistical description face recognition method based on feature level |
CN103679153A (en) * | 2013-12-16 | 2014-03-26 | 中国民航大学 | Finger multi-modal biometric characteristic polarization imaging system |
Non-Patent Citations (1)
Title |
---|
Survey of Advances in Multimodal Biometric Recognition Technology; Wang Yu et al.; Computer Applications and Software; 28 Feb. 2009; vol. 26, no. 2; pp. 32-34 |
Also Published As
Publication number | Publication date |
---|---|
CN105138974A (en) | 2015-12-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105138974B (en) | A kind of multi-modal Feature fusion of finger based on Gabor coding | |
CN105095880B (en) | A kind of multi-modal Feature fusion of finger based on LGBP coding | |
Prabhakar et al. | Learning fingerprint minutiae location and type | |
CN105160305B (en) | A kind of multi-modal Feature fusion of finger | |
CN102938065B (en) | Face feature extraction method and face identification method based on large-scale image data | |
JP7130905B2 (en) | Fast and Robust Dermatoglyphic Mark Minutia Extraction Using Feedforward Convolutional Neural Networks | |
Tavallali et al. | Robust cascaded skin detector based on AdaBoost | |
Zaeri | Minutiae-based fingerprint extraction and recognition | |
CN111126240B (en) | Three-channel feature fusion face recognition method | |
CN105956560A (en) | Vehicle model identification method based on pooling multi-scale depth convolution characteristics | |
Nguyen et al. | Deep learning for american sign language fingerspelling recognition system | |
Bakshi et al. | A novel phase-intensive local pattern for periocular recognition under visible spectrum | |
Fei et al. | Jointly heterogeneous palmprint discriminant feature learning | |
Jalilian et al. | Enhanced segmentation-CNN based finger-vein recognition by joint training with automatically generated and manual labels | |
Vij et al. | Learning minutiae neighborhoods: A new binary representation for matching fingerprints | |
Mohamed et al. | Adaptive extended local ternary pattern (aeltp) for recognizing avatar faces | |
RIZVI et al. | A comparative study on handwritten Bangla character recognition | |
Khodadoust et al. | Partial fingerprint identification for large databases | |
Zambrano et al. | Iris recognition using low-level CNN layers without training and single matching | |
Krishnan et al. | Finger vein recognition based on anatomical features of vein patterns | |
Hassin et al. | Machine Learning System for Human–Ear Recognition Using Scale Invariant Feature Transform | |
Fang et al. | Deep belief network based finger vein recognition using histograms of uniform local binary patterns of curvature gray images | |
Charfi et al. | Hand verification system based on multi-features fusion | |
Arora et al. | Sp-net: One shot fingerprint singular-point detector | |
CN113033491A (en) | Identification method based on single-hand multi-mode knuckle print feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190111 Termination date: 20190812 |