CN113673363B - Finger vein recognition method combining apparent similarity and singular point matching number - Google Patents

Finger vein recognition method combining apparent similarity and singular point matching number

Info

Publication number
CN113673363B
CN113673363B (application number CN202110858465.0A)
Authority
CN
China
Prior art keywords
image
test image
template
singular point
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110858465.0A
Other languages
Chinese (zh)
Other versions
CN113673363A (en)
Inventor
王新年
刘永莹
张鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202110858465.0A
Publication of CN113673363A
Application granted
Publication of CN113673363B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133: Distances to prototypes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration by the use of local operators
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/40: Image enhancement or restoration by the use of histogram techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection

Abstract

The invention provides a finger vein recognition method and system combining apparent similarity and the number of singular point matches, comprising the following steps: preprocessing each image in a data set for finger vein recognition; obtaining features of the test image and of each template image based on a spatial overlapping pyramid and orthogonal locality preserving projection (OLPP) dimensionality reduction, and calculating the apparent similarity between the test image and the template image from these features; calculating the number of singular point matches between the test image and the template image, and from it the similarity of the number of singular point matches; performing weighted fusion of the apparent similarity and the similarity of the number of singular point matches to obtain a weighted matching score; and classifying the weighted matching scores of the test image against all template images, the image label corresponding to the classification result being the recognition result. The invention effectively addresses the low recognition rate caused by differing finger postures during finger vein acquisition.

Description

Finger vein recognition method combining apparent similarity and singular point matching number
Technical Field
The invention relates to the technical field of finger vein recognition, in particular to a finger vein recognition method and a system combining apparent similarity and singular point matching number.
Background
Current finger vein recognition methods fall mainly into deep learning methods and traditional methods. Deep learning methods are chiefly self-learned feature extraction methods based on neural networks: driven by large amounts of finger vein image data, multi-layer network parameters are tuned under the constraints of a deep framework to establish an optimal nonlinear mapping between input and output, completing finger vein recognition formulated as image classification. Traditional methods are mainly texture feature extraction methods, which extract the vein network from the finger vein grayscale image to express the overall topological structure of the finger veins and identify images by the feature distance between vein networks. Texture-based methods describe vein texture with LBP (local binary pattern) or LDP (local derivative pattern) codes and identify finger vein images by the distance between the binary texture features of different targets. Minutiae-based methods extract minutiae such as bifurcation points and end points, together with their positions, to describe the principal detail features, and identify images by the distance between minutiae features of different targets. Statistical feature extraction methods derive recognition features from statistical analysis of batches of finger vein images; reliable statistical features are obtained by reducing the dimensionality of the region of interest with methods such as PCA (principal component analysis), LDA (linear discriminant analysis) and ONPP (orthogonal neighborhood preserving projection), and images are identified by the distance between the statistical features of different targets.
However, the above methods are easily affected by finger posture and low image quality during finger vein acquisition, so that minutiae features cannot be extracted or are extracted incorrectly, which degrades recognition performance.
Disclosure of Invention
In view of the above, the invention provides a finger vein recognition method and a system combining the apparent similarity and the number of singular point matches, which can effectively solve the problem of lower recognition rate caused by different finger postures during finger vein collection.
For this purpose, the invention provides the following technical scheme:
in one aspect, the invention provides a finger vein recognition method combining apparent similarity and singular point matching number, comprising the following steps:
acquiring a data set for finger vein recognition, and preprocessing each image in the data set; the data set comprises a test image and a plurality of template images with image labels;
for each template image that is preprocessed:
respectively obtaining features of the preprocessed test image and the template image based on the spatial overlapping pyramid and orthogonal locality preserving projection (OLPP) dimensionality reduction, and calculating the apparent similarity between the test image and the template image based on those features; calculating the number of singular point matches between the preprocessed test image and the template image, and from it the similarity of the number of singular point matches between the test image and the template image; performing weighted fusion of the apparent similarity and the similarity of the number of singular point matches to obtain the weighted matching score of the test image and the template image; the weighted matching score is calculated as: score = d×λ + (1-pro)×(1-λ); wherein λ is the weighting value, d is the apparent similarity of the test image and the template image, and pro is the similarity of the number of singular point matches of the test image and the template image;
and classifying the weighted matching score values of the test image and all the template images, wherein the image label corresponding to the classification result is the identification result.
Further, preprocessing each image in the dataset includes:
orienting each image with the fingertip upward, converting it to a grayscale image, performing Gaussian filtering, and performing edge detection on the image to obtain the finger edge binary image corresponding to the image;
determining a midline between edge lines on two sides of a finger in the finger edge binary image, and judging whether the image is inclined or not based on the midline;
if the image is inclined, calculating the inclination angle of the image and correcting the inclination angle, and carrying out edge detection on the corrected image again to obtain a binary image of the finger edge;
and projecting the binary image of the finger edge in the vertical direction, respectively solving column coordinates of the left projection peak value and the right projection peak value, and intercepting the region between the two column coordinates to obtain the region of interest.
Further, obtaining the features of the preprocessed test image and the template image based on the spatial overlapping pyramid and orthogonal locality preserving projection dimensionality reduction, respectively, and calculating the apparent similarity between the test image and the template image based on the features, includes:
respectively carrying out feature extraction and learning on the preprocessed test image and the template image by adopting a multidirectional pixel difference vector feature extraction and learning method to obtain a mapping matrix W and a codebook D of the test image and the template image;
respectively constructing a spatial overlapping pyramid of the test image and the template image to obtain histogram feature representations of the test image and the template image;
respectively carrying out dimension reduction on the histogram features of the test image and the template image by using orthogonal locality preserving projection;
and calculating the cosine distance between the feature of the test image and the feature of the template image after dimension reduction, and taking the cosine distance as the apparent similarity of the test image and the template image.
Further, respectively constructing a spatial overlap pyramid of the test image and the template image to obtain histogram feature representations of the test image and the template image, including:
for the obtained multi-directional pixel difference vector features of the test image and the template image, mapping the multi-directional pixel difference vector features into low-dimensional binary features B through respective mapping matrixes W;
respectively constructing a three-layer pyramid for the test image and the template image, and dividing each layer image into partially overlapping blocks of a fixed size, wherein the numbers of blocks in the layer-1, layer-2 and layer-3 images are 1, 4 and 16 respectively;
performing boundary extension on the four boundaries of the image, the extension method being zero padding; the layer-1 image is not partitioned and therefore is not extended, and the extended sizes of the layer-2 and layer-3 images are:
(w + w/2^(L-1)) × (h + h/2^(L-1)), L = 2, 3;
where w is the width of the original image, h is the height of the original image, and L is the layer number.
Partitioning each layer of image, wherein the size of a block in the layer-L image is:
(w/2^(L-2)) × (h/2^(L-2)), L = 2, 3;
and the layer-1 image forms a single block;
the size of the overlapped local area between two adjacent blocks in the horizontal direction of the 2 nd layer and the 3 rd layer is as follows:
(w/2^(L-1)) × (h/2^(L-2)), L = 2, 3;
the size of the overlapped local area between two adjacent blocks in the vertical direction of the 2 nd layer and the 3 rd layer is as follows:
(w/2^(L-2)) × (h/2^(L-1)), L = 2, 3;
calculating Euclidean distance between B of each sub-block image in different layers and the codebook D, and counting characteristic histograms of each sub-block in different layers;
the histograms obtained in each layer are concatenated, with each layer assigned a corresponding weight; the weights of layers 1, 2 and 3 are 1/4, 1/4 and 1/2 respectively, and the histogram features of the different layers are concatenated in the order layer 3, layer 2, layer 1.
Further, calculating the singular point matching number of the preprocessed test image and the template image, and calculating the similarity of the singular point matching number between the test image and the template image based on the singular point matching number, including:
detecting singular points of the test image and the template image;
taking a 20×20-pixel region centred on each extracted singular point, downsampling the region to 5×5, generating a 25-dimensional feature vector from its pixel values, and normalizing the feature vector to serve as the feature of the singular point;
calculating the Euclidean distances between each singular point feature in the test image and all singular point features in the template image and sorting them in ascending order, calculating the ratio R of the smallest distance to the second smallest distance, considering the test image and the template image to have a matched singular point when R is less than 0.5, and counting the number of matched singular points;
and calculating the proportion of the number of the matched singular points to the total number of the singular points of the test image, and taking the proportion as the similarity of the number of the matched singular points of the test image and the template image.
Further, the classification method is a k-nearest neighbor method.
Further, λ takes 0.5.
Further, the Sobel operator is used for edge detection.
Further, singular point detection is performed by Harris corner detection.
In yet another aspect, the present invention further provides a finger vein recognition system combining the apparent similarity and the number of singular point matches, the system comprising:
the preprocessing module is used for acquiring a data set for finger vein recognition and preprocessing each image in the data set; the data set comprises a test image and a plurality of template images with image labels;
the similarity calculation module is configured to perform similarity calculation on each preprocessed template image obtained by the preprocessing module, and includes: the system comprises an apparent similarity calculation sub-module, a singular point matching number similarity calculation sub-module and a weighted matching score calculation sub-module;
the apparent similarity calculation sub-module is used for respectively obtaining the features of the preprocessed test image and the template image based on the spatial overlapping pyramid and orthogonal locality preserving projection (OLPP) dimensionality reduction, and calculating the apparent similarity between the test image and the template image based on the features;
the singular point matching number similarity calculation sub-module is used for calculating the number of singular point matches between the preprocessed test image and the template image, and for calculating the similarity of the number of singular point matches between the test image and the template image based on that number;
the weighted matching score calculation sub-module is used for performing weighted fusion of the apparent similarity and the similarity of the number of singular point matches of the test image and the template image to obtain the weighted matching score of the test image and the template image; the weighted matching score is calculated as: score = d×λ + (1-pro)×(1-λ); wherein λ is the weighting value, d is the apparent similarity of the test image and the template image, and pro is the similarity of the number of singular point matches of the test image and the template image;
and the finger vein recognition module is used for classifying the weighted matching score values of the test image and all the template images obtained by the similarity calculation module, and the image label corresponding to the classification result is the recognition result.
Compared with the prior art, the technical scheme of the invention has the following advantages and positive effects:
in this technical scheme, finger vein recognition combines the apparent similarity between the test image and the template image with the similarity of the number of singular point matches. On the one hand, changes in finger posture appear as translations of the finger veins in the vein image; the spatial pyramid adds spatial position information to the histogram features but is sensitive to translation, so a spatial overlapping pyramid, with spatially overlapping regions between blocks, tolerates translation within a certain range. On the other hand, the number of singular point matches is unaffected by image translation because the feature points are matched individually. Combining the two therefore effectively addresses the low recognition rate caused by differing finger postures during finger vein acquisition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort to a person skilled in the art.
FIG. 1 is a flow chart of a finger vein recognition method combining apparent similarity and singular point matching numbers in an embodiment of the invention;
FIG. 2 is a schematic diagram of a spatial pyramid in an embodiment of the present invention;
FIG. 3 is a template image of the same finger with different acquired poses in an embodiment of the present invention;
fig. 4 shows test images of the same finger with different collected postures in the embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, a flowchart of a finger vein recognition method combining apparent similarity and singular point matching number in an embodiment of the present invention is shown, where the method includes:
S1, acquiring a finger vein image data set, and preprocessing the finger vein images in the data set;
wherein the dataset comprises a test image and a plurality of template images with image tags.
In specific implementation, S1 comprises the following steps:
S11, detecting the edges of the finger;
and unifying all images in the data set into the upward direction of the finger tip, converting the upward direction of the finger tip into a gray level image, carrying out Gaussian filtering, and carrying out edge detection on the image to obtain a finger edge binary image. Preferably, in the embodiment of the invention, the Sobel operator is used for detecting the edges of the two sides of the finger.
S12, determining a central line between edge lines on two sides of a finger in the image;
and selecting the front five-line area and the last five-line area of the image for the finger edge binary image after edge detection, respectively calculating the average value of the line and column coordinates of the edge line in each area, rounding upwards to be used as the midpoint coordinates of the two areas, and connecting the two midpoint coordinates, wherein the slope is b.
And calculating the difference between the maximum column coordinate and the minimum column coordinate of the two middle points, if the difference is smaller than 10 pixels, considering the image to be normal, and if the difference exceeds 10 pixels, considering the image to be inclined.
S13, if the image is inclined, calculating the inclination angle of the image and correcting the inclination angle;
wherein, the calculation formula of the inclination angle alpha of the image is as follows:
α = arctan(b) × 360/(2π), i.e. arctan(b) expressed in degrees.
after the inclination angle is obtained, correcting the inclined image based on the inclination angle, specifically: if the angle is positive, the image is rotated clockwise by the angle, and if the angle is negative, the image is rotated counterclockwise by the angle.
S14, acquiring a region of interest of the image.
A normal image does not need edge detection to be performed again, while a corrected image requires Sobel edge detection to be applied again to obtain the finger edge binary image. The finger edge binary image is projected in the vertical direction, the column coordinates of the left and right projection peaks are determined, and the region between the two column coordinates is cropped to obtain the region of interest.
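Putting S11 to S14 together, a minimal sketch of the preprocessing chain is given below. It assumes OpenCV/NumPy; the Gaussian kernel, the gradient threshold, the helper names and the way the projection peaks are searched are illustrative assumptions rather than values fixed by the method:

```python
import cv2
import numpy as np

def finger_edge_binary(img_bgr, grad_thresh=60):
    """S11: grayscale -> Gaussian filter -> Sobel gradients -> binary edge map."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    gx = cv2.Sobel(blur, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(blur, cv2.CV_64F, 0, 1, ksize=3)
    return (np.hypot(gx, gy) > grad_thresh).astype(np.uint8), gray

def correct_tilt(gray, edges):
    """S12-S13: midpoints of the first/last five rows -> slope b -> rotate if tilted."""
    def midpoint(region, row_offset):
        rows, cols = np.nonzero(region)
        return row_offset + int(np.ceil(rows.mean())), int(np.ceil(cols.mean()))
    r1, c1 = midpoint(edges[:5], 0)
    r2, c2 = midpoint(edges[-5:], edges.shape[0] - 5)
    if abs(c1 - c2) < 10:                  # column difference under 10 px: not tilted
        return gray
    b = (c2 - c1) / float(r2 - r1)         # slope of the line joining the two midpoints
    alpha = np.degrees(np.arctan(b))       # tilt angle in degrees
    h, w = gray.shape[:2]
    # negative sign: OpenCV rotates counter-clockwise for positive angles,
    # whereas the text rotates clockwise when alpha is positive
    M = cv2.getRotationMatrix2D((w / 2, h / 2), -alpha, 1.0)
    return cv2.warpAffine(gray, M, (w, h))

def extract_roi(gray, edges):
    """S14: vertical projection of the edge map, crop between left/right peaks."""
    proj = edges.sum(axis=0)               # one value per column
    mid = proj.shape[0] // 2
    left = int(np.argmax(proj[:mid]))
    right = int(mid + np.argmax(proj[mid:]))
    return gray[:, left:right + 1]
```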
S2, calculating the apparent similarity between the preprocessed test image and each template image based on the spatial overlapping pyramid and orthogonal locality preserving projection (OLPP) dimensionality reduction;
in specific implementation, S2 includes the following specific steps:
S21, extracting and learning features of the preprocessed test image and each template image;
preferably, a multi-directional pixel difference vector feature extraction and learning method is adopted to obtain a mapping matrix W and a codebook D.
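The patent does not restate here how the mapping matrix W and the codebook D are learned from the multi-directional pixel difference vectors; the sketch below is one plausible realization under explicit assumptions (eight neighbour directions, a PCA-learned projection standing in for W, sign binarization, and a k-means codebook), not necessarily the inventors' learning rule:

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def pixel_difference_vectors(gray, radius=1):
    """One multi-directional pixel difference vector per pixel (8 directions assumed)."""
    g = gray.astype(np.float32)
    shifts = [(-radius, 0), (radius, 0), (0, -radius), (0, radius),
              (-radius, -radius), (-radius, radius), (radius, -radius), (radius, radius)]
    diffs = [np.roll(g, s, axis=(0, 1)) - g for s in shifts]
    return np.stack(diffs, axis=-1).reshape(-1, len(shifts))

def learn_mapping_and_codebook(train_images, n_dims=8, n_words=64):
    X = np.vstack([pixel_difference_vectors(im) for im in train_images])
    W = PCA(n_components=n_dims).fit(X).components_.T   # assumed stand-in for the mapping matrix W
    B = (X @ W > 0).astype(np.float32)                  # low-dimensional binary features B
    D = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(B).cluster_centers_
    return W, D                                         # codebook D of n_words codewords
```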
S22, constructing a spatial overlapping pyramid, and obtaining a histogram feature representation of the test image and each template image based on the spatial overlapping pyramid;
in specific implementation, S22 may be performed according to the following steps:
1) For the obtained multi-directional pixel difference vector features, mapping to low-dimensional binary features B through a mapping matrix W.
2) A three-layer pyramid is constructed for each image, and each layer image is divided into partially overlapping blocks of a fixed size; the numbers of blocks in the layer-1, layer-2 and layer-3 images are 1, 4 and 16 respectively. The spatial overlapping pyramid is shown in Fig. 2.
The four boundaries of the image are extended, the extension method being zero padding. The layer-1 image is not partitioned and therefore is not extended; the extended sizes of the layer-2 and layer-3 images are:
(w + w/2^(L-1)) × (h + h/2^(L-1)), L = 2, 3;
where w is the width of the original image, h is the height of the original image, and L is the layer number.
Each layer image is then partitioned; the size of a block in the layer-L image is:
(w/2^(L-2)) × (h/2^(L-2)), L = 2, 3;
with the layer-1 image kept as a single block.
the size of the overlapped local area between two adjacent blocks in the horizontal direction of the 2 nd layer and the 3 rd layer is as follows:
(w/2^(L-1)) × (h/2^(L-2)), L = 2, 3;
the size of the overlapped local area between two adjacent blocks in the vertical direction of the 2 nd layer and the 3 rd layer is as follows:
(w/2^(L-2)) × (h/2^(L-1)), L = 2, 3;
3) And calculating Euclidean distance between B of each sub-block image in different layers and the codebook D, and counting the characteristic histogram of each sub-block in different layers.
4) The histograms obtained in each layer are concatenated, with each layer assigned a corresponding weight; the weights of layers 1, 2 and 3 are 1/4, 1/4 and 1/2 respectively, and finally the histogram features of the different layers are concatenated in the order layer 3, layer 2, layer 1.
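A sketch of steps 2) to 4) under the block geometry stated above; the per-layer block size is derived here from the stated padded sizes and overlaps (stride = block size minus overlap), and B_img is assumed to hold one binary feature vector per pixel:

```python
def block_histogram(B_block, D):
    """Step 3): histogram of nearest codewords (Euclidean distance) for one block."""
    feats = B_block.reshape(-1, D.shape[1])
    dists = np.linalg.norm(feats[:, None, :] - D[None, :, :], axis=2)
    return np.bincount(dists.argmin(axis=1), minlength=D.shape[0]).astype(np.float32)

def overlapping_pyramid_histogram(B_img, D):
    h, w = B_img.shape[:2]
    layer_hist = {1: block_histogram(B_img, D)}          # layer 1: whole image, no padding
    for L in (2, 3):
        pad_h, pad_w = h // 2 ** (L - 1), w // 2 ** (L - 1)
        padded = np.pad(B_img, ((pad_h // 2, pad_h - pad_h // 2),
                                (pad_w // 2, pad_w - pad_w // 2)) + ((0, 0),) * (B_img.ndim - 2))
        bh, bw = h // 2 ** (L - 2), w // 2 ** (L - 2)     # block size, derived from the overlaps
        sh, sw = h // 2 ** (L - 1), w // 2 ** (L - 1)     # stride = block size - overlap
        hists = [block_histogram(padded[r * sh:r * sh + bh, c * sw:c * sw + bw], D)
                 for r in range(2 ** (L - 1)) for c in range(2 ** (L - 1))]
        layer_hist[L] = np.concatenate(hists)
    weights = {1: 0.25, 2: 0.25, 3: 0.5}
    # concatenate layer 3, then layer 2, then layer 1, each scaled by its weight
    return np.concatenate([weights[L] * layer_hist[L] for L in (3, 2, 1)])
```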
S23, dimension reduction is carried out on the obtained characteristics by using OLPP;
after obtaining the histogram feature representation of the image, OLPP is used to reduce the feature dimension to 70 dimensions.
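The OLPP construction itself is not re-derived in the patent; the sketch below uses a common simplification (solve the locality-preserving-projection generalized eigenproblem, then orthogonalize the basis with QR), which approximates rather than reproduces the iterative OLPP algorithm, with the neighbour count k and kernel width t as assumed parameters:

```python
from scipy.linalg import eigh

def olpp_like_projection(X, n_dims=70, k=5, t=1.0):
    """X: n_samples x n_features matrix of pyramid histograms; returns an orthogonal basis."""
    n = X.shape[0]
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)   # pairwise squared distances
    aff = np.zeros((n, n))
    for i in range(n):                                       # heat-kernel weights over k neighbours
        nbrs = np.argsort(d2[i])[1:k + 1]
        aff[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    aff = np.maximum(aff, aff.T)                             # symmetric adjacency graph
    deg = np.diag(aff.sum(axis=1))
    lap = deg - aff                                          # graph Laplacian
    A = X.T @ lap @ X
    B = X.T @ deg @ X + 1e-6 * np.eye(X.shape[1])            # small ridge keeps B positive definite
    _, vecs = eigh(A, B)                                     # eigenvalues in ascending order
    basis, _ = np.linalg.qr(vecs[:, :n_dims])                # orthogonalize the 70-D basis
    return basis                                             # reduce features with X @ basis
```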
S24, calculating the cosine distance as the apparent similarity.
For the test image I_t and the template image I_g, the cosine distance d between their OLPP-reduced spatial pyramid histogram features i_t and i_g is computed and taken as the apparent similarity:
d = 1 - (i_t · i_g)/(|i_t|·|i_g|),
where |·| denotes the vector norm.
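Computed directly on the reduced features; treating d as a distance (smaller d for more similar images) is what lets it combine naturally with 1 - pro in the weighted score:

```python
def cosine_distance(i_t, i_g):
    """Apparent similarity term d between OLPP-reduced feature vectors."""
    cos_sim = float(np.dot(i_t, i_g) / (np.linalg.norm(i_t) * np.linalg.norm(i_g)))
    return 1.0 - cos_sim
```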
S3, similarity calculation based on the number of singular point matches;
in specific implementation, S3 includes the following specific steps:
S31, detecting singular points of the test image and the template image;
preferably, in the embodiment of the invention, harris corner detection is adopted to detect singular points.
S32, carrying out feature description on the detected singular points;
taking each extracted singular point as a center, taking a region with the size of 20 multiplied by 20 pixels, downsampling the region to the size of 5 multiplied by 5, taking the pixel value of the region to generate a 25-dimensional feature vector, and carrying out normalization processing on the vector as the feature of the singular point.
S33, calculating the number of singular point matches of the test image and the template image;
and calculating Euclidean distances between each singular point feature in the test image and all singular point features in the template image and arranging in ascending order, calculating the ratio R of the minimum distance to the second minimum distance, considering that the test image and the template image have matched singular points when R is less than 0.5, and counting the number n of the matched singular points.
S34, calculating the similarity of the number of singular point matches;
calculating the ratio pro of the number of the matching points to the total number N of the singular points of the test image, and taking the ratio pro as the similarity of the number of the singular point matching of the two images:
pro=n/N。
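A sketch of S31 to S34; cv2.goodFeaturesToTrack with the Harris option stands in for the Harris detector, and its parameters, the border handling and the small epsilon terms are assumptions:

```python
def singular_point_features(gray, max_points=100):
    """S31-S32: Harris corners, each described by a normalised 25-D patch vector."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_points, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True)
    if pts is None:
        return np.empty((0, 25), np.float32)
    feats = []
    for x, y in pts.reshape(-1, 2).astype(int):
        patch = gray[y - 10:y + 10, x - 10:x + 10]          # 20x20 region centred on the point
        if patch.shape != (20, 20):
            continue                                        # skip points too close to the border
        v = cv2.resize(patch, (5, 5), interpolation=cv2.INTER_AREA).astype(np.float32).ravel()
        feats.append(v / (np.linalg.norm(v) + 1e-8))        # 25-D normalised descriptor
    return np.array(feats, dtype=np.float32)

def singular_point_similarity(test_feats, templ_feats, ratio=0.5):
    """S33-S34: ratio test (R < 0.5) and pro = matched points / total test points."""
    if len(test_feats) == 0 or len(templ_feats) < 2:
        return 0.0
    n = 0
    for f in test_feats:
        d = np.sort(np.linalg.norm(templ_feats - f, axis=1))   # distances to all template points
        if d[0] / (d[1] + 1e-8) < ratio:
            n += 1
    return n / float(len(test_feats))                           # pro = n / N
```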
In specific execution, the order of S2 and S3 is arbitrary: S2 may be executed before S3, S3 before S2, or the two may be executed simultaneously; no limitation is imposed here.
S4, carrying out weighted fusion on the appearance similarity and the number similarity of the singular points, and calculating weighted matching scores of the test image and all template images;
the weighted matching score calculation formula is as follows:
score=d×λ+(1-pro)×(1-λ);
where λ is the weighting value, preferably 0.5.
S5, classifying weighted matching score values of the test image and all template images, wherein an image label corresponding to the classification result is the identification result of the test image.
Preferably, the classification method may be KNN (k-nearest neighbor).
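Putting S4 and S5 together; the 1-NN decision below assumes that lower scores mean higher similarity (both d and 1 - pro decrease as the images become more alike), which matches using the weighted score as a distance in a k-nearest-neighbour classifier:

```python
def recognise(test_d, test_pro, template_labels, lam=0.5):
    """test_d, test_pro: per-template apparent distances d and match ratios pro."""
    scores = [d * lam + (1.0 - pro) * (1.0 - lam) for d, pro in zip(test_d, test_pro)]
    best = int(np.argmin(scores))            # k-nearest neighbour with k = 1
    return template_labels[best], scores[best]
```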
In the above embodiment, finger vein recognition combines the apparent similarity between the test image and the template image with the similarity of the number of singular point matches. On the one hand, changes in finger posture appear as translations of the finger veins in the vein image; the spatial pyramid adds spatial position information to the histogram features but is sensitive to translation, so a spatial overlapping pyramid, with spatially overlapping regions between blocks, tolerates translation within a certain range. On the other hand, the number of singular point matches is unaffected by image translation because the feature points are matched individually. Combining the two therefore effectively addresses the low recognition rate caused by differing finger postures during finger vein acquisition.
In a specific embodiment, the following data sets are employed:
Number of image classes | Samples per class | Subjects (persons) | Total number of samples
84 | 6 | 14 | 504
The original finger vein images are 256×320 pixels; after size normalization and ROI extraction the images are 120×260 pixels. Figs. 3 and 4 show template images and test images of the same finger acquired in different postures, where (a) is a finger vein image acquired in the normal posture, (b), (c) and (d) are images acquired with the finger rotated to the left about its axis, and (e) and (f) are images acquired with the finger rotated to the right about its axis.
1) For finger vein recognition where the finger posture differs little between the test image and the template image, e.g. matching each of (a)-(f) in Fig. 4 against the correspondingly labelled image in Fig. 3, the recognition accuracy is 100%.
2) For finger vein recognition where the finger posture differs greatly between the test image and the template image, i.e. matching images in Figs. 3 and 4 whose labels differ, the accuracy on this data set is 98.016%. In the illustrated examples, test image (a) with template image (e), test image (b) with template image (a), test image (c) with template image (b), test image (d) with template image (c), test image (e) with template image (f), and test image (f) with template image (e) are all correctly identified.
3) The method recognizes finger vein images subject to translation and axial rotation caused by different finger postures well, and is highly practical.
The invention also provides a finger vein recognition system combining the apparent similarity and the number of singular point matches, which comprises:
the preprocessing module is used for acquiring a data set for finger vein recognition and preprocessing each image in the data set; the data set comprises a test image and a plurality of template images with image labels;
the similarity calculation module is configured to perform similarity calculation on each preprocessed template image obtained by the preprocessing module, and includes: the system comprises an apparent similarity calculation sub-module, a singular point matching number similarity calculation sub-module and a weighted matching score calculation sub-module;
the apparent similarity calculation sub-module is used for respectively obtaining the features of the preprocessed test image and the template image based on the spatial overlapping pyramid and orthogonal locality preserving projection (OLPP) dimensionality reduction, and calculating the apparent similarity between the test image and the template image based on the features;
the singular point matching number similarity calculation sub-module is used for calculating the number of singular point matches between the preprocessed test image and the template image, and for calculating the similarity of the number of singular point matches between the test image and the template image based on that number;
the weighted matching score calculation sub-module is used for performing weighted fusion of the apparent similarity and the similarity of the number of singular point matches of the test image and the template image to obtain the weighted matching score of the test image and the template image; the weighted matching score is calculated as: score = d×λ + (1-pro)×(1-λ); wherein λ is the weighting value, d is the apparent similarity of the test image and the template image, and pro is the similarity of the number of singular point matches of the test image and the template image;
and the finger vein recognition module is used for classifying the weighted matching score values of the test image and each template image obtained by the similarity calculation module, and the image label corresponding to the classification result is the recognition result.
Since the finger vein recognition system combining apparent similarity and the number of singular point matches in this embodiment corresponds to the finger vein recognition method described in the above embodiment, its description is relatively brief; for the related details, refer to the description of the method above, which is not repeated here.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (9)

1. A finger vein recognition method combining apparent similarity and singular point matching number, the method comprising:
acquiring a data set for finger vein recognition, and preprocessing each image in the data set; the data set comprises a test image and a plurality of template images with image labels;
for each template image that is preprocessed:
respectively obtaining features of the preprocessed test image and the template image based on the spatial overlapping pyramid and orthogonal locality preserving projection (OLPP) dimensionality reduction, and calculating the apparent similarity between the test image and the template image based on those features; calculating the number of singular point matches between the preprocessed test image and the template image, and from it the similarity of the number of singular point matches between the test image and the template image; performing weighted fusion of the apparent similarity and the similarity of the number of singular point matches to obtain the weighted matching score of the test image and the template image; the weighted matching score is calculated as: score = d×λ + (1-pro)×(1-λ); wherein λ is the weighting value, d is the apparent similarity of the test image and the template image, and pro is the similarity of the number of singular point matches of the test image and the template image;
classifying the weighted matching score values of the test image and all the template images, wherein the image label corresponding to the classification result is the identification result;
the method for obtaining the features of the preprocessed test image and the template image based on the spatial overlapping pyramid and orthogonal locality preserving projection dimensionality reduction, respectively, and calculating the apparent similarity between the test image and the template image based on the features comprises the following steps:
respectively carrying out feature extraction and learning on the preprocessed test image and the template image by adopting a multidirectional pixel difference vector feature extraction and learning method to obtain a mapping matrix W and a codebook D of the test image and the template image;
respectively constructing a spatial overlapping pyramid of the test image and the template image to obtain histogram feature representations of the test image and the template image;
respectively carrying out dimension reduction on the histogram features of the test image and the template image by using orthogonal locality preserving projection;
and calculating the cosine distance between the feature of the test image and the feature of the template image after dimension reduction, and taking the cosine distance as the apparent similarity of the test image and the template image.
2. The finger vein recognition method combining apparent similarity and number of singular point matches according to claim 1, wherein preprocessing each image in the dataset comprises:
orienting each image with the fingertip upward, converting it to a grayscale image, performing Gaussian filtering, and performing edge detection on the image to obtain the finger edge binary image corresponding to the image;
determining a midline between edge lines on two sides of a finger in the finger edge binary image, and judging whether the image is inclined or not based on the midline;
if the image is inclined, calculating the inclination angle of the image and correcting the inclination angle, and carrying out edge detection on the corrected image again to obtain a binary image of the finger edge;
and projecting the binary image of the finger edge in the vertical direction, respectively solving column coordinates of the left projection peak value and the right projection peak value, and intercepting the region between the two column coordinates to obtain the region of interest.
3. The finger vein recognition method combining apparent similarity and number of singular point matches according to claim 1, wherein constructing spatially overlapping pyramids of the test image and the template image, respectively, to obtain histogram feature representations of the test image and the template image, comprises:
for the obtained multi-directional pixel difference vector features of the test image and the template image, mapping the multi-directional pixel difference vector features into low-dimensional binary features B through respective mapping matrixes W;
respectively constructing a three-layer pyramid for the test image and the template image, and dividing each layer image into partially overlapping blocks of a fixed size, wherein the numbers of blocks in the layer-1, layer-2 and layer-3 images are 1, 4 and 16 respectively;
performing boundary extension on the four boundaries of the image, the extension method being zero padding; the layer-1 image is not partitioned and therefore is not extended, and the extended sizes of the layer-2 and layer-3 images are:
(w + w/2^(L-1)) × (h + h/2^(L-1)), L = 2, 3;
wherein w is the width of the original image, h is the height of the original image, and L is the layer number;
partitioning each layer of image, wherein the size of a block in the layer-L image is:
(w/2^(L-2)) × (h/2^(L-2)), L = 2, 3;
and the layer-1 image forms a single block;
the size of the overlapped local area between two adjacent blocks in the horizontal direction of the 2 nd layer and the 3 rd layer is as follows:
(w/2^(L-1)) × (h/2^(L-2)), L = 2, 3;
the size of the overlapped local area between two adjacent blocks in the vertical direction of the 2 nd layer and the 3 rd layer is as follows:
(w/2^(L-2)) × (h/2^(L-1)), L = 2, 3;
calculating Euclidean distance between B of each sub-block image in different layers and the codebook D, and counting characteristic histograms of each sub-block in different layers;
the histograms obtained in each layer are concatenated, with each layer assigned a corresponding weight; the weights of layers 1, 2 and 3 are 1/4, 1/4 and 1/2 respectively, and the histogram features of the different layers are concatenated in the order layer 3, layer 2, layer 1.
4. The finger vein recognition method combining apparent similarity and number of singular point matches according to claim 1, wherein calculating the number of singular point matches between the preprocessed test image and the template image and calculating the similarity of the number of singular point matches between the test image and the template image based on that number comprises:
detecting singular points of the test image and the template image;
taking a 20×20-pixel region centred on each extracted singular point, downsampling the region to 5×5, generating a 25-dimensional feature vector from its pixel values, and normalizing the feature vector to serve as the feature of the singular point;
calculating the Euclidean distances between each singular point feature in the test image and all singular point features in the template image and sorting them in ascending order, calculating the ratio R of the smallest distance to the second smallest distance, considering the test image and the template image to have a matched singular point when R is less than 0.5, and counting the number of matched singular points;
and calculating the proportion of the number of the matched singular points to the total number of the singular points of the test image, and taking the proportion as the similarity of the number of the matched singular points of the test image and the template image.
5. The finger vein recognition method combining apparent similarity and singular point matching number according to claim 1, wherein the classification method is a k-nearest neighbor method.
6. The method for identifying finger veins by combining apparent similarity and singular point matching numbers as claimed in claim 1, wherein λ takes 0.5.
7. The finger vein recognition method combining apparent similarity and singular point matching number according to claim 2, wherein the edge detection is performed using a Sobel operator.
8. The finger vein recognition method combining apparent similarity and singular point matching number according to claim 4, wherein the singular point detection is performed by Harris corner detection.
9. A finger vein recognition system combining apparent similarity with a number of singular point matches, the system comprising:
the preprocessing module is used for acquiring a data set for finger vein recognition and preprocessing each image in the data set; the data set comprises a test image and a plurality of template images with image labels;
the similarity calculation module is configured to perform similarity calculation on each preprocessed template image obtained by the preprocessing module, and includes: the system comprises an apparent similarity calculation sub-module, a singular point matching number similarity calculation sub-module and a weighted matching score calculation sub-module;
the apparent similarity calculation sub-module is used for respectively obtaining the features of the preprocessed test image and the template image based on the spatial overlapping pyramid and orthogonal locality preserving projection (OLPP) dimensionality reduction, and calculating the apparent similarity between the test image and the template image based on the features;
the singular point matching number similarity calculation sub-module is used for calculating the number of singular point matches between the preprocessed test image and the template image, and for calculating the similarity of the number of singular point matches between the test image and the template image based on that number;
the weighted matching score calculation sub-module is used for performing weighted fusion of the apparent similarity and the similarity of the number of singular point matches of the test image and the template image to obtain the weighted matching score of the test image and the template image; the weighted matching score is calculated as: score = d×λ + (1-pro)×(1-λ); wherein λ is the weighting value, d is the apparent similarity of the test image and the template image, and pro is the similarity of the number of singular point matches of the test image and the template image;
the finger vein recognition module is used for classifying the weighted matching score values of the test image and all the template images obtained by the similarity calculation module, and an image label corresponding to the classification result is a recognition result;
the method for obtaining the features of the preprocessed test image and the template image based on the spatial overlapping pyramid and orthogonal locality preserving projection dimensionality reduction, respectively, and calculating the apparent similarity between the test image and the template image based on the features comprises the following steps:
respectively carrying out feature extraction and learning on the preprocessed test image and the template image by adopting a multidirectional pixel difference vector feature extraction and learning method to obtain a mapping matrix W and a codebook D of the test image and the template image;
respectively constructing a spatial overlapping pyramid of the test image and the template image to obtain histogram feature representations of the test image and the template image;
respectively carrying out dimension reduction on the histogram features of the test image and the template image by using orthogonal locality preserving projection;
and calculating the cosine distance between the feature of the test image and the feature of the template image after dimension reduction, and taking the cosine distance as the apparent similarity of the test image and the template image.
CN202110858465.0A 2021-07-28 2021-07-28 Finger vein recognition method combining apparent similarity and singular point matching number Active CN113673363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110858465.0A CN113673363B (en) 2021-07-28 2021-07-28 Finger vein recognition method combining apparent similarity and singular point matching number

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110858465.0A CN113673363B (en) 2021-07-28 2021-07-28 Finger vein recognition method combining apparent similarity and singular point matching number

Publications (2)

Publication Number Publication Date
CN113673363A (en) 2021-11-19
CN113673363B (en) 2024-03-01

Family

ID=78540569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110858465.0A Active CN113673363B (en) 2021-07-28 2021-07-28 Finger vein recognition method combining apparent similarity and singular point matching number

Country Status (1)

Country Link
CN (1) CN113673363B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116766A (en) * 2013-03-20 2013-05-22 南京大学 Increment neural network and sub-graph code based image classification method
CN104881684A (en) * 2015-05-27 2015-09-02 天津大学 Stereo image quality objective evaluate method
WO2018032861A1 (en) * 2016-08-17 2018-02-22 广州广电运通金融电子股份有限公司 Finger vein recognition method and device
CN110472479A (en) * 2019-06-28 2019-11-19 广州中国科学院先进技术研究所 A kind of finger vein identification method based on SURF feature point extraction and part LBP coding

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106097327B (en) * 2016-06-06 2018-11-02 宁波大学 In conjunction with the objective evaluation method for quality of stereo images of manifold feature and binocular characteristic

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116766A (en) * 2013-03-20 2013-05-22 南京大学 Increment neural network and sub-graph code based image classification method
CN104881684A (en) * 2015-05-27 2015-09-02 天津大学 Stereo image quality objective evaluate method
WO2018032861A1 (en) * 2016-08-17 2018-02-22 广州广电运通金融电子股份有限公司 Finger vein recognition method and device
CN110472479A (en) * 2019-06-28 2019-11-19 广州中国科学院先进技术研究所 A kind of finger vein identification method based on SURF feature point extraction and part LBP coding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Normal maps vs. visible images: Comparing classifiers and combining modalities; Andrea Francesco Abate, Maria De Marsico; Journal of Visual Languages & Computing; 2009-01-31; full text *
Object shape classification based on a Bayesian-optimized neural network; 张善新, 范强, 周治平; 《激光与光电子学进展》 (Laser & Optoelectronics Progress); 2018-06-30; full text *
Research on rotation- and translation-insensitive finger vein recognition algorithms; 刘永莹; 《CNKI》; 2023-12-09; full text *

Also Published As

Publication number Publication date
CN113673363A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
Deitsch et al. Automatic classification of defective photovoltaic module cells in electroluminescence images
CN110060237B (en) Fault detection method, device, equipment and system
CN107316031B (en) Image feature extraction method for pedestrian re-identification
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
Alcantarilla et al. KAZE features
CN113592845A (en) Defect detection method and device for battery coating and storage medium
Pan et al. A robust system to detect and localize texts in natural scene images
CN107145829B (en) Palm vein identification method integrating textural features and scale invariant features
Zhu et al. Logo matching for document image retrieval
US20070058856A1 (en) Character recoginition in video data
CN102930300B (en) Method and system for identifying airplane target
CN102915435B (en) Multi-pose face recognition method based on face energy diagram
CN108197644A (en) A kind of image-recognizing method and device
CN109947273B (en) Point reading positioning method and device
JP2011118694A (en) Learning device and method, recognition device and method, and program
CN103136525A (en) Hetero-type expanded goal high-accuracy positioning method with generalized Hough transposition
CN104298995A (en) Three-dimensional face identification device and method based on three-dimensional point cloud
CN111274915A (en) Depth local aggregation descriptor extraction method and system for finger vein image
Ding et al. Recognition of hand-gestures using improved local binary pattern
CN111209873A (en) High-precision face key point positioning method and system based on deep learning
CN110163182A (en) A kind of hand back vein identification method based on KAZE feature
CN111597875A (en) Traffic sign identification method, device, equipment and storage medium
CN104268550A (en) Feature extraction method and device
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN103336964A (en) SIFT image matching method based on module value difference mirror image invariant property

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant