CN107145829B - Palm vein identification method integrating textural features and scale invariant features - Google Patents


Info

Publication number
CN107145829B
CN107145829B
Authority
CN
China
Prior art keywords
palm vein
sift
nbp
sift feature
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710222874.5A
Other languages
Chinese (zh)
Other versions
CN107145829A (en)
Inventor
邹见效
张钊
于力
徐红兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201710222874.5A
Publication of CN107145829A
Application granted
Publication of CN107145829B
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14 Vascular patterns

Abstract

The invention discloses a palm vein recognition method fusing textural features and scale invariant features. An NBP algorithm is used for primary screening: if the primary screening result is clearly distinguished, i.e. the minimum Hamming distance is smaller than a set threshold t1, the corresponding palm vein ROI image is selected as the identification result for the palm vein ROI image to be identified; otherwise, SIFT features are used for secondary screening, and if the secondary screening result is clearly distinguished, i.e. the number of SIFT feature point matching pairs is greater than or equal to a threshold t2, the identification result is obtained (if several candidates have the same number of matching pairs, the one with the minimum Hamming distance is selected as the matching result). When neither screening can clearly distinguish the results, the object is considered not identified. The invention not only improves the accuracy of palm vein recognition over using the NBP algorithm or the SIFT algorithm alone, i.e. reduces the false recognition rate, but is also efficient enough for real-time acquisition and recognition, i.e. maintains recognition speed, thereby overcoming the high false recognition rate of the NBP algorithm and the time consumption of the SIFT algorithm.

Description

Palm vein identification method integrating textural features and scale invariant features
Technical Field
The invention belongs to the technical field of pattern recognition, and particularly relates to a palm vein recognition method fusing textural features and scale invariant features.
Background
Palm vein recognition is a biometric technology that has emerged in recent years; its general pipeline comprises palm vein acquisition, image preprocessing, vein feature extraction, and feature matching for identification. According to the feature extraction method used, existing palm vein identification methods can be classified into four types: structural-feature-based methods, subspace-based methods, texture-feature-based methods (NBP features), and scale-invariant-feature-based methods (SIFT). (1) Structural-feature-based methods extract structural features such as line or point features of the palm veins to represent them. The principle is simple and intuitive, but the identification capability is limited because point and line features are easily lost when palm vein information is blurred. (2) Subspace-based methods treat the vein image as a high-dimensional vector or matrix, convert it into a low-dimensional one through projection or transformation, and represent and match the palm vein in the low-dimensional space.
Such methods are sensitive to noise such as illumination change, which limits their range of application. (3) Texture-feature-based methods mainly extract global or local statistical information of the image as the description; they are general-purpose and efficient and are robust to small displacements, but their robustness to larger displacements still leaves much room for improvement. (4) Scale-invariant-feature-based methods derive mainly from the invariant feature operators widely used in computer vision; they have a certain robustness but depend on preprocessing such as image enhancement, and their computational efficiency needs further improvement. Each of the four types of methods has its own advantages and disadvantages.
The texture-feature-based method, namely the NBP algorithm, is a palm vein recognition method based on texture that adopts a neighbour binary pattern. The method divides the palm vein image into several area blocks and computes the block grey-level mean to eliminate the image rotation that may occur during acquisition, then uses the NBP features for matching and identification. See document [1]: Lissen, Wuwei and Yunwei, "Palm vein biometric identification research using a texture nearest-neighbour model" [J], Instrument and Meter Report, 2015, 36(10): 2330-2338. Although this method has an advantage in recognition speed and good robustness to small displacements and image rotation, when the displacement is large the recognition effect is poor: the discrimination between different people is not obvious, misrecognition occurs easily, the false recognition rate is high, and practical application is hindered.
The scale-invariant-feature-based method, namely the SIFT algorithm, detects and describes local features in an image: it finds extreme points in scale space and extracts descriptors invariant to position, scale and rotation. It was published by David Lowe in 1999 and refined in 2004. Palm vein identification based on SIFT invariant features detects extreme points of the scale space with a difference-of-Gaussian function, selects stable feature points among them, assigns a direction to each feature point, and then matches the generated features. The method is widely used; it has no particular advantage in accuracy, but it is remarkably effective at distinguishing the palm veins of different hands and can effectively reduce the false recognition rate. The SIFT algorithm is therefore an effective complement to the NBP algorithm, and how to combine the two efficiently, balancing recognition accuracy against recognition time while further reducing the false recognition rate to meet practical requirements, has become an important topic in palm vein recognition research.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a palm vein identification method fusing textural features and scale invariant features, which improves the accuracy of palm vein identification (reduces the false identification rate) while maintaining recognition speed (time).
To achieve the above object, the palm vein recognition method fusing texture features and scale invariant features comprises the following steps:
(1) preprocessing (image enhancement) the palm vein ROI (Region Of Interest) images of the palm vein library, extracting and coding the NBP (texture) features of each image to construct an NBP feature coding library, and simultaneously extracting the SIFT (scale invariant) features of each image to construct a SIFT feature library;
(2) extracting NBP characteristics from the palm vein ROI image to be identified, and coding;
(3) comparing the NBP feature code of the palm vein ROI image to be identified with all NBP feature codes in the NBP feature coding library and computing the Hamming distances; if the minimum Hamming distance is smaller than a set threshold t1, a clear match exists: the palm vein ROI image corresponding to the minimum-Hamming-distance NBP feature code is selected as the identification result and identification ends; otherwise, no clear match exists, and the h palm vein ROI images with the smallest Hamming distances are selected from the NBP feature coding library as candidates;
(4) extracting the SIFT features of the palm vein ROI image to be identified and matching them against the SIFT features of the h candidate palm vein ROI images in the SIFT feature library to obtain, for each candidate, the matching result, i.e. the number of SIFT feature point matching pairs;
(5) selecting the matching result by combining the SIFT and NBP features:
when the maximum number of SIFT feature point matching pairs is greater than or equal to a threshold t2 and only one of the h candidates attains this maximum, that candidate is taken to match the palm vein ROI image to be identified and the palm vein ROI image with the most SIFT feature point matching pairs is returned as the identification result; when the maximum number of matching pairs is greater than or equal to t2 but attained by more than one candidate, the candidate whose NBP feature code has the minimum Hamming distance to that of the palm vein ROI image to be identified is selected as the identification result;
the threshold t2 serves as the decision criterion: if the number of SIFT feature matching pairs of every candidate is smaller than t2, the object is considered not identified.
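The two-stage decision logic of steps (3) to (5) can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the function name, data structures and the default values of t1, t2 and h are assumptions for the example.

```python
# Hypothetical sketch of the two-stage NBP + SIFT fusion decision (steps (3)-(5)).
# All names and default threshold values are illustrative assumptions.

def identify(hamming_dists, sift_match_counts, t1=0.21, t2=6, h=3):
    """hamming_dists: {id: Hamming distance of the probe's NBP code to this entry};
    sift_match_counts: {id: SIFT matching-pair count} (needed only for candidates).
    Returns the matched id, or None when neither screening is conclusive."""
    best_id = min(hamming_dists, key=hamming_dists.get)
    if hamming_dists[best_id] < t1:          # primary screening is conclusive
        return best_id
    # otherwise keep the h entries with the smallest Hamming distance as candidates
    candidates = sorted(hamming_dists, key=hamming_dists.get)[:h]
    counts = {c: sift_match_counts.get(c, 0) for c in candidates}
    best_count = max(counts.values())
    if best_count < t2:                      # secondary screening also fails
        return None
    tied = [c for c in counts if counts[c] == best_count]
    # ties on the matching-pair count are broken by the smaller Hamming distance
    return min(tied, key=hamming_dists.get)
```

The sketch mirrors the text: the SIFT stage runs only when the NBP distance is not conclusive, so the expensive matching is limited to the h candidates.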
The object of the invention is thus achieved.
The palm vein identification method fusing textural features and scale invariant features performs primary screening with the NBP algorithm: if the primary screening result is clearly distinguished, i.e. the minimum Hamming distance is smaller than the set threshold t1, the corresponding palm vein ROI image is selected as the identification result for the palm vein ROI image to be identified. If the primary screening result is not clearly distinguished, SIFT features are used for secondary screening: if the secondary screening result is clearly distinguished, i.e. the number of SIFT feature point matching pairs is greater than or equal to the threshold t2, the identification result is obtained (if several candidates have the same number of matching pairs, the one with the minimum Hamming distance is selected as the matching result). When neither screening can clearly distinguish the results, the object is considered not identified. The invention not only improves the accuracy of palm vein recognition over using the NBP algorithm or the SIFT algorithm alone, i.e. reduces the false recognition rate, but is also efficient enough for real-time acquisition and recognition, i.e. maintains recognition speed, thereby overcoming the high false recognition rate of the NBP algorithm and the time consumption of the SIFT algorithm.
Drawings
FIG. 1 is a flow chart of an embodiment of the palm vein recognition method fusing texture features and scale invariant features;
FIG. 2 compares a palm vein ROI image before and after enhancement, where (a) is the original image and (b) is the enhanced image;
FIG. 3 is a block division diagram;
FIG. 4 is a schematic diagram of an embodiment of NBP encoding;
fig. 5 is a SIFT feature matching graph;
FIG. 6 is a flowchart of the prior-art method for rejecting wrong SIFT feature point matching pairs based on the RANSAC algorithm;
FIG. 7 is a flowchart of the method for rejecting wrong SIFT feature point matching pairs in the present invention;
FIG. 8 is a comparison of the rejection results of the two methods.
Detailed Description
The following describes specific embodiments of the present invention with reference to the accompanying drawings so that those skilled in the art can better understand the present invention. It should be noted that in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the present invention.
Fig. 1 is a flowchart of a specific embodiment of a palm vein identification method fusing texture features and scale invariant features according to the present invention.
In this embodiment, as shown in fig. 1, the method for identifying a palm vein by fusing a texture feature and a scale invariant feature of the present invention includes the following steps:
step S1: building NBP feature coding library and SIFT feature library
An NBP (texture feature) feature coding library and a SIFT (scale invariant feature) feature library are built from the preprocessed (image-enhanced) palm vein ROI (Region Of Interest) images.
To improve recognition speed and efficiency in the matching process, the ROI images of the palm vein library are first preprocessed, i.e. enhanced, as shown in fig. 2, where fig. 2(a) is the original palm vein ROI image and fig. 2(b) is the same image after enhancement. Comparing the two, the palm veins are clearly more distinct after enhancement.
And after preprocessing, extracting NBP (texture feature) and SIFT (scale invariant feature) features of the image, and constructing an NBP feature coding library and an SIFT feature library.
Step S2: and extracting NBP characteristics from the palm vein ROI image to be identified, and encoding.
In steps S1 and S2, the NBP features are extracted and encoded as follows:
1.1) performing block division on palm vein ROI image
First, the palm vein ROI image of size M × M is treated as a matrix and partitioned into a k × k square matrix V of image blocks:
$$V = \begin{pmatrix} V_{11} & V_{12} & \cdots & V_{1k} \\ V_{21} & V_{22} & \cdots & V_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ V_{k1} & V_{k2} & \cdots & V_{kk} \end{pmatrix} \qquad (1)$$
where each block V_ij (i, j = 1, 2, …, k) is a square matrix of size m × m, with M = k × m;
then the mean grey value of each block is computed with formula (2):
$$I_{ij} = \frac{1}{m^2}\sum_{x=1}^{m}\sum_{y=1}^{m} f_{ij}(x, y) \qquad (2)$$
where f_ij(x, y) is the grey value at position (x, y) in block V_ij; the mean grey values of all blocks form the k × k multi-block mean matrix I of the palm vein image:
$$I = \begin{pmatrix} I_{11} & I_{12} & \cdots & I_{1k} \\ I_{21} & I_{22} & \cdots & I_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ I_{k1} & I_{k2} & \cdots & I_{kk} \end{pmatrix} \qquad (3)$$
In this embodiment, as shown in fig. 3, a palm vein ROI image of size 128 × 128 is partitioned into a 16 × 16 square matrix V of image blocks (so k = 16 and m = 8), the mean pixel value within each block is computed, and a new 16 × 16 matrix, the palm vein image multi-block mean matrix I, is constructed with the means as pixel values. As fig. 3 shows, taking block means lets each small image capture both the global and the local information of the image, which reduces the influence of small rotations during acquisition, clearly reduces the amount of data, improves computational efficiency, and does not impair the discriminative power of the features.
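The block partitioning and block-mean computation described above can be sketched in pure Python as follows; this is an illustrative sketch, and the function name and the list-of-lists image representation are assumptions.

```python
# Illustrative computation of the k x k multi-block mean matrix I from an
# M x M palm-vein ROI image (pure-Python sketch of the block-mean step).

def block_mean_matrix(img, m):
    """img: M x M list of lists of grey values; m: block side length, M = k*m."""
    M = len(img)
    assert M % m == 0, "M must be a multiple of the block size m"
    k = M // m
    I = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            # gather the m*m grey values of block V_ij and average them
            block = [img[i * m + x][j * m + y] for x in range(m) for y in range(m)]
            I[i][j] = sum(block) / (m * m)
    return I
```

With a 128 × 128 image and m = 8 this yields the 16 × 16 mean matrix of the embodiment.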
1.2) Apply the NBP coding operation to the palm vein image multi-block mean matrix I to form an NBP binary code bit string.
the NBP feature addresses the gray scale relationship between neighboring pixels. The NBP coding operation is that for a palm vein image multi-block mean matrix I, each element is taken as a central element, a 3 multiplied by 3 window is taken to surround the central element, the upper left corner element is taken as a starting point, a clockwise traversal method is adopted, 8 points around the central element are extracted (if the window is not on the palm vein image multi-block mean matrix I, the value is 0 if the window is a first row, a first column, a last row and a last column), the 8 points are arranged in parallel into a row, and the element values are p in sequence7,p6,…,p0(ii) a Then, starting from the leftmost element value, the current element value is compared with the next neighbor element value to its right, by the formula:
$$b_i = \begin{cases} 1, & p_i > p_{i-1} \\ 0, & p_i \le p_{i-1} \end{cases}, \quad i = 7, 6, \ldots, 1 \qquad (4)$$

and in particular, for the last element the comparison wraps around to the first:

$$b_0 = \begin{cases} 1, & p_0 > p_7 \\ 0, & p_0 \le p_7 \end{cases} \qquad (5)$$
Concatenating the NBP codes of all centre elements yields the NBP feature code of the palm vein ROI image.
In this embodiment, as shown in fig. 4, NBP coding of the centre element with equations (4) and (5) yields the code 00011010. The NBP codes of the 256 elements of the 16 × 16 matrix are computed in turn and concatenated, giving a 2048-bit NBP code.
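A minimal sketch of the NBP coding rule for one window follows, assuming the bit convention of equations (4) and (5): a bit is 1 when an element exceeds the neighbour to its right, with the last comparison wrapping around to the first element. The function name and window representation are illustrative.

```python
# Hedged sketch of NBP coding for a single 3x3 window: the eight neighbours of
# the centre are laid out clockwise from the top-left corner as p7, p6, ..., p0,
# and each is compared with the next element in the row (wrapping at the end).

def nbp_code(window):
    """window: 3x3 list of lists; returns the 8-bit NBP string of its centre."""
    # clockwise traversal starting at the top-left corner: p7, p6, ..., p0
    ring = [window[0][0], window[0][1], window[0][2],
            window[1][2], window[2][2], window[2][1],
            window[2][0], window[1][0]]
    bits = []
    for i in range(8):
        nxt = ring[(i + 1) % 8]              # right neighbour, wrapping around
        bits.append('1' if ring[i] > nxt else '0')
    return ''.join(bits)
```

Running this over all 256 centre elements of the 16 × 16 mean matrix and concatenating the results would give the 2048-bit code of the embodiment.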
The NBP feature codes of all the palm vein ROI images in the palm vein library are collected to form the NBP feature coding library.
Step S3: and comparing the NBP feature codes of the palm vein ROI image to be identified with all NBP feature codes of the NBP feature code library to obtain the Hamming distance.
Denote the two NBP feature codes to be compared by S_NBP1 and S_NBP2; in bit-string form:

S_NBP1 = a_1 a_2 … a_N    (6)
S_NBP2 = b_1 b_2 … b_N    (7)

where a_1, …, a_N and b_1, …, b_N each take the value 0 or 1.
The Hamming distance between them is defined as:

$$D(S_{NBP1}, S_{NBP2}) = \frac{1}{N}\sum_{i=1}^{N} a_i \oplus b_i \qquad (8)$$

where $\oplus$ denotes the exclusive-or operation and N is the length of the NBP feature code.
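The normalised Hamming distance defined above reduces to a one-line comparison; an illustrative sketch, assuming the codes are Python strings of '0'/'1' characters:

```python
# Normalised Hamming distance between two equal-length NBP bit strings:
# the fraction of positions at which the two codes differ.

def hamming_distance(s1, s2):
    assert len(s1) == len(s2), "codes must have equal length N"
    return sum(a != b for a, b in zip(s1, s2)) / len(s1)
```

Because the distance is normalised by N, it lies in [0, 1] and can be compared directly against the threshold t1 of the embodiment.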
If the minimum Hamming distance is smaller than the set threshold t1, a clear identification match exists: the palm vein ROI image corresponding to the minimum-Hamming-distance NBP feature code is selected as the matching result and identification ends. Otherwise no clear match exists, and the h palm vein ROI images with the smallest Hamming distances are selected from the NBP feature coding library as candidates.
In this embodiment the Hamming distances are computed; if the minimum Hamming distance is smaller than or equal to the threshold t1 = 0.21, the corresponding image in the palm vein library is selected as the recognition result and recognition ends; if the minimum Hamming distance is larger than t1, the h = 20 palm vein ROI images of the palm vein library closest in Hamming distance to the image to be identified enter the candidate set. In this embodiment, as shown in fig. 5, (a) is the palm vein ROI image to be recognised, and (b) and (c) are two representative images of the 20 palm vein ROI images selected in the primary screening.
Step S4: extracting SIFT characteristics of the palm vein ROI images to be identified, and respectively matching the SIFT characteristics of the h palm vein ROI images in the candidate regions in an SIFT characteristic library to obtain matching results, namely the matching pair number of SIFT characteristic points.
In this embodiment, in steps S1 and S4, the SIFT features of a palm vein ROI image are extracted as follows:
2.1) Constructing the difference-of-Gaussian pyramid and searching for extreme points
To find feature points independent of scale, an image pyramid is first constructed by repeatedly downsampling the original image by one half. The difference of Gaussians of pyramid images at adjacent scales gives a difference-of-Gaussian (DoG) image; after a series of DoG images is obtained, extreme points are searched in this image space. The difference-of-Gaussian pyramid is constructed as:
$$D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma) \qquad (9)$$
where D(x, y, σ) is the scale-variable difference-of-Gaussian function, G(x, y, σ) is the Gaussian convolution kernel, L(x, y, σ) is the scale space of the image, and (x, y) are the spatial coordinates. The size of σ determines the smoothness of the image: large scales correspond to the coarse appearance of the image and small scales to its details.
Extreme points are found as follows: each detection point is compared with its 26 neighbours, namely the 8 adjacent points at the same scale and the 18 corresponding points at the adjacent scales above and below; when the detection point is larger or smaller than all of these neighbouring points, it is an extreme point.
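The 26-neighbour test can be sketched as follows; this is illustrative only, the DoG stack is assumed to be a list of 2-D lists, and border handling is omitted for brevity.

```python
# Sketch of the 26-neighbour extremum test: a sample is kept only if it is
# strictly greater (or strictly smaller) than its 8 neighbours at the same
# scale plus the 9 + 9 neighbours in the two adjacent DoG layers.

def is_extremum(dog, s, x, y):
    """dog: list of 2-D layers (lists of lists); (s, x, y) the candidate sample."""
    v = dog[s][x][y]
    neighbours = [dog[s + ds][x + dx][y + dy]
                  for ds in (-1, 0, 1) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if not (ds == 0 and dx == 0 and dy == 0)]
    return all(v > n for n in neighbours) or all(v < n for n in neighbours)
```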
2.2) Feature point localisation and edge response removal
2.2.1) Precise localisation of feature points
Since the DoG values are sensitive to noise, a local extreme point detected in DoG scale space must be further checked before it is accepted as a feature point. The Taylor expansion of the DoG function in scale space is:
$$D(\mathbf{x}) = D + \frac{\partial D^{T}}{\partial \mathbf{x}}\mathbf{x} + \frac{1}{2}\mathbf{x}^{T}\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\mathbf{x} \qquad (10)$$
Setting the derivative of the above expression to 0 gives the exact position:
$$\hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{-1}\frac{\partial D}{\partial \mathbf{x}} \qquad (11)$$
Substituting equation (11) into equation (10) yields:
$$D(\hat{\mathbf{x}}) = D + \frac{1}{2}\frac{\partial D^{T}}{\partial \mathbf{x}}\hat{\mathbf{x}} \qquad (12)$$
When $|D(\hat{\mathbf{x}})|$ is less than q1, the feature point is discarded; a value of q1 = 0.03 is suggested here.
2.2.2) Remove edge response
A Hessian matrix is constructed:

$$H = \begin{pmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{pmatrix}$$
When

$$\frac{\mathrm{Tr}(H)^{2}}{\mathrm{Det}(H)} < \frac{(r+1)^{2}}{r}$$

the feature point is retained; otherwise it is removed. A value of r = 10 is suggested.
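The edge-response test just described can be sketched as follows; an illustrative sketch that assumes the second derivatives Dxx, Dyy and Dxy have already been estimated by finite differences.

```python
# Edge-response check: keep a feature point only when the ratio of principal
# curvatures, measured through the trace and determinant of the 2x2 Hessian,
# stays below the bound (r+1)^2 / r with r = 10.

def passes_edge_test(dxx, dyy, dxy, r=10):
    tr = dxx + dyy                 # trace of the Hessian
    det = dxx * dyy - dxy * dxy    # determinant of the Hessian
    if det <= 0:                   # curvatures of opposite sign: reject outright
        return False
    return tr * tr / det < (r + 1) ** 2 / r
```

Edge-like points have one large and one small curvature, which inflates the trace-squared-over-determinant ratio and fails the test.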
2.3) Direction assignment
SIFT assigns a direction to each feature point from the gradients around it. The gradient magnitude is defined as:
$$m(x, y) = \sqrt{\big(L(x+1, y) - L(x-1, y)\big)^{2} + \big(L(x, y+1) - L(x, y-1)\big)^{2}}$$
the gradient direction is defined as:
$$\theta(x, y) = \arctan\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$$
the direction of the characteristic point is determined by adopting a gradient histogram statistical method, namely, the contribution of image pixel points in a certain area with the characteristic point as an original point to the direction generation of the characteristic point is counted. The gradient histogram takes every s-degree direction as a column, 360/s columns are total, the direction represented by the column is the gradient direction of the pixel point, and the length of the column represents the gradient amplitude. The direction in which the gradient magnitude is the largest is the direction of the feature point.
2.4) generating feature point descriptors
When the feature point descriptor is generated, the coordinate axes are first rotated to the direction of the feature point to ensure rotation invariance. Then, for each feature point, a 16 × 16 region centred on it is taken in scale space, gradient histograms over 8 directions are computed on each 4 × 4 patch, and the accumulated value in each gradient direction forms a seed point. A feature point is thus described by 4 × 4 = 16 seed points, forming a 128-dimensional feature vector. The feature vector is normalised to unit length to further remove the influence of illumination change, producing the final SIFT feature descriptor.
Fig. 5(d), (e) and (f) show the SIFT feature points of the images in (a), (b) and (c), i.e. the palm vein ROI image to be recognised and the two representative images of the 20 selected palm vein ROI images; their numbers of feature points are 520, 491 and 576 respectively. For each image the SIFT feature points are obtained by first constructing the difference-of-Gaussian pyramid and finding the extreme points, then accurately localising the feature points and removing edge responses, and finally assigning directions to the feature points and generating the feature descriptors. As fig. 5(d), (e) and (f) show, each image has about 500 feature points, a large amount of data, and every SIFT matching requires pairwise comparison between feature points, so the primary screening greatly reduces the workload.
Feature matching uses nearest-neighbour distance search: for each feature point the 2 nearest neighbour feature points are found, and the matching pair is accepted if the ratio of the nearest to the second-nearest distance is below a proportional threshold t3. This yields the matching result, i.e. the number of SIFT feature point matching pairs.
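The nearest-neighbour ratio test can be sketched as follows; an illustrative brute-force version, and the default threshold t3 = 0.8 is an assumption, since the text does not fix its value.

```python
# Sketch of nearest-neighbour ratio matching: for each probe descriptor, accept
# the closest gallery descriptor only when the nearest distance is clearly
# smaller than the second-nearest one (ratio below t3).

import math

def ratio_match(desc1, desc2, t3=0.8):
    """desc1, desc2: lists of equal-length feature vectors; returns index pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    matches = []
    for i, d in enumerate(desc1):
        ranked = sorted(range(len(desc2)), key=lambda j: dist(d, desc2[j]))
        if len(ranked) >= 2:
            d1, d2 = dist(d, desc2[ranked[0]]), dist(d, desc2[ranked[1]])
            if d2 > 0 and d1 / d2 < t3:      # accept only unambiguous matches
                matches.append((i, ranked[0]))
    return matches
```

The number of accepted pairs is exactly the "matching result" used against t2 in the secondary screening.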
After SIFT matching, some unreliable SIFT feature point matching pairs are inevitable. As the number of unreliable matching pairs grows, image matching accuracy degrades or matching fails. Therefore, after SIFT feature matching, the matching pairs must be screened and the wrong ones removed to improve image matching accuracy.
The conventional method for eliminating wrong SIFT feature point matching pairs is the RANSAC algorithm. RANSAC rests on two assumptions: first, the data set contains inliers and outliers; second, the outliers do not fit the model estimated from the inliers. It randomly selects points to compute a mathematical model, finds the inliers of that model, recomputes the model, and by iterating finds the model consistent with the most points.
Let the set of SIFT feature point matching pairs (matching pair set for short) between images I1 and I2 be Φ. The minimum sample set S is the smallest matching pair set needed to compute the model parameters; since the transformation model of the image has 8 degrees of freedom, at least 4 feature point matching pairs are required to compute the transformation matrix, so S contains 4 matching pairs. The set of points whose residual with respect to the matrix H is smaller than a predetermined threshold is called the Consensus Set of H, written CS.
Fig. 6 shows the flowchart of the RANSAC algorithm, whose steps are as follows:
(1) randomly selecting a minimum sample set S from the initial matching pair set phi;
(2) calculating a homography matrix H according to S;
(3) test each point in Φ against the model under the error metric function; the consistent points form the consensus set CS of H, and the number of matching pairs n in CS is recorded;
(4) re-estimate the homography matrix H from CS and return to step (3), until the given number of repetitions k is reached;
(5) take the consensus set with the largest n over the k iterations as the final inlier data set CSm;
(6) compute the homography matrix H from CSm as the final data model of the set Φ.
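The sample-consensus loop of steps (1) to (6) can be sketched in miniature. For brevity, this toy version replaces the 8-degree-of-freedom homography of the text with a pure-translation model, so a single matching pair determines the candidate model; the iterative structure is the same, and all names and thresholds are illustrative.

```python
# Toy RANSAC: repeatedly pick a minimal sample, fit a candidate model (here a
# pure translation), collect its consensus set, and keep the largest one.

import random

def ransac_translation(pairs, thresh=1.0, k=100, seed=0):
    """pairs: list of ((x1, y1), (x2, y2)) matches; returns the largest consensus set."""
    rng = random.Random(seed)
    best_cs = []
    for _ in range(k):
        (x1, y1), (x2, y2) = rng.choice(pairs)      # minimal sample set S
        dx, dy = x2 - x1, y2 - y1                   # candidate translation model
        cs = [p for p in pairs
              if abs(p[1][0] - p[0][0] - dx) <= thresh
              and abs(p[1][1] - p[0][1] - dy) <= thresh]
        if len(cs) > len(best_cs):
            best_cs = cs                            # keep the largest consensus set
    return best_cs
```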
This method can effectively eliminate mismatched pairs, but it discards some correct matching pairs, and the final matching result suffers when there are many wrong matching points or few matching pairs.
In this embodiment, a screening method for SIFT feature point matching pairs of palm vein images is proposed, drawing on the RANSAC algorithm. Before palm vein recognition the ROI image is normalised, so large displacement and rotation cannot occur, and wrong SIFT feature point matching pairs are screened out using the angle between the line joining the two feature points of a matching pair and the horizontal direction, together with the distance between the two feature points. The similarity distance between two straight lines L1 and L2 is defined as:
g(L1, L2) = γ1 l(L1, L2) + γ2 θ(L1, L2)
where l(L1, L2) is the difference in length between L1 and L2, θ(L1, L2) is the included angle between them, γ1 is the distance weight and γ2 the angle weight; the product of γ1 with the length difference and the product of γ2 with the included angle are both dimensionless. That is, the similarity distance between two straight lines L1 and L2 is the weighted sum of the two terms.
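The similarity distance can be sketched directly; an illustrative sketch in which each line is represented by an assumed (length, angle) pair and the weights γ1 = γ2 = 1 are placeholder values.

```python
# Line-similarity distance g(L1, L2): a weighted sum of the length difference
# and the included angle of two lines joining matched feature points.

def similarity_distance(line1, line2, gamma1=1.0, gamma2=1.0):
    """line1, line2: (length, angle_deg) tuples describing each line."""
    l = abs(line1[0] - line2[0])        # length difference l(L1, L2)
    theta = abs(line1[1] - line2[1])    # included angle theta(L1, L2)
    return gamma1 * l + gamma2 * theta
```

In practice the weights would be chosen so both terms contribute on a comparable scale.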
In this embodiment, the flowchart of the method for removing wrong SIFT feature point matching pairs is shown in fig. 7; the specific steps are as follows:
3.1) sort the matching pair set Φ formed by all SIFT feature point matching pairs, using the angle between the line joining the two points of a matching pair and the horizontal direction as the first key and the distance between the two points as the second key, and select the first n groups of SIFT feature point matching pairs as the sample set S;
3.2) compute the average distance and average angle over the two-point lines of the SIFT feature point matching pairs in the sample set S, and take this average as the reference line L1;
3.3) for each SIFT feature point matching pair in the matching pair set Φ, compute the similarity distance between its two-point connecting line L2 and the reference line L1:
g(L1,L2)=γ1l(L1,L2)+γ2θ(L1,L2)
wherein L (L)1,L2) Is two straight lines L1、L2Difference in length, θ (L)1,L2) Is two straight lines L1、L2Angle between, gamma1Is a distance weight, gamma2Is an angle weight, a distance weight gamma1Product of the difference and the angle weight gamma2The product of the angle and the included angle is dimensionless;
setting a threshold value z, and calculating an SIFT feature point matching proportion k with the similarity distance smaller than the threshold value z;
3.4) if k is greater than or equal to a set threshold value k1Taking all SIFT feature point matching pairs meeting the condition that the similarity distance is smaller than the threshold value z as the eliminated result, and finishing the elimination; otherwise, entering step 3.5);
3.5) if k is greater than or equal to a set threshold value k2Selecting all SIFT feature point matching pair sample sets S meeting the condition that the similarity distance is smaller than the threshold value z as a reference, returning to the step 3.2), and otherwise, entering the step 3.6); wherein the threshold value k1Greater than a threshold value k2
3.6) removing the SIFT feature point matching pair with the maximum similarity distance with other SIFT feature point matching pairs in the sample set S, adding the next SIFT feature point matching pair after sequencing in the matching pair set phi, returning to the step 3.2), if the last SIFT feature point matching pair after sequencing in the matching pair set phi is executed and no next SIFT feature point matching pair exists, selecting the sample set S with the highest proportion k in the iteration process as a result after the elimination, and finishing the elimination.
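The iterative rejection of steps 3.1)–3.6) can be sketched as follows. Matching pairs are given as point pairs; the thresholds z, k1, k2 and the weights are illustrative defaults, not the patent's tuned values, and step 3.6)'s "largest similarity distance to the other pairs" criterion is approximated here by the distance to the current reference line.

```python
import math

def line_params(match):
    """Length and angle (vs. horizontal) of the segment joining a match pair."""
    (x1, y1), (x2, y2) = match
    return math.hypot(x2 - x1, y2 - y1), math.atan2(y2 - y1, x2 - x1)

def reject_mismatches(matches, n=4, z=5.0, k1=0.9, k2=0.3,
                      gamma1=1.0, gamma2=50.0, max_iter=100):
    """Sketch of steps 3.1)-3.6); all numeric defaults are illustrative."""
    if not matches:
        return []
    # 3.1) sort by angle first, then length; first n pairs form the sample set S
    phi = sorted(matches, key=lambda m: (line_params(m)[1], line_params(m)[0]))
    sample = list(phi[:n])
    next_idx = len(sample)
    best = []
    for _ in range(max_iter):
        # 3.2) reference line L1 = average length and angle of the sample set
        params = [line_params(m) for m in sample]
        ref_len = sum(p[0] for p in params) / len(params)
        ref_ang = sum(p[1] for p in params) / len(params)

        def g(m):  # 3.3) similarity distance of a pair's line to L1
            length, ang = line_params(m)
            return gamma1 * abs(length - ref_len) + gamma2 * abs(ang - ref_ang)

        inliers = [m for m in phi if g(m) < z]
        k = len(inliers) / len(phi)
        if len(inliers) > len(best):
            best = inliers
        if k >= k1:
            return inliers          # 3.4) accept the inlier set
        if k >= k2:
            sample = inliers        # 3.5) refine the reference from the inliers
        else:                       # 3.6) swap the worst sample member
            if next_idx >= len(phi):
                return best         # ran out of pairs: keep the best iteration
            sample.remove(max(sample, key=g))
            sample.append(phi[next_idx])
            next_idx += 1
    return best
```

With mostly-consistent matches and one outlier whose connecting line differs sharply in length and angle, the outlier is rejected on the first pass.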
In the present embodiment, fig. 8 compares the rejection results of the two methods, with the number of selected groups n set to 4, the threshold k1 to 90%, and the threshold k2 to 30%. Fig. 8(a) shows the original matching result without rejection, fig. 8(b) the result after rejection by the RANSAC method, and fig. 8(c) the result after rejection by the method of this embodiment. In the first row, where there are many SIFT feature point matching pairs, RANSAC discards some correct matching pairs, whereas the rejection method of this embodiment keeps more correct matching pairs while removing the wrong ones. In the second row, where mismatches occur, RANSAC finds partially wrong H matrices and retains a small number of mismatches, while the method of this embodiment removes all mismatches. In the third row, where the matching pairs are relatively few, RANSAC retains too few pairs to meet the recognition criterion when the threshold is high.
(5) Select the matching result by combining the SIFT features and the NBP features:
When the number of SIFT feature point matching pairs is greater than or equal to a threshold t2 and the maximum number of matching pairs is attained by only one image, it is determined that among the h palm vein ROI images there is one matching the palm vein ROI image to be identified, and the image with the largest number of matching pairs is taken as the matching result; when the number of matching pairs is greater than or equal to t2 but the maximum is attained by more than one image, the image among the h candidates whose NBP feature code has the minimum Hamming distance to that of the image to be identified is taken as the matching result.
A threshold t2 is set; if the number of SIFT feature matching pairs is smaller than t2, it is determined that there is no matching object.
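The selection rule of step (5) reduces to a small decision function. The candidate tuple layout used here — (image id, SIFT matching pair count, NBP Hamming distance) — is an assumption made for illustration, not a structure defined by the patent.

```python
def select_match(candidates, t2):
    """Step (5) decision sketch. `candidates` holds one
    (image_id, sift_pair_count, hamming_distance) tuple per candidate
    image; returns the matched image id, or None when no match."""
    best_count = max(c[1] for c in candidates)
    if best_count < t2:
        return None                          # too few SIFT pairs: rejected
    top = [c for c in candidates if c[1] == best_count]
    if len(top) == 1:
        return top[0][0]                     # unique SIFT maximum wins
    # tie on SIFT count: fall back to the smallest NBP Hamming distance
    return min(top, key=lambda c: c[2])[0]
```

The tie-break by Hamming distance is what lets the NBP primary-screening information resolve cases the SIFT count alone cannot.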
The palm vein recognition process is illustrated in fig. 5. Figs. 5(a) and (b) are palm vein ROI images of the same person with a large relative displacement. During the NBP primary screening, the Hamming distance between figs. 5(a) and 5(b) is 0.2505 while that between figs. 5(a) and 5(c) is 0.2485, so using the NBP algorithm alone would wrongly match fig. 5(a) to fig. 5(c). With the identification method of the invention, although the Hamming distance of the former pair is larger than that of the latter, both (b) and (c) still enter the candidate region in the primary screening; when SIFT feature matching is then performed, as shown in figs. 5(g) and 5(h), the former pair yields 83 SIFT matching feature point pairs while the latter yields only 3, so figs. 5(a) and 5(b) are selected as the correctly matched images, i.e., fig. 5(b) is returned as the recognition result.
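The NBP primary screening above compares binary feature codes by normalized Hamming distance; a minimal sketch over plain 0/1 lists (the NBP encoding itself is outside this fragment):

```python
def hamming_distance(code1, code2):
    """Normalized Hamming distance between two equal-length binary NBP
    feature codes, here represented as plain 0/1 lists."""
    assert len(code1) == len(code2), "codes must have equal length"
    differing = sum(b1 != b2 for b1, b2 in zip(code1, code2))
    return differing / len(code1)
```

Values such as the 0.2505 and 0.2485 quoted above are fractions of differing bits, so the measure is independent of the code length.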
In palm vein recognition, the method has a clear advantage over the original SIFT algorithm in recognition time: images that the NBP algorithm can clearly distinguish are removed in the primary screening, and for images that cannot be clearly distinguished the number of SIFT feature matchings is reduced from the original 500 to 20, greatly shortening the matching time. Relative to the NBP algorithm, the method performs a secondary screening on images whose regional distributions are not clearly separated, reducing the NBP error rate, with the improvement in false matching being particularly pronounced. The method therefore has clear advantages in practical applications.
Method | Mean time | False rate (1:N) | Rejection rate (1:1)
SIFT algorithm | 31.7 s | 0.02% | 5.0%
NBP algorithm | 6.6 ms | 1.8% | 1.6%
The invention | 0.1 s | 0% | 0.6%
TABLE 1
Table 1 compares the data of the present invention with the pre-fusion methods on the palm vein database of The Chinese University of Hong Kong. As can be seen from table 1, compared with the NBP algorithm the invention effectively improves recognition performance; in particular, for false recognition, the SIFT algorithm's strength at eliminating wrong matches effectively reduces false matching. Compared with the SIFT algorithm, the invention has a great advantage in matching time, which is short enough to meet the timing requirements of practical applications. The method thus effectively fuses the advantages of the SIFT and NBP algorithms, improving both the matching speed and the overall efficiency.
Method | Mean time | False rate (1:N) | Rejection rate (1:1)
SIFT algorithm | 29.2 s | 0.05% | 4.5%
NBP algorithm | 6.6 ms | 2.25% | 3.0%
The invention | 0.09 s | 0% | 1.5%
TABLE 2: self-collected palm vein database
Table 2 compares the data of the present invention with the pre-fusion methods on a self-collected palm vein database; as table 2 shows, the invention achieves a comparable improvement on the self-collected database as well.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited in scope to these embodiments. To those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the invention as defined by the appended claims, and everything that makes use of the inventive concept falls under protection.

Claims (3)

1. A palm vein identification method fusing texture features and scale invariant features, characterized by comprising the following steps:
(1) performing image enhancement on the palm vein ROI images of a palm vein database, then extracting and coding their NBP features to construct an NBP feature code library, and simultaneously extracting their SIFT (scale invariant feature transform) features to construct a SIFT feature library;
(2) extracting and coding the NBP features of the palm vein ROI image to be identified;
(3) comparing the NBP feature code of the palm vein ROI image to be identified with all NBP feature codes in the NBP feature code library and computing the Hamming distances; if the minimum Hamming distance is smaller than a set threshold t1, an unambiguous match is obtained: the palm vein ROI image corresponding to the NBP feature code with the minimum Hamming distance is selected as the identification result and identification finishes; otherwise, when no unambiguous match can be made, the h palm vein ROI images with the smallest Hamming distances are selected from the NBP feature code library to enter a candidate region;
(4) extracting the SIFT features of the palm vein ROI image to be identified and matching them in the SIFT feature library against the SIFT features of the h palm vein ROI images in the candidate region, obtaining as matching results the numbers of SIFT feature point matching pairs;
(5) selecting the matching result by combining the SIFT features and the NBP features:
when the number of SIFT feature point matching pairs is greater than or equal to a threshold t2 and the maximum number of matching pairs is attained by only one image, it is determined that among the h palm vein ROI images there is one matching the image to be identified, and the image with the largest number of matching pairs is taken as the identification result; when the number of matching pairs is greater than or equal to t2 but the maximum is attained by more than one image, the image among the h candidates whose NBP feature code has the minimum Hamming distance to that of the image to be identified is taken as the identification result;
if the number of SIFT feature matching pairs is smaller than the threshold t2, it is determined that no object is identified.
2. The palm vein identification method according to claim 1, wherein in step (4) the wrong SIFT feature point matching pairs need to be removed, with the following specific steps:
3.1) Sort the matching pair set Φ formed by all SIFT feature point matching pairs, taking the angle between the line connecting the two points of each matching pair and the horizontal direction as the first sort key and the distance between the two points as the second, and select the first n groups of matching pairs as the sample set S;
3.2) Compute the average distance and the average angle over the two-point connecting lines of the matching pairs in the sample set S, and take these averages as the length and angle of the reference line L1;
3.3) For each matching pair in the set Φ, compute the similarity distance between its two-point connecting line L2 and the reference line L1:
g(L1, L2) = γ1·l(L1, L2) + γ2·θ(L1, L2)
where l(L1, L2) is the length difference between L1 and L2, θ(L1, L2) is the angle between them, γ1 is the distance weight and γ2 is the angle weight; the product of γ1 with the length difference and the product of γ2 with the angle are both dimensionless;
Set a threshold z and compute the proportion k of SIFT feature point matching pairs whose similarity distance is smaller than z;
3.4) If k is greater than or equal to a set threshold k1, take all matching pairs whose similarity distance is smaller than z as the result after rejection, and finish; otherwise go to step 3.5);
3.5) If k is greater than or equal to a set threshold k2, take all matching pairs whose similarity distance is smaller than z as the new sample set S and return to step 3.2); otherwise go to step 3.6); here the threshold k1 is greater than the threshold k2;
3.6) Remove from the sample set S the matching pair whose similarity distance to the other matching pairs in S is largest, add the next matching pair of the sorted set Φ, and return to step 3.2); if the last matching pair of the sorted set Φ has already been used and no next pair exists, take the sample set S with the highest proportion k over all iterations as the result after rejection, and finish.
3. The palm vein identification method according to claim 2, wherein the number of groups n is 4, the threshold k1 is 90%, and the threshold k2 is 30%.
CN201710222874.5A 2017-04-07 2017-04-07 Palm vein identification method integrating textural features and scale invariant features Expired - Fee Related CN107145829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710222874.5A CN107145829B (en) 2017-04-07 2017-04-07 Palm vein identification method integrating textural features and scale invariant features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710222874.5A CN107145829B (en) 2017-04-07 2017-04-07 Palm vein identification method integrating textural features and scale invariant features

Publications (2)

Publication Number Publication Date
CN107145829A CN107145829A (en) 2017-09-08
CN107145829B true CN107145829B (en) 2020-05-22

Family

ID=59773689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710222874.5A Expired - Fee Related CN107145829B (en) 2017-04-07 2017-04-07 Palm vein identification method integrating textural features and scale invariant features

Country Status (1)

Country Link
CN (1) CN107145829B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107980140B (en) * 2017-10-16 2021-09-14 厦门熵基科技有限公司 Palm vein identification method and device
CN110490271B (en) * 2017-12-22 2021-09-21 展讯通信(上海)有限公司 Image matching and splicing method, device, system and readable medium
CN108710690A (en) * 2018-05-22 2018-10-26 长春师范大学 Medical image search method based on geometric verification
CN110049097A (en) * 2019-03-04 2019-07-23 平安科技(深圳)有限公司 Information-pushing method, device, server and computer storage medium
CN110348289B (en) * 2019-05-27 2023-04-07 广州中国科学院先进技术研究所 Finger vein identification method based on binary image
CN110705341A (en) * 2019-08-13 2020-01-17 平安科技(深圳)有限公司 Verification method, device and storage medium based on finger vein image
CN110717469B (en) * 2019-10-16 2022-04-12 山东浪潮科学研究院有限公司 Finger vein identification method and system based on correlation semantic feature learning
CN112200159B (en) * 2020-12-01 2021-02-19 四川圣点世纪科技有限公司 Non-contact palm vein identification method based on improved residual error network
CN114998950B (en) * 2022-08-01 2022-11-22 北京圣点云信息技术有限公司 Vein encryption and identification method based on deep learning
TWI794132B (en) * 2022-09-19 2023-02-21 威盛電子股份有限公司 System for detecting misidentified objects
CN115311691B (en) * 2022-10-12 2023-02-28 山东圣点世纪科技有限公司 Joint identification method based on wrist vein and wrist texture
CN115631514B (en) * 2022-10-12 2023-09-12 中海银河科技(北京)有限公司 User identification method, device, equipment and medium based on palm vein fingerprint

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101609499A (en) * 2008-06-20 2009-12-23 南京理工大学 Quick fingerprint identification method
CN102622587A (en) * 2012-03-08 2012-08-01 哈尔滨工程大学 Hand back vein recognition method based on multi-scale second-order differential structure model and improved watershed algorithm
CN103136751A (en) * 2013-02-05 2013-06-05 电子科技大学 Improved scale invariant feature transform (SIFT) image feature matching algorithm
CN103729654A (en) * 2014-01-22 2014-04-16 青岛新比特电子科技有限公司 Image matching retrieval system on account of improving Scale Invariant Feature Transform (SIFT) algorithm
CN104616035A (en) * 2015-03-12 2015-05-13 哈尔滨工业大学 Visual Map rapid matching method based on global image feature and SURF algorithm
CN105117712A (en) * 2015-09-15 2015-12-02 北京天创征腾信息科技有限公司 Single-sample human face recognition method compatible for human face aging recognition
CN105760815A (en) * 2016-01-26 2016-07-13 南京大学 Heterogeneous human face verification method based on portrait on second-generation identity card and video portrait

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9251402B2 (en) * 2011-05-13 2016-02-02 Microsoft Technology Licensing, Llc Association and prediction in facial recognition

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
CN101609499A (en) * 2008-06-20 2009-12-23 南京理工大学 Quick fingerprint identification method
CN102622587A (en) * 2012-03-08 2012-08-01 哈尔滨工程大学 Hand back vein recognition method based on multi-scale second-order differential structure model and improved watershed algorithm
CN103136751A (en) * 2013-02-05 2013-06-05 电子科技大学 Improved scale invariant feature transform (SIFT) image feature matching algorithm
CN103729654A (en) * 2014-01-22 2014-04-16 青岛新比特电子科技有限公司 Image matching retrieval system on account of improving Scale Invariant Feature Transform (SIFT) algorithm
CN104616035A (en) * 2015-03-12 2015-05-13 哈尔滨工业大学 Visual Map rapid matching method based on global image feature and SURF algorithm
CN105117712A (en) * 2015-09-15 2015-12-02 北京天创征腾信息科技有限公司 Single-sample human face recognition method compatible for human face aging recognition
CN105760815A (en) * 2016-01-26 2016-07-13 南京大学 Heterogeneous human face verification method based on portrait on second-generation identity card and video portrait

Non-Patent Citations (5)

Title
Palm vein verification system based on SIFT matching; Pierre-Olivier Ladoux et al.; International Conference on Biometrics (ICB 2009): Advances in Biometrics; 2009-12-31; pp. 1290-1298 *
SIFT and gray scale invariant features for palmprint identification using complex directional wavelet and local binary pattern; Meiru Mu et al.; Neurocomputing; 2011-10-31; vol. 74, no. 17; pp. 3351-3360 *
Research on a LabVIEW-based palm vein identity recognition system; 蓝惠英; China Master's Theses Full-text Database, Information Science and Technology; 2014-02-15, no. 2; pp. I138-663 *
Application research of an improved SIFT algorithm in image data similarity matching; 张德全 et al.; Computer Science; 2014-06-30; vol. 41, no. 6A; pp. 122-124, 146 *
Research on palm vein biometric recognition using texture neighbor patterns; 林森 et al.; Chinese Journal of Scientific Instrument; 2015-10-31; vol. 36, no. 1; pp. 2230-2238 *

Also Published As

Publication number Publication date
CN107145829A (en) 2017-09-08

Similar Documents

Publication Publication Date Title
CN107145829B (en) Palm vein identification method integrating textural features and scale invariant features
CN111611643B (en) Household vectorization data acquisition method and device, electronic equipment and storage medium
US9619733B2 (en) Method for generating a hierarchical structured pattern based descriptor and method and device for recognizing object using the same
CN109740606B (en) Image identification method and device
CN109299720A (en) A kind of target identification method based on profile segment spatial relationship
CN104850822B (en) Leaf identification method under simple background based on multi-feature fusion
CN110738216A (en) Medicine identification method based on improved SURF algorithm
CN111199558A (en) Image matching method based on deep learning
CN110738222B (en) Image matching method and device, computer equipment and storage medium
CN111753119A (en) Image searching method and device, electronic equipment and storage medium
CN114782715B (en) Vein recognition method based on statistical information
CN110942473A (en) Moving target tracking detection method based on characteristic point gridding matching
CN109840529B (en) Image matching method based on local sensitivity confidence evaluation
CN108694411B (en) Method for identifying similar images
Yang et al. Graph evolution-based vertex extraction for hyperspectral anomaly detection
CN110852292A (en) Sketch face recognition method based on cross-modal multi-task depth measurement learning
Shen et al. Satellite objects extraction and classification based on similarity measure
CN114358166A (en) Multi-target positioning method based on self-adaptive k-means clustering
CN110766708B (en) Image comparison method based on contour similarity
CN109544614B (en) Method for identifying matched image pair based on image low-frequency information similarity
CN110969128A (en) Method for detecting infrared ship under sea surface background based on multi-feature fusion
CN108763265B (en) Image identification method based on block retrieval
CN111292346A (en) Method for detecting contour of casting box body in noise environment
CN115578778A (en) Human face image feature extraction method based on trace transformation and LBP (local binary pattern)
CN116311391A (en) High-low precision mixed multidimensional feature fusion fingerprint retrieval method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200522