CN107980140B - Palm vein identification method and device - Google Patents
- Publication number
- CN107980140B CN201780001261.7A CN201780001261A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
Abstract
The embodiments of the present application disclose a palm vein identification method and device for extracting a certain number of feature points from each local part of the palm region of interest, thereby improving the palm vein recognition rate. The method in the embodiments of the present application comprises the following steps: acquiring a target palm vein image of a user; extracting an image of a region of interest (ROI) from the target palm vein image; dividing the ROI image into at least two sub-regions; extracting target feature points from each sub-region using a preset algorithm; comparing the extracted target feature points with preset feature points to obtain matching point pairs; judging whether each matching point pair is a true match; retaining the matching point pair if it is a true match; and rejecting the matching point pair if it is a false match.
Description
Technical Field
The present application relates to the technical field of biometric identification, and in particular to a palm vein identification method and device.
Background
Palm vein recognition collects images of the veins beneath the palm skin, exploiting the property that human hemoglobin absorbs near-infrared light as blood flows through the veins, and extracts the vein pattern as a biometric feature. Palm vein recognition is contactless, which is more hygienic and makes it suitable for use in public places. At the same time, presenting a palm is a natural gesture, so the method is more readily accepted by users. Compared with other biometric technologies such as fingerprint and iris recognition, palm vein recognition has three distinguishing characteristics: it relies on a living body, an internal feature, and contactless operation. Palm veins are therefore extremely difficult to copy or forge, the user's palm vein features are hard to counterfeit, and the security level of a palm vein recognition system is high, making it especially suitable for settings with strict security requirements.
At present, palm vein recognition methods are based on holistic subspace learning: the whole palm vein pattern serves as a global description, and the palm vein image is projected into a subspace to extract a feature vector. For example, palm vein matching may be performed with an eigen-feature method, global matching may be applied to a Laplacian-palm feature image formed by fusing palm print and palm vein images, and local structural features of the palm vein image may be extracted.
In holistic subspace learning methods, because of how palm veins are distributed, feature points cluster in the palm region near the thenar while the region near the hypothenar yields few feature points; the resulting incomplete feature information lowers the recognition rate of the algorithm.
Disclosure of Invention
The embodiments of the present application provide a palm vein identification method and device for extracting a certain number of feature points from each local part of the palm region of interest, thereby improving the palm vein recognition rate.
A first aspect of the present application provides a palm vein identification method, comprising: acquiring a target palm vein image of a user; extracting an image of a region of interest (ROI) from the target palm vein image; dividing the ROI image into at least two sub-regions; extracting target feature points from each sub-region using a preset algorithm; comparing the extracted target feature points with preset feature points to obtain matching point pairs; judging whether each matching point pair is a true match; retaining the matching point pair if it is a true match; and rejecting the matching point pair if it is a false match.
In one possible design, in a first implementation of the first aspect of the embodiments of the present application, judging whether each matching point pair is a true match comprises: dividing the ROI image into G × G grids of identical size, where G is a positive integer greater than 1, and mapping each matching point pair to the corresponding positions in the G × G grids to obtain a preset feature point grid image and a target feature point grid image; determining, as a matching grid pair, a first grid and a second grid that share the largest number of matching point pairs, where the first grid is located in the preset feature point grid image and the second grid in the target feature point grid image; judging whether each matching point pair belongs to its corresponding matching grid pair; if the matching point pair belongs to the corresponding matching grid pair, determining that it is a true match; and if not, determining that it is a false match.
In a possible design, in a second implementation of the first aspect of the embodiments of the present application, judging whether each matching point pair belongs to its corresponding matching grid pair comprises: calculating a threshold for the first grid and a score for the second grid; judging whether the score is greater than the threshold; if so, determining that the matching point pair belongs to the corresponding matching grid pair; and if not, determining that it does not.
In a possible design, in a third implementation of the first aspect of the embodiments of the present application, extracting target feature points from each sub-region using a preset algorithm comprises: adjusting a sampling threshold for each sub-region, where the sampling threshold is used to determine the target feature points; and determining, as a target feature point, a target point in each sub-region whose parameter value is greater than the sampling threshold.
In a possible design, in a fourth implementation of the first aspect of the embodiments of the present application, the preset algorithm is any one of the oriented FAST and rotated BRIEF (ORB) algorithm, the scale-invariant feature transform (SIFT) algorithm, or the speeded-up robust features (SURF) algorithm.
A second aspect of the present application provides a palm vein identification device, comprising: an acquisition unit, configured to acquire a target palm vein image of a user; a first extraction unit, configured to extract an image of a region of interest (ROI) from the target palm vein image; a dividing unit, configured to divide the ROI image into at least two sub-regions; a second extraction unit, configured to extract target feature points from each sub-region using a preset algorithm; a comparison unit, configured to compare the extracted target feature points with preset feature points to obtain matching point pairs; a judging unit, configured to judge whether each matching point pair is a true match; a retaining unit, configured to retain the matching point pair if it is a true match; and a rejecting unit, configured to reject the matching point pair if it is a false match.
In a possible design, in a first implementation of the second aspect of the embodiments of the present application, the judging unit comprises: a processing module, configured to divide the ROI image into G × G grids of identical size, where G is a positive integer greater than 1, and to map each matching point pair to the corresponding positions in the G × G grids to obtain a preset feature point grid image and a target feature point grid image; a first determining module, configured to determine, as a matching grid pair, a first grid and a second grid that share the largest number of matching point pairs, where the first grid is located in the preset feature point grid image and the second grid in the target feature point grid image; a judging module, configured to judge whether each matching point pair belongs to its corresponding matching grid pair; a second determining module, configured to determine that the matching point pair is a true match if it belongs to the corresponding matching grid pair; and a third determining module, configured to determine that the matching point pair is a false match if it does not.
In a possible design, in a second implementation of the second aspect of the embodiments of the present application, the judging module is specifically configured to: calculate a threshold for the first grid and a score for the second grid; judge whether the score is greater than the threshold; if so, determine that the matching point pair belongs to the corresponding matching grid pair; and if not, determine that it does not.
In a possible design, in a third implementation of the second aspect of the embodiments of the present application, the second extraction unit comprises: an adjusting module, configured to adjust a sampling threshold for each sub-region, where the sampling threshold is used to determine the target feature points; and a fourth determining module, configured to determine, as a target feature point, a target point in each sub-region whose parameter value is greater than the sampling threshold.
In a possible design, in a fourth implementation of the second aspect of the embodiments of the present application, the preset algorithm is any one of the oriented FAST and rotated BRIEF (ORB) algorithm, the scale-invariant feature transform (SIFT) algorithm, or the speeded-up robust features (SURF) algorithm.
A third aspect of the present application provides a palm vein identification device, comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the palm vein identification device to perform the method of the above aspects.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the above-described aspects.
A fifth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the above-described aspects.
According to the above technical solutions, the embodiments of the present application have the following advantages:
according to the technical solution provided by the embodiments of the present application, a target palm vein image of a user is acquired; an image of a region of interest (ROI) is extracted from the target palm vein image; the ROI image is divided into at least two sub-regions; target feature points are extracted from each sub-region using a preset algorithm; the extracted target feature points are compared with preset feature points to obtain matching point pairs; each matching point pair is judged to be a true or false match; true matches are retained and false matches are rejected. In the embodiments of the present application, the ROI image is divided into several sub-regions and feature points are extracted from each sub-region, so that a certain number of feature points are extracted from every part of the palm region of interest, improving the palm vein recognition rate.
Drawings
Fig. 1 is a schematic diagram of an embodiment of a palm vein identification method in an embodiment of the present application;
fig. 2 is a schematic diagram of another embodiment of a palm vein identification method in an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the positions of feature points in the ROI area according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the distribution of feature points in a grid in the embodiment of the present application;
FIG. 5 is a diagram illustrating a comparison between a matching point pair and a matching grid pair in the embodiment of the present application;
fig. 6 is a schematic view of an embodiment of a palm vein identification device in an embodiment of the present application;
fig. 7 is a schematic view of another embodiment of the palm vein identification device in the embodiment of the present application;
fig. 8 is a schematic view of another embodiment of the palm vein identification device in the embodiment of the present application.
Detailed Description
The embodiments of the present application provide a palm vein identification method and device for extracting a certain number of feature points in each local area of the palm region of interest, thereby improving the palm vein recognition rate.
To help those skilled in the art better understand the solution of the present application, the embodiments of the present application are described below with reference to the accompanying drawings.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For the sake of understanding, a specific flow of the embodiment of the present application is described below, and referring to fig. 1, an embodiment of the method for identifying a palm vein in the embodiment of the present application includes:
101. and acquiring a target palm vein image of the user.
The user places a palm within the scanning range of the recognition device, and the device turns on near-infrared illumination to acquire the user's target palm vein image. Because venous blood absorbs near-infrared light, less light is reflected at the vein vessels, which therefore appear darker than their surroundings, forming the palm vein pattern.
Near-infrared light is electromagnetic radiation between visible light and mid-infrared light, with a wavelength in the range of 780 to 2526 nm. It can be understood that during identification the palm side of the user's hand receives the near-infrared illumination and the whole palm lies within the irradiation range of the recognition device, ensuring that the acquired target palm vein image is complete.
102. And extracting an image of the region of interest ROI from the target palm vein image.
An image of a region of interest (ROI) is extracted from the target palm vein image. The pixel size of the ROI region may be set to 184 × 184, 129 × 129, or other values, and may be set according to actual needs, which is not limited herein.
103. The image of the ROI is divided into at least two sub-regions.
The ROI image is divided into at least two sub-regions. Specifically, it is divided equally into N × N sub-regions, where N is a positive integer greater than 1. In general, N = 2 or N = 3 gives a reasonable distribution of feature points; N may also take other values, which is not limited here.
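As an illustration of the equal division just described, the following sketch splits a square ROI into N × N sub-regions. The function name `split_into_subregions` is hypothetical, and the 184 × 184 size is one of the ROI sizes mentioned in step 102:

```python
import numpy as np

def split_into_subregions(roi: np.ndarray, n: int) -> list:
    """Split a square ROI image into n x n equally sized sub-regions.

    Returns a list of n*n sub-images in row-major order. Assumes the
    ROI side length is divisible by n, as with a 184 x 184 ROI and n = 2.
    """
    h, w = roi.shape[:2]
    sh, sw = h // n, w // n
    return [roi[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            for r in range(n) for c in range(n)]

# A 184 x 184 ROI split into 2 x 2 sub-regions of 92 x 92 pixels each.
roi = np.zeros((184, 184), dtype=np.uint8)
subs = split_into_subregions(roi, 2)
```

Each sub-region is then handed to the feature extraction step independently, which is what guarantees feature coverage of the whole palm.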
104. And extracting target characteristic points from each sub-area by adopting a preset algorithm.
After the ROI image is divided into N × N equal sub-regions, target feature points are extracted from each sub-region using a preset algorithm. Specifically, the oriented FAST and rotated BRIEF (ORB) algorithm is used for feature extraction and description. The sampling threshold differs between sub-regions and can be adjusted per sub-region; it is used to decide whether a target point is a target feature point, so that each sub-region yields a sufficient number of feature points. Specifically, when the parameter value of a target point is greater than the sampling threshold of its sub-region, the point is determined to be a target feature point; when the parameter value is less than or equal to the threshold, the point is determined not to be a target feature point and is excluded. All target points in the ROI image are traversed, and target feature points meeting the quantity requirement are extracted.
It can be understood that the preset algorithm may also be another feature extraction algorithm, for example the scale-invariant feature transform (SIFT) algorithm or the speeded-up robust features (SURF) algorithm; other feature extraction algorithms are also possible, and no limitation is imposed here.
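The adaptive sampling threshold described in step 104 can be sketched as follows. This is a minimal illustration of the idea rather than the actual ORB detector: `select_features` is a hypothetical helper that lowers a sub-region's threshold until enough candidate points exceed it, so a weakly textured sub-region (e.g. near the hypothenar) still contributes feature points.

```python
import numpy as np

def select_features(responses: np.ndarray, quota: int) -> np.ndarray:
    """Pick feature points from one sub-region by thresholding.

    `responses` holds the detector response (e.g. a corner score) of
    every candidate point in the sub-region. The sampling threshold is
    lowered until at least `quota` points exceed it; the indices of the
    selected points are returned.
    """
    threshold = responses.max()
    while (responses > threshold).sum() < quota and threshold > 0:
        threshold -= 1
    return np.flatnonzero(responses > threshold)

# A sub-region with mixed responses still yields at least 3 points.
resp = np.array([5, 1, 2, 8, 3, 7], dtype=float)
idx = select_features(resp, 3)
```

In a real detector the threshold would be the FAST intensity threshold rather than a unit-step scan, but the per-sub-region adjustment is the same.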
105. And comparing the extracted target characteristic points with preset characteristic points to obtain matching point pairs.
The extracted target feature points are compared with preset feature points to obtain matching point pairs. Specifically, the extracted target feature points are compared against a database containing preset feature points and their feature parameters. After the target feature points are obtained, the feature parameters of each target feature point are compared and matched with the feature parameters in the database, and a preset feature point and a target feature point with the same feature parameters are determined to be a matching point pair.
It should be noted that various matching methods may be used; specifically, the matching method may be the brute-force (BF) algorithm or another matching algorithm, and the specific method is not limited here.
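As a hedged sketch of the brute-force comparison, the following matches bit-packed binary descriptors (the format ORB produces) by exhaustive Hamming distance; `brute_force_match` is an illustrative name, not a library API:

```python
import numpy as np

def brute_force_match(target_desc: np.ndarray, preset_desc: np.ndarray):
    """Brute-force matching of binary descriptors by Hamming distance.

    For each target descriptor, every preset descriptor is compared and
    the closest one is kept, mirroring the BF matching step. Each
    descriptor is a row of bit-packed uint8 values.
    """
    pairs = []
    for i, d in enumerate(target_desc):
        # Hamming distance = number of differing bits after XOR.
        dists = [int(np.unpackbits(d ^ p).sum()) for p in preset_desc]
        j = int(np.argmin(dists))
        pairs.append((i, j, dists[j]))
    return pairs

# Two 8-bit descriptors on each side; each finds its nearest partner.
target = np.array([[0b10110000], [0b00001111]], dtype=np.uint8)
preset = np.array([[0b00001111], [0b10110001]], dtype=np.uint8)
matches = brute_force_match(target, preset)
```

A production system would also apply a distance cutoff or ratio test before accepting a pair; here every nearest neighbour is returned so the later grid-based screening can reject false matches.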
106. And judging whether each pair of matching point pairs is true matching.
Whether each matching point pair is a true match is judged. Specifically, each obtained matching point pair is screened: pairs that meet the screening criterion are determined to be true matches, and pairs that do not are false matches. If the match is true, step 107 is performed; if it is a false match, step 108 is performed.
107. And reserving the matching point pairs.
If the matching point pair is true matching, the matching point pair is reserved.
108. And eliminating the matching point pairs.
And if the matching point pair is false matching, rejecting the matching point pair.
In the embodiment of the application, the image of the ROI is divided into a plurality of sub-regions, and the feature points are extracted from each sub-region, so that a certain number of feature points are extracted from each part of the palm region of interest, and the palm vein identification rate is improved.
Referring to fig. 2, another embodiment of the method for identifying a palm vein in an embodiment of the present application includes:
201. and acquiring a target palm vein image of the user.
The user places a palm within the scanning range of the recognition device, and the device turns on near-infrared illumination to acquire the user's target palm vein image. Because venous blood absorbs near-infrared light, less light is reflected at the vein vessels, which therefore appear darker than their surroundings, forming the palm vein pattern.
Near-infrared light is electromagnetic radiation between visible light and mid-infrared light, with a wavelength in the range of 780 to 2526 nm. It can be understood that during identification the palm side of the user's hand receives the near-infrared illumination and the whole palm lies within the irradiation range of the recognition device, ensuring that the acquired target palm vein image is complete.
202. And extracting an image of the region of interest ROI from the target palm vein image.
An image of a region of interest (ROI) is extracted from the target palm vein image. The pixel size of the ROI region may be set to 184 × 184, 129 × 129, or other values, and may be set according to actual needs, which is not limited herein.
203. The image of the ROI is divided into at least two sub-regions.
The ROI image is divided into at least two sub-regions. Specifically, it is divided equally into N × N sub-regions, where N is a positive integer greater than 1. In general, N = 2 or N = 3 gives a reasonable distribution of feature points; N may also take other values, which is not limited here.
204. And extracting target characteristic points from each sub-area by adopting a preset algorithm.
After the ROI image is divided into N × N equal sub-regions, target feature points are extracted from each sub-region using the oriented FAST and rotated BRIEF (ORB) algorithm. Specifically, the ORB algorithm is used for feature extraction and description. The sampling threshold differs between sub-regions and can be adjusted per sub-region; it is used to decide whether a target point is a target feature point, so that each sub-region yields a sufficient number of feature points. Specifically, when the parameter value of a target point is greater than the sampling threshold of its sub-region, the point is determined to be a target feature point; when the parameter value is less than or equal to the threshold, the point is determined not to be a target feature point and is excluded. All target points in the ROI image are traversed, and target feature points meeting the quantity requirement are extracted.
It is to be understood that another feature extraction algorithm may also be used, for example the scale-invariant feature transform (SIFT) algorithm or the speeded-up robust features (SURF) algorithm; other feature extraction algorithms are also possible, and no limitation is imposed here.
205. And comparing the extracted target characteristic points with preset characteristic points to obtain matching point pairs.
The extracted target feature points are compared with preset feature points to obtain matching point pairs. Specifically, the extracted target feature points are compared against a database containing preset feature points and their feature parameters. After the target feature points are obtained, the feature parameters of each target feature point are compared and matched with the feature parameters in the database, and a preset feature point and a target feature point with the same feature parameters are determined to be a matching point pair.
It should be noted that various matching methods may be used; specifically, the matching method may be the brute-force (BF) algorithm or another matching algorithm, and the specific method is not limited here.
For example, as shown in fig. 3, after the target feature points are compared with the database using a BF algorithm, 5 matching point pairs with identical parameters are obtained; in the figure they are distinguished by different geometric shapes (other numbers of matching point pairs and other shapes are equally possible). In this embodiment, the feature point represented by a triangle in the preset feature point image and the feature point represented by a triangle in the target feature point image form one matching point pair; the feature points represented by the other shapes pair up in the same way, giving 5 matching point pairs in total.
206. And acquiring a preset characteristic point grid image and a target characteristic point grid image of the matching point pair.
The ROI image is divided into G × G grids of identical size, where G is a positive integer greater than 1, and each matching point pair is mapped to the corresponding positions in the G × G grids to obtain a preset feature point grid image and a target feature point grid image. The grid may be 4 × 4, and G may also take another value, for example G = 20.
To avoid matching points falling on grid edges or even on grid lines, the grids in the right-hand drawing are shifted by half a grid cell vertically and horizontally when the grid is laid out, which gives the best matching result. Likewise, to handle matching between images that differ greatly in size, the target feature point grid image is scaled by a suitable factor to obtain the best matching result.
For example, as shown in fig. 4, the ROI image is divided into 4 × 4 grids of identical size, and the feature points of the acquired preset feature point image and of the target feature point image are mapped to the corresponding positions of the 4 × 4 grids, giving the preset feature point grid image and the target feature point grid image.
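The mapping of an image point to its grid cell, including the half-cell shift described above, can be sketched as follows; `grid_cell` is a hypothetical helper:

```python
def grid_cell(x: float, y: float, size: int, g: int,
              shift: bool = False) -> int:
    """Map an image point to the index of its cell in a g x g grid.

    `size` is the ROI side length. With `shift=True` the grid is moved
    by half a cell in both directions, the trick used to keep matching
    points away from cell borders.
    """
    cell = size / g
    off = cell / 2 if shift else 0.0
    col = min(int((x + off) // cell), g - 1)
    row = min(int((y + off) // cell), g - 1)
    return row * g + col

# 184 x 184 ROI, 4 x 4 grid: each cell is 46 x 46 pixels. A point near
# a cell border lands in a different cell once the grid is shifted.
c0 = grid_cell(45, 45, 184, 4)         # unshifted grid: cell 0
c1 = grid_cell(45, 45, 184, 4, True)   # half-cell shift: cell 5
```

Running the matcher on both the shifted and unshifted grids and keeping the better result is one way to exploit this, though the patent text only states that the shifted layout matches best.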
207. And determining a matching lattice pair according to the preset characteristic point lattice image and the target characteristic point lattice image.
A first grid and a second grid that share the largest number of matching point pairs are determined to be a matching grid pair, where the first grid is located in the preset feature point grid image and the second grid in the target feature point grid image.
For example, as shown in fig. 5, the number of feature points in each grid of the preset feature point grid image and in each grid of the target feature point grid image is counted. The first grid of the preset feature point grid image contains three preset feature points: one represented by a triangle, one by a circle, and one by a square. The second grid of the target feature point grid image contains two feature points, one represented by a triangle and one by a circle, while another grid of the target feature point grid image contains the feature point represented by a square. Since the second grid shares the largest number of feature points with the first grid, the second grid is determined to be the grid matching the first grid; that is, the first grid and the second grid form a tentative matching grid pair, marked with oblique hatching in the drawing for ease of understanding.
Specifically, the threshold of the first grid is calculated with the first grid and the second grid of the tentative matching grid pair as centers. The threshold is computed as

τ_i = α · sqrt(n_i)

where n_i is the average number of feature points contained in the first grid and its eight neighboring grids in the preset feature point grid image in fig. 5, and α = 6 is an empirically set value. The score of the second grid is computed as

S_ij = Σ_{k=1}^{9} X_{i^k j^k}

where k indexes the second grid and its eight neighboring grids, and X_{ij} is the number of matching point pairs that fall both in the i-th grid of the preset feature point grid image and in the corresponding j-th grid of the target feature point grid image in fig. 5. If the score is greater than the threshold, the first grid and the second grid in fig. 5 form a matching grid pair. All feature points are traversed to determine all matching grid pairs.
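A minimal numeric sketch of this threshold-and-score test, assuming the threshold is α times the square root of the mean feature count over the nine cells and the score is the sum of matching pairs over those cells, with α = 6 as the empirical value given in this step; `is_true_grid_pair` and its argument names are illustrative:

```python
import math
import numpy as np

def is_true_grid_pair(feat_counts: np.ndarray, match_counts: np.ndarray,
                      alpha: float = 6.0) -> bool:
    """Decide whether a tentative grid pair is a true match.

    `feat_counts` holds the feature-point counts of the first grid and
    its eight neighbours in the preset grid image (nine values);
    `match_counts` holds, for the same nine cells, the number of
    matching point pairs that also fall in the corresponding target
    cells. The pair is accepted when the score exceeds the threshold
    tau = alpha * sqrt(mean(feat_counts)).
    """
    tau = alpha * math.sqrt(float(np.mean(feat_counts)))
    score = float(np.sum(match_counts))
    return score > tau

# Nine cells averaging 4 features each -> tau = 6 * sqrt(4) = 12.
feats = np.full(9, 4)
accept = is_true_grid_pair(feats, np.full(9, 2))  # score 18 > 12
reject = is_true_grid_pair(feats, np.full(9, 1))  # score 9 <= 12
```

The intuition is statistical: correct matches are supported by many neighbouring matches moving consistently, while a random false match rarely accumulates enough support in its neighbourhood to clear the threshold.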
208. And judging whether each pair of matching point pairs belongs to the corresponding matching lattice pair.
Whether each matching point pair belongs to its corresponding matching grid pair is judged. Specifically, the threshold of the first grid and the score of the second grid are calculated, and it is judged whether the score is greater than the threshold. If so, the matching point pair is determined to belong to the corresponding matching grid pair, and step 209 is performed; if not, the matching point pair is determined not to belong to the corresponding matching grid pair, and step 210 is performed. All matching point pairs in the ROI image are traversed.
It should be noted that there is no fixed order between calculating the threshold of the first lattice and the score of the second lattice: the threshold may be calculated first and the score second, the score first and the threshold second, or both simultaneously; this is not specifically limited here.
209. And determining the matching point pair as a true match and reserving the matching point pair.
And if the matching point pair belongs to the corresponding matching lattice pair, determining the matching point pair as a true match, and reserving the matching point pair.
210. And determining the matching point pair as a false match, and rejecting the matching point pair.
And if the matching point pair does not belong to the corresponding matching lattice pair, determining the matching point pair as a false match, and rejecting the matching point pair.
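Steps 208 to 210 together form a keep/reject loop over all matched pairs. A minimal illustrative sketch (all names are assumed; `matching_pairs` stands for the set of lattice pairs already judged true by the threshold/score test):

```python
def filter_matches(matches, cell_of_preset, cell_of_target, matching_pairs):
    """Keep a match only if its (preset cell, target cell) is a matching lattice pair.

    matches        : list of (preset_point, target_point) index pairs
    cell_of_preset : maps a preset point index to its lattice index
    cell_of_target : maps a target point index to its lattice index
    matching_pairs : set of (first_lattice, second_lattice) pairs judged true
    """
    kept, rejected = [], []
    for p, t in matches:
        pair = (cell_of_preset[p], cell_of_target[t])
        # True match if the pair falls inside a matching lattice pair.
        (kept if pair in matching_pairs else rejected).append((p, t))
    return kept, rejected
```

The kept list is what survives into the final recognition decision; the rejected list holds the false matches that are removed.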
In the embodiment of the application, the image of the ROI is divided into a plurality of sub-regions, the feature points are extracted from each sub-region, a certain number of feature points are extracted from each local region to be matched, the matched point pairs are judged, the false matching point pairs are removed, the true matching point pairs are reserved, the algorithm recognition rate is improved, and the palm vein recognition rate is further improved.
The method for identifying a palm vein in the embodiment of the present application has been described above. Referring to fig. 6, the device for identifying a palm vein in the embodiment of the present application is described below; one embodiment of the device includes:
an acquisition unit 601 configured to acquire a target palm vein image of a user;
a first extraction unit 602, configured to extract an image of a region of interest ROI from the target palm vein image;
a dividing unit 603 configured to divide an image of the ROI into at least two sub-regions;
a second extracting unit 604, configured to extract a target feature point from each sub-region by using a preset algorithm;
a comparing unit 605, configured to perform feature comparison on the extracted target feature points and preset feature points to obtain matching point pairs;
a determining unit 606, configured to determine whether each pair of matching point pairs is a true match;
a retaining unit 607 for retaining the matching point pair if the matching is true;
and a rejecting unit 608, configured to reject the matching point pair if the matching point pair is a false match.
In the embodiment of the application, the image of the ROI is divided into a plurality of sub-regions, and the feature points are extracted from each sub-region, so that a certain number of feature points are extracted from each part of the palm region of interest, and the palm vein identification rate is improved.
Referring to fig. 7, another embodiment of the device for identifying a palm vein in an embodiment of the present application includes:
an acquisition unit 701 configured to acquire a target palm vein image of a user;
a first extraction unit 702, configured to extract an image of a region of interest ROI from the target palm vein image;
a dividing unit 703 for dividing the image of the ROI into at least two sub-regions;
a second extracting unit 704, configured to extract a target feature point from each sub-region by using a preset algorithm;
a comparing unit 705, configured to perform feature comparison on the extracted target feature points and preset feature points to obtain matching point pairs;
a determining unit 706, configured to determine whether each pair of matching point pairs is a true match;
a retaining unit 707, configured to retain the matching point pair if the matching is true;
and a rejecting unit 708, configured to reject the matching point pair if the matching point pair is a false match.
Optionally, the determining unit 706 may further include:
a processing module 7061, configured to divide the image of the ROI into G × G grids with the same specification, where G is a positive integer greater than 1, and map each pair of matching point pairs to corresponding positions of the G × G grids to obtain a preset feature point grid image and a target feature point grid image;
a first determining module 7062, configured to determine, as a matching lattice pair, the first lattice and the second lattice that share the largest number of matching point pairs, where the first lattice is located in the preset feature point lattice image, and the second lattice is located in the target feature point lattice image;
a judging module 7063, configured to judge whether each pair of matching point pairs belongs to a corresponding matching lattice pair;
a second determining module 7064, configured to determine that the matching point pair is a true match if the matching point pair belongs to a corresponding matching lattice pair;
a third determining module 7065, configured to determine that the matching point pair is a false match if the matching point pair does not belong to a corresponding matching lattice pair.
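A minimal sketch of what the processing module 7061 and first determining module 7062 compute; the grid size G, the image size, and the coordinate-to-cell mapping are assumptions not fixed by the text:

```python
import numpy as np


def to_cell(pt, width, height, G):
    """Map an (x, y) point to its flat index in a G x G grid over the image."""
    col = min(int(pt[0] * G / width), G - 1)
    row = min(int(pt[1] * G / height), G - 1)
    return row * G + col


def matching_lattice_pairs(matches, preset_pts, target_pts, size, G):
    """For each preset cell, pick the target cell receiving most of its matches.

    Returns the set of (first lattice, second lattice) pairs plus the count
    matrix X, where X[i, j] counts matches from preset cell i to target cell j.
    """
    w, h = size
    X = np.zeros((G * G, G * G), dtype=int)
    for p, t in matches:
        X[to_cell(preset_pts[p], w, h, G), to_cell(target_pts[t], w, h, G)] += 1
    pairs = {(i, int(X[i].argmax())) for i in range(G * G) if X[i].sum() > 0}
    return pairs, X
```

The count matrix X is the same quantity used later by the threshold/score test when each tentative lattice pair is verified.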
Optionally, the determining module 7063 may be further specifically configured to:
calculating a threshold value of the first grid and a score value of the second grid;
judging whether the scoring value is larger than the threshold value;
if yes, determining that the matching point pair belongs to a corresponding matching lattice pair; if not, determining that the matching point pair does not belong to the corresponding matching lattice pair.
Optionally, the second extracting unit 704 may further include:
an adjusting module 7041, configured to adjust a sampling threshold according to each of the sub-regions, where the sampling threshold is used to determine the target feature point;
a fourth determining module 7042, configured to determine a target point in each of the sub-areas as a target feature point, where a parameter value of the target point is greater than the sampling threshold.
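The adjusting module 7041 and fourth determining module 7042 can be sketched as follows. The "parameter value" here is an illustrative local-contrast response and the adjustment rule is an assumption, since the text specifies neither:

```python
import numpy as np


def extract_region_features(region, base_thresh=20.0, scale=0.5):
    """Adjust the sampling threshold to the sub-region's own contrast, then
    keep every pixel whose response exceeds it as a target feature point."""
    # Illustrative response: absolute deviation from the sub-region's mean intensity.
    response = np.abs(region.astype(float) - region.mean())
    # Assumed adjustment rule: blend a base threshold with the local spread,
    # so flat sub-regions get a lower bar and busy ones a higher bar.
    thresh = scale * base_thresh + scale * response.std()
    ys, xs = np.where(response > thresh)
    return list(zip(xs.tolist(), ys.tolist())), thresh
```

Because the threshold adapts per sub-region, even low-contrast parts of the ROI contribute some feature points, which is the stated goal of dividing the ROI before extraction.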
In the embodiment of the application, the image of the ROI is divided into a plurality of sub-regions, the feature points are extracted from each sub-region, a certain number of feature points are extracted from each local region to be matched, the matched point pairs are judged, the false matching point pairs are removed, the true matching point pairs are reserved, the algorithm recognition rate is improved, and the palm vein recognition rate is further improved.
Fig. 6 to 7 describe the device for identifying a palm vein in the embodiment of the present application in detail from the perspective of the modular functional entity; the device for identifying a palm vein in the embodiment of the present application is described in detail below from the perspective of hardware processing.
Fig. 8 is a schematic structural diagram of a device for identifying a palm vein according to an embodiment of the present application. The device 800 for identifying a palm vein may differ considerably in configuration or performance, and may include one or more processors (CPUs) 801 (e.g., one or more processors), a memory 809, and one or more storage media 808 (e.g., one or more mass storage devices) storing an application 807 or data 806. The memory 809 and the storage media 808 may be transient or persistent storage. The program stored on the storage medium 808 may include one or more modules (not shown), each of which may include a series of instruction operations on the palm vein identification device. Further, the processor 801 may be configured to communicate with the storage medium 808 to execute the series of instruction operations in the storage medium 808 on the palm vein identification device 800.
The device 800 may also include one or more power supplies 802, one or more wired or wireless network interfaces 803, one or more input/output interfaces 804, and/or one or more operating systems 805, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD. Those skilled in the art will appreciate that the structure of the palm vein identification device shown in fig. 8 does not limit the palm vein identification device, which may include more or fewer components than those shown, combine some components, or use a different arrangement of components.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (9)
1. A method for identifying a palm vein, comprising:
acquiring a target palm vein image of a user;
extracting an image of a region of interest (ROI) from the target palm vein image;
dividing an image of the ROI into at least two sub-regions;
extracting target characteristic points from each sub-region by adopting a preset algorithm;
comparing the extracted target characteristic points with preset characteristic points to obtain matching point pairs;
judging whether each pair of matching point pairs is true matching;
if the matching is true matching, the matching point pair is reserved; if the matching point pair is false matching, the matching point pair is removed;
the determining whether each pair of matching point pairs is true matching includes:
dividing the image of the ROI into G×G lattices with the same specification, wherein G is a positive integer larger than 1, and mapping each pair of matching point pairs to corresponding positions of the G×G lattices to obtain a preset characteristic point lattice image and a target characteristic point lattice image;
determining, as a matching lattice pair, a first lattice and a second lattice sharing the largest number of matching point pairs, wherein the first lattice is located in the preset feature point lattice image, and the second lattice is located in the target feature point lattice image;
judging whether each pair of matching point pairs belongs to a corresponding matching lattice pair;
if the matching point pair belongs to the corresponding matching lattice pair, determining that the matching point pair is true matching;
if the matching point pair does not belong to the corresponding matching lattice pair, determining that the matching point pair is false matching;
the judging whether each pair of matching point pairs belongs to the corresponding matching lattice pair comprises:
calculating a threshold value of the first grid and a score value of the second grid;
judging whether the scoring value is larger than the threshold value;
if yes, determining that the matching point pair belongs to a corresponding matching lattice pair; if not, determining that the matching point pair does not belong to the corresponding matching lattice pair.
2. The identification method according to claim 1, wherein said extracting the target feature point from each of the sub-regions by using a preset algorithm comprises:
adjusting a sampling threshold according to each sub-region, wherein the sampling threshold is used for determining the target characteristic point;
and determining a target point in each sub-area as a target characteristic point, wherein the parameter value of the target point is greater than the sampling threshold value.
3. The identification method according to claim 1, wherein the preset algorithm is any one of an ORB (Oriented FAST and Rotated BRIEF) algorithm, a SIFT (Scale-Invariant Feature Transform) algorithm, or a SURF (Speeded-Up Robust Features) algorithm.
4. A device for identifying a palm vein, comprising:
the acquisition unit is used for acquiring a target palm vein image of a user;
a first extraction unit, which is used for extracting an image of a region of interest ROI from the target palm vein image;
a dividing unit for dividing the image of the ROI into at least two sub-regions;
the second extraction unit is used for extracting target feature points from each sub-area by adopting a preset algorithm;
the comparison unit is used for comparing the extracted target characteristic points with preset characteristic points to obtain matching point pairs;
the judging unit is used for judging whether each pair of matching point pairs is true matching;
a reserving unit, configured to reserve the matching point pair if the matching point pair is true matching;
the rejecting unit is used for rejecting the matching point pair if the matching point pair is false matching;
the judging unit includes:
the processing module is used for dividing the image of the ROI into G×G grids with the same specification, wherein G is a positive integer larger than 1, and mapping each pair of matching point pairs to corresponding positions of the G×G grids to obtain a preset feature point grid image and a target feature point grid image;
a first determining module, configured to determine, as a matching lattice pair, a first lattice and a second lattice sharing the largest number of matching point pairs, where the first lattice is located in the preset feature point lattice image, and the second lattice is located in the target feature point lattice image;
the judging module is used for judging whether each pair of matching point pairs belongs to the corresponding matching lattice pair;
the second determining module is used for determining that the matching point pair is true matching if the matching point pair belongs to the corresponding matching lattice pair;
a third determining module, configured to determine that the matching point pair is a false match if the matching point pair does not belong to a corresponding matching lattice pair;
the judgment module is specifically configured to:
calculating a threshold value of the first grid and a score value of the second grid;
judging whether the scoring value is larger than the threshold value;
if yes, determining that the matching point pair belongs to a corresponding matching lattice pair; if not, determining that the matching point pair does not belong to the corresponding matching lattice pair.
5. The recognition device according to claim 4, wherein the second extraction unit includes:
the adjusting module is used for adjusting a sampling threshold according to each sub-region, and the sampling threshold is used for determining the target feature point;
and the fourth determination module is used for determining a target point in each sub-area as a target characteristic point, and the parameter value of the target point is greater than the sampling threshold value.
6. The identification device according to claim 4, wherein the preset algorithm is any one of an ORB (Oriented FAST and Rotated BRIEF) algorithm, a SIFT (Scale-Invariant Feature Transform) algorithm, or a SURF (Speeded-Up Robust Features) algorithm.
7. A device for identifying a palm vein, comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line;
the at least one processor invokes the instructions in the memory to cause the palm vein identification device to perform the method of any one of claims 1-3.
8. A computer device, characterized in that the computer device comprises a processor configured to implement the steps of the method according to any one of claims 1-3 when executing a computer program stored in a memory.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program realizing the steps of the method according to any one of claims 1-3 when executed by a processor.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/106307 WO2019075601A1 (en) | 2017-10-16 | 2017-10-16 | Palm vein recognition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107980140A CN107980140A (en) | 2018-05-01 |
CN107980140B (en) | 2021-09-14
Family
ID=62006180
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780001261.7A Active CN107980140B (en) | 2017-10-16 | 2017-10-16 | Palm vein identification method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107980140B (en) |
WO (1) | WO2019075601A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020107267A1 (en) * | 2018-11-28 | 2020-06-04 | 华为技术有限公司 | Image feature point matching method and device |
CN109361880A (en) | 2018-11-30 | 2019-02-19 | 三星电子(中国)研发中心 | A kind of method and system showing the corresponding dynamic picture of static images or video |
CN109977909B (en) * | 2019-04-04 | 2021-04-20 | 山东财经大学 | Finger vein identification method and system based on minutia area matching |
CN110298944B (en) * | 2019-06-13 | 2021-08-10 | Oppo(重庆)智能科技有限公司 | Vein unlocking method and vein unlocking device |
CN110879956A (en) * | 2019-07-12 | 2020-03-13 | 熵加网络科技(北京)有限公司 | Method for extracting palm print features |
CN110633511B (en) * | 2019-08-27 | 2023-05-09 | 广东工业大学 | Modularized design method and device for personalized medical instrument |
CN110598640B (en) * | 2019-09-16 | 2022-06-07 | 杭州奔巴慧视科技有限公司 | Hand vein recognition method based on transfer learning |
CN112861846B (en) * | 2019-11-12 | 2024-04-19 | 北京地平线机器人技术研发有限公司 | Method and device for processing tensor data |
CN111163442B (en) * | 2019-12-27 | 2021-11-23 | 咻享智能(深圳)有限公司 | Route planning method and related device for wireless Internet of things |
CN111553241B (en) * | 2020-04-24 | 2024-05-07 | 平安科技(深圳)有限公司 | Palm print mismatching point eliminating method, device, equipment and storage medium |
CN112861743B (en) * | 2021-02-20 | 2023-07-14 | 厦门熵基科技有限公司 | Palm vein image anti-counterfeiting method, device and equipment |
CN113705344B (en) * | 2021-07-21 | 2024-10-01 | 西安易掌慧科技有限公司 | Palm print recognition method and device based on full palm, terminal equipment and storage medium |
CN113693636B (en) * | 2021-08-30 | 2023-11-24 | 南方科技大学 | Sampling method, sampling system and storage medium |
CN114241536B (en) * | 2021-12-01 | 2022-07-29 | 佛山市红狐物联网科技有限公司 | Palm vein image identification method and system |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8818048B2 (en) * | 2010-01-22 | 2014-08-26 | Indiana University Research And Technology Corp. | System and method for cancelable iris recognition |
TW201447625A (en) * | 2013-06-07 | 2014-12-16 | Univ Chien Hsin Sci & Tech | Palm vein recognition method using adaptive Gabor filter |
CN104915965A (en) * | 2014-03-14 | 2015-09-16 | 华为技术有限公司 | Camera tracking method and device |
CN105005776A (en) * | 2015-07-30 | 2015-10-28 | 广东欧珀移动通信有限公司 | Fingerprint identification method and device |
CN105474234A (en) * | 2015-11-24 | 2016-04-06 | 厦门中控生物识别信息技术有限公司 | Method and apparatus for palm vein recognition |
CN105551012A (en) * | 2014-11-04 | 2016-05-04 | 阿里巴巴集团控股有限公司 | Method and system for reducing wrong matching pair in computer image registration |
CN105760841A (en) * | 2016-02-22 | 2016-07-13 | 桂林航天工业学院 | Identify recognition method and identify recognition system |
CN106056040A (en) * | 2016-05-18 | 2016-10-26 | 深圳市源厚实业有限公司 | Palm vein identification method and device |
CN106651827A (en) * | 2016-09-09 | 2017-05-10 | 浙江大学 | Fundus image registering method based on SIFT characteristics |
CN106778510A (en) * | 2016-11-25 | 2017-05-31 | 江西师范大学 | A kind of ultra high resolution remote sensing images middle-high building characteristic point matching method |
WO2017116331A1 (en) * | 2015-12-30 | 2017-07-06 | Gebze Teknik Universitesi | Stereo palm vein detection method and biometric identification system operating in compliance with said method |
CN107145829A (en) * | 2017-04-07 | 2017-09-08 | 电子科技大学 | A kind of vena metacarpea recognition methods for merging textural characteristics and scale invariant feature |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6532642B2 (en) * | 2013-07-30 | 2019-06-19 | 富士通株式会社 | Biometric information authentication apparatus and biometric information processing method |
CN103824086A (en) * | 2014-03-24 | 2014-05-28 | 东方网力科技股份有限公司 | Image matching method and device |
US9659205B2 (en) * | 2014-06-09 | 2017-05-23 | Lawrence Livermore National Security, Llc | Multimodal imaging system and method for non-contact identification of multiple biometric traits |
CN104751475B (en) * | 2015-04-16 | 2017-09-26 | 中国科学院软件研究所 | A kind of characteristic point Optimum Matching method towards still image Object identifying |
CN105608409B (en) * | 2015-07-16 | 2019-01-11 | 宇龙计算机通信科技(深圳)有限公司 | The method and device of fingerprint recognition |
2017
- 2017-10-16 WO PCT/CN2017/106307 patent/WO2019075601A1/en active Application Filing
- 2017-10-16 CN CN201780001261.7A patent/CN107980140B/en active Active
Non-Patent Citations (2)
Title |
---|
Contact-Free Palm-Vein Recognition Based on Local Invariant Features; Wenxiong Kang et al.; PLOS ONE; 2014-05-27; pp. 1-12 *
Research on Palm Vein Recognition Technology; Wu Wei; China Doctoral Dissertations Full-text Database (Information Science and Technology); 2014-10-15 (No. 10); pp. I138-52 *
Also Published As
Publication number | Publication date |
---|---|
WO2019075601A1 (en) | 2019-04-25 |
CN107980140A (en) | 2018-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107980140B (en) | Palm vein identification method and device | |
US10762366B2 (en) | Finger vein identification method and device | |
Syarif et al. | Enhanced maximum curvature descriptors for finger vein verification | |
Li et al. | Iris recognition based on a novel variation of local binary pattern | |
CN110717372A (en) | Identity verification method and device based on finger vein recognition | |
CN103870808B (en) | Finger vein identification method | |
Frucci et al. | WIRE: Watershed based iris recognition | |
Ibrahim et al. | Iris localization using local histogram and other image statistics | |
EP2092460A1 (en) | Method and apparatus for extraction and matching of biometric detail | |
Ambeth Kumar et al. | Exploration of an innovative geometric parameter based on performance enhancement for foot print recognition | |
Khotimah et al. | Iris recognition using feature extraction of box counting fractal dimension | |
Johar et al. | Iris segmentation and normalization using Daugman’s rubber sheet model | |
CN110574036A (en) | Detection of nerves in a series of echographic images | |
Aleem et al. | Fast and accurate retinal identification system: Using retinal blood vasculature landmarks | |
CN115953824B (en) | Face skin image processing method and system | |
Bouaziz et al. | A cuckoo search algorithm for fingerprint image contrast enhancement | |
Lynn et al. | Melanoma classification on dermoscopy skin images using bag tree ensemble classifier | |
de Brito Silva et al. | Classification of breast masses in mammograms using geometric and topological feature maps and shape distribution | |
Tallapragada et al. | Iris recognition based on combined feature of GLCM and wavelet transform | |
Abdel-Latif et al. | Achieving information security by multi-modal iris-retina biometric approach using improved mask R-CNN | |
Jayalakshmi et al. | A study of Iris segmentation methods using fuzzy C-means and K-means clustering algorithm | |
Poosarala | Uniform classifier for biometric ear and retina authentication using smartphone application | |
Oueslati et al. | Identity verification through dorsal hand vein texture based on NSCT coefficients | |
Patil et al. | Automatic blood vessels segmentation & extraction in fundus images for identification | |
Ramasubramanian et al. | A novel approach for automated detection of exudates using retinal image processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: Room 1301, No. 132, Fengqi Road, Phase III, Software Park, Xiamen City, Fujian Province
Applicant after: Xiamen Entropy Technology Co., Ltd.
Address before: Room 2001, No. 8 North Street, Software Park Phase III, Xiamen, Fujian Province, 361000
Applicant before: XIAMEN ZKTECO BIOMETRIC IDENTIFICATION TECHNOLOGY Co.,Ltd.
|
GR01 | Patent grant | ||