WO2017059591A1 - Finger vein identification method and device (手指静脉识别方法及装置) - Google Patents

Finger vein identification method and device (手指静脉识别方法及装置) Download PDF

Info

Publication number
WO2017059591A1
Authority
WO
WIPO (PCT)
Prior art keywords
line
region
finger vein
gray value
point
Prior art date
Application number
PCT/CN2015/091630
Other languages
English (en)
French (fr)
Inventor
车全宏
陈书楷
Original Assignee
厦门中控生物识别信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 厦门中控生物识别信息技术有限公司
Priority to US15/767,176 priority Critical patent/US10762366B2/en
Priority to PCT/CN2015/091630 priority patent/WO2017059591A1/zh
Priority to CN201580000591.5A priority patent/CN105518716A/zh
Publication of WO2017059591A1 publication Critical patent/WO2017059591A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1382Detecting the live character of the finger, i.e. distinguishing from a fake or cadaver finger
    • G06V40/1388Detecting the live character of the finger, i.e. distinguishing from a fake or cadaver finger using image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/32Normalisation of the pattern dimensions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/7515Shifting the patterns to accommodate for positional errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Definitions

  • The invention relates to biometric identification technology, and in particular to a finger vein recognition method and device.
  • Biometric technology uses physical or behavioral characteristics of the human body for personal identity authentication: physical characteristics such as fingerprints, palm shape, iris, body odor and facial shape, and behavioral characteristics such as signature, voice and gait.
  • Among these technologies, fingerprint recognition is very widely applied because of its strong uniqueness, stability and ease of use.
  • However, a fingerprint is an external biological feature. The user is required to keep the finger clean and smooth when enrolling a fingerprint, and any dirt or stain on the fingerprint can make identification difficult. Fingerprints are also easy to counterfeit, and cloned fingerprints made of silicone have even appeared, so the safety factor of fingerprint recognition technology is relatively low.
  • Because vein features are the pattern of flowing blood, vein recognition is an inherently "living body" biometric method: the identified object must be a living person for the vein pattern to be acquired during identification. Vein patterns are difficult to forge or alter surgically, so the safety factor is high.
  • Vein recognition mainly includes dorsal hand vein recognition, palm vein recognition and finger vein recognition. Like fingerprints, finger veins have strong universality and uniqueness, so finger vein recognition has become a new field of biometric technology in recent years.
  • In finger vein recognition, finger vein images are usually acquired with transmitted or reflected light; vein features are then extracted from the finger vein image and matched to confirm the user's identity.
  • In view of the above drawbacks, an embodiment of the present invention provides a finger vein recognition method and apparatus for effectively extracting finger vein recognition features so as to perform finger vein recognition.
  • A first aspect of the present invention provides a finger vein recognition method, which may include: acquiring a finger vein image; extracting a region of interest from the finger vein image by straight-line fitting; performing geometric normalization and gray-level normalization on the region of interest to obtain a processed region; determining finger vein lines from the processed region to obtain a finger vein map; and performing finger vein recognition according to the finger vein map.
  • A second aspect of the present invention provides a finger vein recognition apparatus, which may include:
  • an acquisition module configured to acquire a finger vein image;
  • a region extraction module configured to extract a region of interest from the finger vein image by straight-line fitting;
  • an image processing module configured to perform geometric normalization and gray-level normalization on the region of interest to obtain a processed region, and to determine finger vein lines from the processed region to obtain a finger vein map;
  • an identification module configured to perform finger vein recognition according to the finger vein map.
  • It can be seen from the above technical solutions that, in the embodiment of the present invention, the finger vein image is first acquired and the region of interest is extracted from it by straight-line fitting; geometric normalization and gray-level normalization are then applied to the region of interest to obtain a processed region, and the finger vein lines, which are the finger vein recognition features, are determined from the processed region to obtain a finger vein map. Finger vein recognition can then be performed according to the finger vein map, so the embodiment of the present invention can effectively extract finger vein recognition features for finger vein recognition.
  • FIG. 1a is a schematic diagram of an original vein image according to an embodiment of the present invention.
  • FIG. 1b is a schematic diagram of a finger vein image of a specific application according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a finger vein recognition method according to an embodiment of the present invention.
  • FIG. 3a is a schematic flowchart of determining a region of interest according to an embodiment of the present invention.
  • FIG. 3b is a schematic flowchart of determining the upper edge fitting line and the lower edge fitting line by straight-line fitting;
  • FIG. 3c is a schematic diagram of dividing a finger vein image into six sub-regions according to an embodiment of the present invention;
  • FIG. 3d is an edge response diagram of one sub-region integrated along the y-axis according to an embodiment of the present invention;
  • FIG. 3e is an edge response diagram of two sub-regions integrated along the y-axis according to another embodiment of the present invention;
  • FIG. 3f is a schematic diagram of fitted straight lines according to an embodiment of the present invention;
  • FIG. 3g is a schematic diagram of a specific application of the fitted straight lines;
  • FIG. 4a is a schematic flowchart of a region normalization method according to an embodiment of the present invention.
  • FIG. 4b is a schematic flowchart of a region normalization method according to another embodiment of the present invention.
  • FIG. 4c is a schematic flowchart of a region normalization method according to another embodiment of the present invention;
  • FIG. 4d is a schematic diagram of an application of the normalized region according to some embodiments of the present invention;
  • FIG. 5a is a schematic flowchart of geometric normalization according to some embodiments of the present invention.
  • FIG. 5b is a schematic diagram of finger projection according to some embodiments of the present invention.
  • FIG. 6a is a schematic flowchart of gray-level normalization according to some embodiments of the present invention;
  • FIG. 6b is a schematic diagram of a finger vein map according to some embodiments of the present invention;
  • FIG. 7a is a schematic flowchart of a method for determining finger vein lines according to some embodiments of the present invention;
  • FIG. 7b is a schematic diagram of a finger vein map according to some embodiments of the present invention;
  • FIG. 8 is a schematic flowchart diagram of a denoising processing method according to some embodiments of the present invention.
  • FIG. 9 is a schematic structural diagram of a finger vein recognition device according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a finger vein recognition device according to an embodiment of the present invention.
  • Embodiments of the present invention provide a finger vein recognition method for effectively extracting finger vein recognition features so as to perform finger vein recognition.
  • An embodiment of the invention further provides a corresponding finger vein recognition device.
  • Based on this, finger vein images are acquired with a corresponding acquisition system: the finger is placed at the acquisition position and illuminated with infrared light, the original vein image is formed on an image sensor, the original vein image in the image sensor is stored into memory, and the original vein image is then read from memory for processing.
  • FIG. 1a is a schematic diagram of an original vein image provided by an embodiment of the present invention
  • FIG. 1b is a schematic diagram of a finger vein image of a specific application according to an embodiment of the present invention.
  • The image sensor contains a frame area in addition to the imageable area, and the imaged area is the finger vein image referred to in the embodiments of the present invention. Therefore, the original vein image stored in memory includes both the finger vein image and the frame area.
  • As shown in FIG. 2, a finger vein recognition method may include the following steps.
  • 201: Acquire a finger vein image. Continuing to refer to FIG. 1, the finger vein image in the embodiment of the present invention does not include the frame area in FIG. 1.
  • 202: Extract a region of interest from the finger vein image by straight-line fitting. The region of interest is the part of the finger vein image that needs to be processed; in the embodiment of the present invention it refers to the finger veins of interest.
  • In the embodiment of the present invention, the region of interest is extracted from the finger vein image by straight-line fitting: the four edge lines of the region of interest are determined in the finger vein image, and the region of interest is then determined from these four edge lines.
  • 203: Perform geometric normalization and gray-level normalization on the region of interest to obtain a processed region. Since the region of interest determined in step 202 may not be a regular geometric shape, it is first geometrically normalized and then gray-level normalized for convenience of subsequent processing.
  • 204: Determine the finger vein lines from the processed region to obtain a finger vein map.
  • 205: Perform finger vein recognition according to the finger vein map.
  • It can be seen that the finger vein image is acquired first, the region of interest is extracted from it by straight-line fitting, and geometric and gray-level normalization are applied to obtain the processed region. The finger vein lines, which are the finger vein recognition features, are then determined from the processed region to obtain the finger vein map, and finger vein recognition can be performed according to that map. The embodiment of the present invention can therefore effectively extract finger vein recognition features for finger vein recognition, and the algorithms involved in the whole process are relatively simple, with little computation and fast processing.
  • It can be understood that, as needed, the finger vein map is denoised after it is obtained, to effectively suppress noise.
  • It should be noted that, because the width of the finger is generally no larger than the imaged area of the image sensor while the length of the finger is no smaller than that area, the embodiment of the present invention only needs to determine the upper edge fitting line and the lower edge fitting line of the region of interest, and then determines the region of interest in combination with the left and right edge lines of the finger vein image.
  • Therefore, referring to FIG. 3a, step 202 specifically includes the following steps:
  • A1. Determine the upper edge fitting line and the lower edge fitting line by straight-line fitting;
  • A2. Determine the region of interest according to the left and right edge lines of the region of interest together with the upper edge fitting line and the lower edge fitting line.
  • Some embodiments of the present invention provide a method for determining the upper edge fitting line and the lower edge fitting line by straight-line fitting. As shown in FIG. 3b, determining the upper edge fitting line and the lower edge fitting line by straight-line fitting includes:
  • B1. dividing the finger vein image evenly into upper and lower halves by a middle line, and dividing the finger vein image into at least two sub-regions along the x-axis;
  • B2. integrating each sub-region along the y-axis to obtain upper edge fitting points and lower edge fitting points, fitting the upper edge fitting points to obtain the upper edge fitting line, and fitting the lower edge fitting points to obtain the lower edge fitting line.
  • It can be understood that the region of interest falls completely within the finger vein image. When the middle line divides the finger vein image evenly into upper and lower halves, the upper edge fitting line of the region of interest lies in the upper half and the lower edge fitting line lies in the lower half, and the gray values of the pixels inside the region of interest are not smaller than those of the pixels in the remaining part of the finger vein image outside the region of interest.
  • For example, referring to FIGS. 3c to 3e, the middle line divides the finger vein image into two halves, and the finger vein image is then divided into at least two sub-regions along the x-axis (six sub-regions, labelled x0 to x6, are taken as an example in FIG. 3c). Each sub-region is integrated along the y-axis; the integration yields an array, and each array forms a curve on the image display. The difference of the integral values in the upper half is then calculated by the formula Y[j+2] + Y[j+1] - Y[j] - Y[j-1]; the maximum of this difference marks the upper boundary of the veins and is taken as the y coordinate, while the center of the horizontal segment is taken as the x coordinate, so that six upper edge points are obtained.
  • A straight line is fitted to the six upper edge points by the least-squares formula (1), y = a + b*x. Similarly, the difference of the integral values in the lower half is calculated by Y[j] + Y[j-1] - Y[j+2] - Y[j+1] to obtain six lower edge points, and a straight line is fitted to the six lower edge points by formula (1).
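  • The following is a minimal sketch, not taken from the patent, of the edge-point search and least-squares fit described above. It assumes a NumPy grayscale image with row index y and column index x; the number of sub-regions (six) and the two difference formulas follow the text, while the helper name and all other details are illustrative assumptions.

```python
import numpy as np

def fit_edge_lines(img: np.ndarray, n_sub: int = 6):
    """Return ((a_up, b_up), (a_low, b_low)) for edge lines y = a + b*x."""
    h, w = img.shape
    mid = h // 2
    upper_pts, lower_pts = [], []
    for k in range(n_sub):
        x0, x1 = k * w // n_sub, (k + 1) * w // n_sub
        strip = img[:, x0:x1].astype(np.float64)
        Y = strip.sum(axis=1)                    # integral of the strip along y
        xc = (x0 + x1) // 2                      # x coordinate = centre of the strip
        # upper half: maximise Y[j+2] + Y[j+1] - Y[j] - Y[j-1]
        ju = np.arange(1, mid - 2)
        du = Y[ju + 2] + Y[ju + 1] - Y[ju] - Y[ju - 1]
        upper_pts.append((xc, int(ju[np.argmax(du)])))
        # lower half: maximise Y[j] + Y[j-1] - Y[j+2] - Y[j+1]
        jl = np.arange(mid + 1, h - 2)
        dl = Y[jl] + Y[jl - 1] - Y[jl + 2] - Y[jl + 1]
        lower_pts.append((xc, int(jl[np.argmax(dl)])))

    def least_squares(points):                   # formula (1): y = a + b*x
        xs, ys = np.array(points, dtype=float).T
        b, a = np.polyfit(xs, ys, 1)             # np.polyfit returns [slope, intercept]
        return a, b

    return least_squares(upper_pts), least_squares(lower_pts)
```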
  • For example, as shown in FIG. 3d, one of the six sub-regions (shown in the middle of FIG. 3d) is integrated along the y-axis, and the left side of the figure shows the edge response of that sub-region. As described above, the integration yields an array over y, which is displayed as the white curve on the left. As shown in FIG. 3e, two of the six sub-regions are integrated along the y-axis, and the left side shows the edge responses of these two sub-regions. In the same way, six highest points are selected on the upper edge and fitted with a straight line to obtain the upper edge fitting line, and six lowest points are selected on the lower edge and fitted with a straight line to obtain the lower edge fitting line.
  • To make the fitting more robust, the correlation of the fit can be calculated; if the correlation between the six selected points is too poor, the fitting is performed again.
  • The correlation between the six points can be calculated by formula (3).
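  • Formula (3) is not reproduced in this text, so the check below is a hedged sketch that assumes the standard Pearson correlation coefficient as the measure of how well the six edge points lie on a straight line; the 0.9 threshold is likewise an illustrative assumption.

```python
import numpy as np

def points_correlate_well(points, threshold: float = 0.9) -> bool:
    """Crude robustness check: refit (or drop outliers) when this returns False."""
    xs, ys = np.array(points, dtype=float).T
    r = np.corrcoef(xs, ys)[0, 1]    # Pearson correlation between x and y
    return bool(abs(r) >= threshold)
```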
  • FIG. 3f shows the upper edge fitting line and the lower edge fitting line obtained in the above manner; the finger vein image is segmented by these lines and then combined with the left and right edge lines of the finger vein image to obtain the region of interest.
  • FIG. 3g, based on FIG. 1b, is a schematic diagram in which the upper and lower edge fitting lines are fitted in the above manner and the region of interest is then segmented by them.
  • In some embodiments of the present invention, step 203 specifically includes: normalizing the region of interest by an affine transformation to obtain a normalized region; geometrically normalizing the normalized region by an elliptical transformation to obtain a geometric region; and performing gray-level normalization on the geometric region to obtain the processed region.
  • In some embodiments of the present invention, as shown in FIG. 4a, normalizing the region of interest by the affine transformation to obtain the normalized region includes:
  • 401: determining the upper-left intersection of the upper edge fitting line of the region of interest with the left edge line of the finger vein image, the lower-left intersection of the lower edge fitting line with the left edge line, the upper-right intersection of the upper edge fitting line with the right edge line, and the lower-right intersection of the lower edge fitting line with the right edge line;
  • 402: normalizing the region of interest by an affine transformation according to the upper-left, upper-right, lower-left and lower-right intersections to obtain the normalized region.
  • Determining the upper-left and lower-left intersections includes: determining a first starting point on the left edge line of the finger vein image and determining the upper-left and lower-left intersections according to it, where the distance from the first starting point to the upper edge fitting line of the region of interest is equal to the distance from the first starting point to the lower edge fitting line of the region of interest.
  • Determining the upper-right and lower-right intersections includes: determining a first end point on the right edge line of the finger vein image and determining the upper-right and lower-right intersections according to it, where the distance from the first end point to the upper edge fitting line is equal to the distance from the first end point to the lower edge fitting line.
  • In the embodiment of the present invention, the first starting point is found on the left edge line of the finger vein image such that its distance to the upper edge fitting line equals its distance to the lower edge fitting line, the distance from a point (x1, y1) to a straight line y = a*x + b being computed by the standard point-to-line distance formula d = |a*x1 - y1 + b| / sqrt(a^2 + 1).
  • In some embodiments of the present invention, as shown in FIG. 4b, step 402 above specifically includes:
  • 411: determining the center line of the region of interest according to the first starting point and the first end point;
  • 412: from the first starting point of the center line to the first end point of the center line, determining the center-line intersection points one by one, determining the straight line that passes through each center-line intersection point and is perpendicular to the center line, and normalizing onto that straight line both the line segment between the center-line intersection point and the upper edge fitting line of the region of interest and the line segment between the center-line intersection point and the lower edge line of the region of interest, so as to obtain the normalized region.
  • Referring to FIGS. 4c and 4d, FIG. 4c shows the correspondence between the upper-left, upper-right, lower-left and lower-right intersections before normalization and the same intersections after normalization; a rectangular normalized region is obtained after normalization. FIG. 4d shows the normalized region obtained by normalizing FIG. 3f.
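  • A hedged sketch of the corner-point normalization follows. The text describes an affine transformation driven by the four intersection points; since four point pairs determine a perspective (homography) warp, OpenCV's four-point warp is used here as a stand-in, and the output size is an arbitrary assumption rather than a value from the source.

```python
import cv2
import numpy as np

def normalize_region(img: np.ndarray, tl, tr, bl, br, out_w: int = 256, out_h: int = 128):
    """Map the quadrilateral (tl, tr, bl, br) onto an out_w x out_h rectangle."""
    src = np.float32([tl, tr, bl, br])                     # measured intersections
    dst = np.float32([[0, 0], [out_w - 1, 0],
                      [0, out_h - 1], [out_w - 1, out_h - 1]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (out_w, out_h))
```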
  • As shown in FIG. 5a, geometrically normalizing the normalized region by the elliptical transformation to obtain the geometric region specifically includes: 501: calculating the geometrically transformed coordinates of every pixel in the normalized region; 502: geometrically transforming the normalized region according to those coordinates to obtain the geometric region.
  • It should be noted that the finger is elliptical in cross-section. When the elliptical finger is projected onto a plane, the image resolution at the edges of the finger contour is lower and the resolution at the center of the contour is higher, so an elliptical transformation is needed for correction.
  • FIG. 5b is a schematic diagram of finger projection according to an embodiment of the present invention.
  • In FIG. 5b, the ellipse radius of the finger is r. As the imaging position on the finger surface moves uniformly from one side to the other, the angle a between the horizontal direction and the line connecting the finger center point to that position increases uniformly from 0 to 180 degrees, while the projection y' of the position onto the y-axis increases from 0 to height. As a result, the image resolution at the edges of the finger contour is lower and that at the center of the contour is higher, i.e. the resolution is uneven; a correction formula (4) is derived from FIG. 5b accordingly.
  • Here, height corresponds to the height of a pixel in the corrected geometric region. It can be understood that, when geometrically normalizing the region, a blank image of the geometric region, whose height and width are already determined, is created first, and every pixel of the normalized region is then mapped into this blank image. Assuming a pixel corresponds to height in the blank image, the angle a is obtained from the change in height, r is calculated by formula (4), and the y coordinate of the pixel is then calculated from r and a.
  • In this embodiment, the resolution of the geometric region is 180*180.
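  • The sketch below illustrates this elliptical correction. Formula (4) itself is not reproduced here, so the sketch assumes r equals half the source height and that output row i corresponds to the uniform angle a = pi * i / (H - 1), sampled back at source row y = r * (1 - cos a); the function name and the nearest-neighbour sampling are illustrative choices.

```python
import numpy as np

def ellipse_normalize(region: np.ndarray, out_h: int = 180, out_w: int = 180) -> np.ndarray:
    """Resample rows so that the output is uniform in the angle a (0..180 degrees)."""
    src_h, src_w = region.shape
    r = src_h / 2.0
    xs = np.clip(np.arange(out_w) * src_w // out_w, 0, src_w - 1)
    out = np.zeros((out_h, out_w), dtype=region.dtype)
    for i in range(out_h):
        a = np.pi * i / (out_h - 1)                  # uniform angle step
        y = int(round(r * (1.0 - np.cos(a))))        # projected source row
        y = min(max(y, 0), src_h - 1)
        out[i, :] = region[y, xs]                    # nearest-neighbour sampling
    return out
```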
  • As shown in FIG. 6a, performing gray-level normalization on the geometric region to obtain the processed region specifically includes:
  • 601: taking each pixel in the geometric region in turn as a first central pixel and, with the first central pixel as the center of a square, marking a square with side length d;
  • 602: calculating the mean and variance of the gray values of all pixels inside the square;
  • 603: changing the gray value of the first central pixel according to the mean and the variance to obtain the processed region.
  • With the mean of the gray values of all pixels in the square denoted m and their variance denoted v, the gray value of the first central pixel is transformed by formula (5): gray = (gray - m) * v0 / v + m0, where v0 is the original variance of the first central pixel and m0 is the original gray value of the first central pixel.
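  • A minimal sketch of formula (5) applied with a sliding d x d window follows. The text defines v0 and m0 as the pixel's original variance and gray value; in this sketch they are treated as target variance and mean parameters, which is an interpretation rather than a quotation, and the window size and target values are illustrative assumptions.

```python
import numpy as np

def gray_normalize(region: np.ndarray, d: int = 15, m0: float = 128.0, v0: float = 60.0) -> np.ndarray:
    """Apply gray' = (gray - m) * v0 / v + m0, with m and v taken from a d x d window."""
    img = region.astype(np.float64)
    h, w = img.shape
    half = d // 2
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - half):y + half + 1,
                      max(0, x - half):x + half + 1]
            m, v = win.mean(), win.var()
            out[y, x] = (img[y, x] - m) * v0 / max(v, 1e-6) + m0
    return np.clip(out, 0, 255).astype(np.uint8)
```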
  • FIG. 6b is a schematic diagram of a finger vein map according to an embodiment of the present invention; on the basis of FIG. 4c, it shows the processed region obtained after the normalized region is geometrically transformed and then gray-level processed.
  • FIG. 7a is a schematic flowchart of a method for determining the finger vein lines according to an embodiment of the present invention. As shown in FIG. 7a, determining the finger vein lines from the processed region includes:
  • 701: taking each pixel of the processed region in turn as a second central pixel and, with the second central pixel as the center, marking a circle with radius r;
  • 702: within the circle, counting the pixels whose gray value is not greater than the gray value of the second central pixel;
  • 703: when the count satisfies a preset condition, determining that the second central pixel belongs to a finger vein line and setting its gray value to a first gray value; when the count does not satisfy the preset condition, determining that the second central pixel does not belong to a finger vein line and setting its gray value to a second gray value. When the first gray value is 255, the second gray value is 0; when the first gray value is 0, the second gray value is 255.
  • A gray value of 255 means that the pixel is white, and a gray value of 0 means that the pixel is black.
  • Specifically, the following operation is performed for each pixel of the processed region. The pixel is taken as the second central pixel, and a circle with radius r centered on it is marked in the processed region. A variable L is defined and initialized to 0, and the gray values of the other pixels inside the circle (excluding the second central pixel) are examined; whenever the gray value of one of these pixels is not greater than the gray value of the second central pixel, L is increased by 1. The operation is repeated until the gray values of all other pixels in the circle have been examined, and it is then judged whether L satisfies the preset condition. Specifically, the ratio of L to the number of all pixels in the circle (including the second central pixel) can be taken; if this ratio is not less than 0.472, the second central pixel is determined to belong to a vein line.
  • If the second central pixel belongs to a vein line, its gray value is set to the first gray value; if it does not, its gray value is set to the second gray value.
  • As an option, the gray value of the second central pixel is set to 255 (white) when it belongs to a vein line and to 0 (black) when it does not.
  • FIG. 7b is a schematic diagram of a finger vein map according to some embodiments of the present invention; FIG. 7b is the finger vein map obtained by extracting the vein lines from FIG. 6b.
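  • The following is a minimal sketch of the vein-line decision described above: for every pixel, the neighbours inside a circle whose gray value does not exceed the central pixel's are counted, and the pixel is marked white (255) when the ratio reaches 0.472. The radius value is an assumption, since the text leaves r unspecified, and border pixels are simply left black here.

```python
import numpy as np

def extract_vein_lines(region: np.ndarray, r: int = 4, ratio_thresh: float = 0.472) -> np.ndarray:
    """Return a binary map in which vein-line pixels are 255 and the rest are 0."""
    img = region.astype(np.int32)
    h, w = img.shape
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    in_circle = (dy ** 2 + dx ** 2) <= r * r
    offsets = list(zip(dy[in_circle], dx[in_circle]))   # includes the centre (0, 0)
    total = len(offsets)                                 # all pixels in the circle
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        for x in range(r, w - r):
            c = img[y, x]
            L = sum(1 for oy, ox in offsets
                    if (oy, ox) != (0, 0) and img[y + oy, x + ox] <= c)
            if L / total >= ratio_thresh:
                out[y, x] = 255                          # belongs to a vein line
    return out
```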
  • FIG. 8 is a schematic flowchart of a denoising method according to some embodiments of the present invention. As shown in FIG. 8, denoising the finger vein map according to the embodiment of the present invention includes:
  • 801: determining, in turn, the gray value of each pixel in the finger vein map and the gray values of its 8 neighboring pixels. It can be understood that the pixels of the finger vein map are arranged in a matrix, so each pixel has pixels at 8 neighboring positions.
  • 802: judging, according to the gray value of the pixel and the gray values of its 8 neighboring pixels, whether the gray value of the pixel needs to be changed to the gray value corresponding to the preset condition;
  • 803: if so, changing the gray value of the pixel to the gray value corresponding to the preset condition. The gray values corresponding to the preset condition include a first gray value and a second gray value, where the second gray value is 0 when the first gray value is 255, and is 255 when the first gray value is 0.
  • The preset condition is specifically: if the gray value of the pixel is the first gray value and the gray values of the pixels to its left and right are both the second gray value, the gray value of the pixel is changed to the second gray value; or, if the gray value of the pixel is the first gray value and the number of its 8 neighboring pixels whose gray value is the first gray value is not more than 5, the gray value of the pixel is changed to the second gray value.
  • For example, assuming that in the finger vein map the pixels belonging to vein lines are set to white and the others to black, the above preset condition corresponds to the following four cases: if the left and right neighbors of a white pixel are both black, the pixel is changed to black; if the left and right neighbors of a black pixel are both white, the pixel is changed to white; if fewer than 5 of the 8 neighbors of a white pixel are white, the pixel is changed to black; and if fewer than 5 of the 8 neighbors of a black pixel are black, the pixel is changed to white.
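  • A hedged sketch of these two rules, applied in both directions as the four cases describe, is given below. Note that the formal condition above says "not more than 5" while the worked cases say "fewer than 5"; the sketch follows the formal condition, and the sequential in-place update is an implementation assumption.

```python
import numpy as np

def denoise_vein_map(vein: np.ndarray) -> np.ndarray:
    """Apply the left/right rule and the 8-neighbour rule to a 0/255 binary map."""
    img = vein.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = img[y, x]
            other = 0 if v == 255 else 255
            # rule 1: both horizontal neighbours carry the opposite value
            if img[y, x - 1] == other and img[y, x + 1] == other:
                img[y, x] = other
                continue
            # rule 2: at most 5 of the 8 neighbours share the pixel's value
            nb = img[y - 1:y + 2, x - 1:x + 2]
            same = int(np.count_nonzero(nb == v)) - 1   # exclude the centre itself
            if same <= 5:
                img[y, x] = other
    return img
```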
  • In the embodiment of the present invention, the finger vein map obtained in the above manner can be saved in a database as a finger vein template. When finger vein recognition is performed, the finger vein map to be identified is acquired in the same manner and matched against the finger vein template. Alternatively, finger vein templates are stored in the database, and the obtained finger vein map is matched against the finger vein templates in the database.
  • In either case, performing finger vein recognition according to the finger vein map in the embodiment of the present invention includes: counting the total number Wab of pixels that belong to vein lines at the same coordinate position in both the finger vein map and another finger vein map, and counting the total number Wa of pixels belonging to vein lines in the finger vein map and the total number Wb of pixels belonging to vein lines in the other finger vein map; calculating a matching rate from the totals Wab, Wa and Wb; and judging whether the matching rate satisfies a preset condition, and if so, recognizing that the finger vein of the finger vein map matches the finger vein of the other finger vein map.
  • For example, given finger vein maps a and b, which are first restored to 64*64 black-and-white binary images, the matching method is as follows:
  • assuming that during processing the gray values of the pixels belonging to vein lines were set to 255 (white), the total number Wa of pixels with gray value 255 in finger vein map a is counted, the total number Wb of pixels with gray value 255 in finger vein map b is counted, and the total number Wab of pixels whose gray value is 255 in both finger vein maps a and b is counted; matching can then be performed by formula (6).
  • Here S is the matching rate; if the matching rate is not less than a certain value, finger vein maps a and b can be considered to match.
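  • A hedged sketch of the match-rate computation follows. Formula (6) itself is not reproduced in this text, so a Dice-style ratio S = 2 * Wab / (Wa + Wb) is assumed here simply because it is computable from exactly the three counts defined above; the threshold in the usage note is likewise an assumption.

```python
import numpy as np

def match_rate(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Match rate of two equally sized 0/255 vein maps."""
    a = (map_a == 255)
    b = (map_b == 255)
    wa, wb = int(a.sum()), int(b.sum())
    wab = int(np.count_nonzero(a & b))       # vein pixels at the same coordinates
    return 2.0 * wab / (wa + wb) if (wa + wb) else 0.0

# usage: the two maps are considered to match when match_rate(a, b) >= some threshold
```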
  • FIG. 9 is a schematic structural diagram of a finger vein recognition device according to an embodiment of the present invention. As shown in FIG. 9, a finger vein recognition device may include:
  • the collecting module 910 is configured to collect a finger vein image
  • a region extraction module 920 configured to extract a region of interest from the finger vein image by using a straight line fitting manner
  • the image processing module 930 is configured to perform geometric normalization and gray normalization processing on the region of interest to obtain a processed region, and determine a finger vein line from the processed region to obtain a finger vein pattern;
  • the identification module 940 is configured to perform finger vein recognition according to the finger vein pattern.
  • In the embodiment of the present invention, the collecting module 910 collects the finger vein image, and the region extraction module 920 extracts the region of interest from the finger vein image by straight-line fitting. The image processing module 930 then performs geometric normalization and gray-level normalization on the region of interest obtained by the region extraction module 920 to obtain the processed region, and determines the finger vein lines, which are the finger vein recognition features, from the processed region to obtain the finger vein map. The recognition module 940 can then perform finger vein recognition according to the finger vein map, so the embodiment of the present invention can effectively extract finger vein recognition features for finger vein recognition.
  • In some embodiments, the finger vein recognition apparatus provided by the embodiment of the present invention further includes a noise processing module 1010, as shown in FIG. 10, configured to denoise the finger vein map obtained by the image processing module 930 before the recognition module 940 performs finger vein recognition according to the finger vein map.
  • In some embodiments, the region extraction module 920 is specifically configured to determine the upper edge fitting line and the lower edge fitting line by straight-line fitting, and to determine the region of interest according to the left and right edge lines of the region of interest together with the upper edge fitting line and the lower edge fitting line.
  • In some embodiments, the region extraction module 920 is specifically configured to divide the finger vein image evenly into upper and lower halves by a middle line and divide the finger vein image into at least two sub-regions along the x-axis; and to integrate each sub-region along the y-axis to obtain upper edge fitting points and lower edge fitting points, fit the upper edge fitting points to obtain the upper edge fitting line, and fit the lower edge fitting points to obtain the lower edge fitting line.
  • In some embodiments, the image processing module 930 is specifically configured to normalize the region of interest by an affine transformation to obtain a normalized region, geometrically normalize the normalized region by an elliptical transformation to obtain a geometric region, and perform gray-level normalization on the geometric region to obtain the processed region.
  • In some embodiments, the image processing module 930 is specifically configured to determine the upper-left intersection of the upper edge fitting line of the region of interest with the left edge line of the finger vein image, the lower-left intersection of the lower edge fitting line of the region of interest with the left edge line of the finger vein image, the upper-right intersection of the upper edge fitting line of the region of interest with the right edge line of the finger vein image, and the lower-right intersection of the lower edge fitting line of the region of interest with the right edge line of the finger vein image; and to normalize the region of interest by an affine transformation according to the upper-left, upper-right, lower-left and lower-right intersections to obtain the normalized region.
  • In some embodiments, the image processing module 930 is specifically configured to determine a first starting point on the left edge line of the finger vein image and determine the upper-left and lower-left intersections according to the first starting point, the distance from the first starting point to the upper edge fitting line of the region of interest being equal to the distance from the first starting point to the lower edge fitting line of the region of interest; and to determine a first end point on the right edge line of the finger vein image and determine the upper-right and lower-right intersections according to the first end point, the distance from the first end point to the upper edge fitting line of the region of interest being equal to the distance from the first end point to the lower edge fitting line of the region of interest.
  • In some embodiments, the image processing module 930 is specifically configured to determine the center line of the region of interest according to the first starting point and the first end point; and, from the first starting point of the center line to the first end point of the center line, to determine the center-line intersection points one by one, determine the straight line that passes through each center-line intersection point and is perpendicular to the center line, and normalize onto that straight line both the line segment between the center-line intersection point and the upper edge fitting line of the region of interest and the line segment between the center-line intersection point and the lower edge line of the region of interest, so as to obtain the normalized region.
  • In some embodiments, the image processing module 930 is further configured to calculate the geometrically transformed coordinates of every pixel in the normalized region, and to geometrically transform the normalized region according to those coordinates to obtain the geometric region.
  • In some embodiments, the image processing module 930 is further configured to take each pixel in the geometric region in turn as a first central pixel and, with the first central pixel as the center of a square, mark a square with side length d; to calculate the mean and variance of the gray values of all pixels inside the square; and to change the gray value of the first central pixel according to the mean and the variance to obtain the processed region.
  • In some embodiments, the image processing module 930 is further configured to take each pixel of the processed region in turn as a second central pixel and, with the second central pixel as the center, mark a circle with radius r; to count, within the circle, the pixels whose gray value is not greater than the gray value of the second central pixel; and, when the count satisfies a preset condition, to determine that the second central pixel belongs to a finger vein line and set its gray value to a first gray value, or, when the count does not satisfy the preset condition, to determine that the second central pixel does not belong to a finger vein line and set its gray value to a second gray value, where the second gray value is 0 when the first gray value is 255, and is 255 when the first gray value is 0.
  • In some embodiments, the noise processing module 1010 is specifically configured to determine, in turn, the gray value of each pixel in the finger vein map and the gray values of its 8 neighboring pixels; to judge, according to the gray value of the pixel and the gray values of its 8 neighboring pixels, whether the gray value of the pixel needs to be changed to the gray value corresponding to the preset condition; and, if so, to change the gray value of the pixel to the gray value corresponding to the preset condition, where the gray values corresponding to the preset condition include a first gray value and a second gray value, the second gray value being 0 when the first gray value is 255 and 255 when the first gray value is 0.
  • The preset condition is: if the gray value of the pixel is the first gray value and the gray values of the pixels to its left and right are both the second gray value, the gray value of the pixel is changed to the second gray value; or, if the gray value of the pixel is the first gray value and the number of its 8 neighboring pixels whose gray value is the first gray value is not more than 5, the gray value of the pixel is changed to the second gray value.
  • In some embodiments, the identification module 940 is specifically configured to count the total number Wab of pixels that belong to vein lines at the same coordinate position in both the finger vein map and another finger vein map, and to count the total number Wa of pixels belonging to vein lines in the finger vein map and the total number Wb of pixels belonging to vein lines in the other finger vein map; to calculate a matching rate from the totals Wab, Wa and Wb; and to judge whether the matching rate satisfies a preset condition, and if so, to recognize that the finger vein of the finger vein map matches the finger vein of the other finger vein map.
  • In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and in actual implementation there may be other divisions: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A finger vein recognition method and device, used to effectively extract finger vein recognition features so as to perform finger vein recognition. The method comprises: acquiring a finger vein image (201); extracting a region of interest from the finger vein image by straight-line fitting (202); performing geometric normalization and gray-level normalization on the region of interest to obtain a processed region (203); determining finger vein lines from the processed region to obtain a finger vein map (204); and performing finger vein recognition according to the finger vein map (205).

Description

手指静脉识别方法及装置 技术领域
本发明涉及生物特征识别技术,具体涉及一种手指静脉识别方法及装置。
背景技术
生物特征识别技术是利用人体生物特征或者行为特征进行个人身份认证,身体特征如指纹、掌型、虹漠视、人体气味、脸型等,行为特征如签名、语音、步态等。在这些生物特征识别技术中,由于指纹识别具有很强的唯一性、稳定性、易用性等特点,应用极为广泛。但是,指纹是一种外在的生物特征,在指纹识别中,要求使用者在录入指纹时保持手指洁净、光滑、任何存在于指纹上的脏东西或者污点都能给识别带来困难,且指纹易于伪造,甚至出现一种用硅树脂制造的克隆指纹,导致指纹识别技术的安全系数比较低。
根据社会需求,近年来出现了一种新的生物特征识别技术---静脉识别。由于静脉特征是流动血液的纹路,因此,静脉识别是一种本质的“活体识别”生物特征识别方法,也就是说被识别对象必须是活着的人,才能满足身份识别中获取静脉血管纹路,很难伪造或是手术改变,安全系数较高。静脉识别主要包括手背静脉识别、手掌静脉识别以及手指静脉识别。其中,手指静脉和指纹一样,具有很强的普遍性和唯一性,因此,手指静脉识别成为近年来生物识别技术开辟的新领域。
在手指静脉识别过程中,通常利用透射光或者反射光两种方式获取手指静脉图像,然后从手指静脉图像中提取静脉特征,进行特征匹配,实现了手指静脉识别,确认使用者身份。
但目前手指静脉识别技术的成熟度和准确率低于指纹识别,主要原因在于,现有手指静脉图像的静脉特征提取和匹配的效率都较低,因此,如何提取手指静脉特征和实现匹配是手指静脉特征中的研究课题之一。
发明内容
针对上述缺陷,本发明实施例提供了一种手指静脉识别方法及装置,用于有效提取手指静脉识别特征,以进行手指静脉识别。
本发明第一方面提供了一种手指静脉识别方法,可包括:采集手指静脉图 像;
采用直线拟合方式从所述手指静脉图像中提取感兴趣区域;
对所述感兴趣区域进行几何归一化和灰度归一化处理得到处理后区域;
从所述处理后区域中确定出手指静脉血管线,得到手指静脉血管图;
根据所述手指静脉血管图进行手指静脉识别。
本发明第二方面提供了一种手指静脉识别装置,可包括:
采集模块,用于采集手指静脉图像;
区域提取模块,用于采用直线拟合方式从所述手指静脉图像中提取感兴趣区域;
图像处理模块,用于对所述感兴趣区域进行几何归一化和灰度归一化处理得到处理后区域,从所述处理后区域中确定出手指静脉血管线,得到手指静脉血管图;
识别模块,用于根据所述手指静脉血管图进行手指静脉识别。
从以上技术方案可以看出,在本发明实施例中,先采集手指静脉图像,采用直线拟合方式从手指静脉图像中提取感兴趣区域,然后对感兴趣区域进行几何归一化和灰度归一化处理,得到处理后区域,再从处理后区域中确定出手指静脉血管线,该手指静脉血管线为手指静脉识别特征,从而得到手指静脉血管图,则可以根据手指静脉血管图进行手指静脉识别,本发明实施例能够有效提取手指静脉识别特征,以进行手指静脉识别。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1a为本发明实施例提供的原始静脉图像示意图;
图1b为本发明实施例提供的具体应用的手指静脉图像示意图;
图2为本发明实施例提供的手指静脉识别方法的流程示意图;
图3a为本发明实施例提供的确定感兴趣区域的流程示意图;
图3b为通过直线拟合方式确定上边沿拟合线和下边沿拟合线的流程示意 图;
图3c为本发明实施例提供的手指静脉图像划分为6个子区域示意图;
图3d为本发明实施例提供的一个子区域在y轴上积分的边沿响应图;
图3e为本发明另一实施例提供的两个子区域在y轴上积分的边沿响应图;
图3f为本发明实施例提供的拟合直线示意图;
图3g为拟合直线的具体应用示意图;
图4a为本发明实施例提供的区域正规化方法流程示意图;
图4b为本发明另一实施例提供的区域正规化方法流程示意图;
图4c为本发明另一实施例提供的区域正规化方法流程示意图;
图4d为本发明一些实施例提供的正规化区域的应用示意图;
图5a为本发明一些实施例提供的几何归一化的流程示意图;
图5b为本发明一些实施例提供的手指投影示意图;
图6a为本发明一些实施例提供的灰度归一化的流程示意图;
图6b为本发明一些实施例提供的手指静脉血管图的示意图;
图7a为本发明一些实施例提供的手指静脉血管线的确定方法的流程示意图;
图7b为本发明一些实施例提供的手指静脉血管图的示意图;
图8为本发明一些实施例提供的去噪处理方法的流程示意图;
图9为本发明实施例提供的一种手指静脉识别装置的结构示意图;
图10为本发明实施例提供的一种手指静脉识别装置的结构示意图。
具体实施方式
下面将结合本发明实施例的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明实施例提供了一种手指静脉识别方法,用于有效提取手指静脉识别特征,以进行手指静脉识别。本发明实施例还相应提供了一种手指静脉识别装置。
首先,对手指静脉识别进行简单介绍。
手指静脉成像的理论和依据:由于静脉比动脉靠近皮肤,当近红外线照射手指时,比较容易读取到静脉信息,而且静脉血管分布数量多,曲线和分枝也相当的复杂,每个人的差别十分明显,其关键在于流动静脉血液中红细胞所携带的失脱氧份的血红蛋白对红外线有较强的吸收率,导致静脉部分的透射较少,在影像上就会产生静脉图像。
基于上述理论和依据,通过开发相应的采集系统进行手指静脉图像采集,将手指放置在采集位置,通过红外光照射手指,然后在图像传感器上显示出原始静脉图像,图像传感器中的原始静脉图像存入内存,然后再从内存中读出原始静脉图像进行处理。
可以理解,如图1a和1b所示,其中,图1a为本发明实施例提供的原始静脉图像示意图,图1b为本发明实施例提供的具体应用的手指静脉图像示意图。在图像传感器中除了能够成像的区域,还包括边框区域,该成像的区域即是本发明实施例提供的手指静脉图像,因此,存入内存的原始静脉图像包括了手指静脉图像和边框区域。
基于上述介绍,下面将以具体实施例,详细介绍本发明技术方案。
请参阅图2,图2为本发明实施例提供的手指静脉识别方法的流程示意图;如图2所示,一种手指静脉识别方法可包括:
201、采集手指静脉图像;
继续参阅图1,本发明实施例提供的手指静脉图像不包括图1中的边框区域。
202、采用直线拟合方式从所述手指静脉图像中提取感兴趣区域;
其中,感兴趣区域是指手指静脉图像中需要处理的区域,在本发明实施例是指被关注的手指静脉。
在本发明实施例中采用直线拟合的方式从手指静脉图像中提取出感兴趣区域。直线拟合方式即在手指静脉图像中分别确定感兴趣区域的四条边沿线,然后由四条边沿线确定出感兴趣区域。
203、对所述感兴趣区域进行几何归一化和灰度归一化处理得到处理后区域;
在经过步骤202确定出来的感兴趣区域可能不是一个规则的几何图像,为 了后续处理的方便性,将感兴趣区域进行几何归一化,然后再进行灰度归一化,以得到处理后区域。
204、从所述处理后区域中确定出手指静脉血管线,得到手指静脉血管图;
205、根据所述手指静脉血管图进行手指静脉识别。
可以看出,先采集手指静脉图像,采用直线拟合方式从手指静脉图像中提取感兴趣区域,然后对感兴趣区域进行几何归一化和灰度归一化处理,得到处理后区域,再从处理后区域中确定出手指静脉血管线,该手指静脉血管线为手指识别特征,从而得到手指静脉血管图,则可以根据手指静脉血管图进行手指静脉识别,本发明实施例能够有效提取手指静脉识别特征,以进行手指静脉识别,整个过程中涉及的算法都较为简单,运算量小,处理速度快。
可以理解,根据需要,在得到手指静脉血管图之后,对手指静脉血管图进行去噪处理,以有效抑制噪点产生。
需要说明,由于手指宽度一般不大于图像传感器中成像的区域,而手指长度不小于图像传感器中成像的区域,因此,在本发明实施例中只需要确定出感兴趣区域的上边沿拟合线和下边沿拟合线,然后结合手指静脉图像的左边沿线和右边沿线确定出感兴趣区域。
因此,请参阅图3a,上述步骤202具体包括如下步骤:
A1、采用直线拟合方式确定出上边沿拟合线和下边沿拟合线;
A2、根据所述感兴趣区域的左边沿线和右边沿线、所述上边沿拟合线和下边沿拟合线确定出所述感兴趣区域。
本发明一些实施例提供通过直线拟合方式确定上边沿拟合线和下边沿拟合线的方法,如图3b所示,上述采用直线拟合方式确定出上边沿拟合线和下边沿拟合线包括:
B1、用中间线将所述手指静脉图像平均划分成上下两部分,并在x轴上将所述手指静脉图像划分成至少2个子区域;
B2、对所述子区域在y轴上积分,分别得到上边沿拟合点和下边沿线拟合点,对所述上边沿拟合点进行拟合得到所述上边沿拟合线,对所述下边沿拟合点进行拟合得到所述下边沿拟合线。
可以理解,感兴趣区域完全落在手指静脉图像范围内,通过中间线将手指 静脉图像平均划分成上下两部分,则感兴趣区域的上边沿拟合线在上部分,下边沿拟合线在下部分,而且感兴趣区域内部的像素点灰度值不小于手指静脉图像中除去且感兴趣区域的其余范围的像素点的灰度值。
举例来说,请结合图3c~3e,中间线将手指静脉图像划分成两部分,然后在x轴上将手指静脉图像划分成至少2个子区域(图3c中以6个子区域为例)。分别为子区域x0~x6,其中,对每一个子区域在y轴上积分,积分后得到的是一个数组,每一个数组在图像显示上形成一条线,然后通过公式Y[j+2]+Y[j+1]-Y[j]-Y[j-1]计算上半部分的积分值差异,积分差值的最大值处代表静脉的上边界,以此处为y坐标,横向分段的中心为x坐标值,分别得到6个上边沿点,采用以下公式(1)对6个上边沿点进行直线拟合,同样,通过Y[j]+Y[j-1]-Y[j+2]-Y[j+1]计算下半部分的积分值差异,得到6个下边沿点,采用公式(1)对6个下边沿点进行直线拟合。
其中,像素点(xi,yj)的最小二乘法拟合直线方程为:
y=a+bx    (1)
其中,
b = (n·Σxiyi − Σxi·Σyi) / (n·Σxi² − (Σxi)²),  a = (Σyi − b·Σxi) / n    (2)
举例来说,如图3d所示,对6个子区域中的一个子区域(图3d中间所示)在y轴上积分,左边为该子区域在y轴上积分的边沿响应图,在上述介绍,积分得到的是y的一个数组,因此,在图像上表示则如左边的白色部分所示。如图3e所示,对6个子区域中的2个子区域(图3e中间所示)在y轴上积分,左边为该2个子区域在y轴上积分的边沿响应图。依次类推,分别在上边沿选取6个最高点进行直线拟合得到上述上边沿拟合线,在下边沿选取6个最低点进行直线拟合得到下边沿拟合线。
为了增强拟合健壮性,可以通过计算拟合的相关性,若选取的6个点之间的相关性太差,则重新进行拟合。
其中,可以通过下列公式(3)计算6个点之间的相关性的值:
r = (n·Σxiyi − Σxi·Σyi) / √[(n·Σxi² − (Σxi)²)·(n·Σyi² − (Σyi)²)]    (公式3)
请参阅图3f,图3f为采用上述方式拟合出的上边沿拟合线和下边沿拟合 线对手指静脉图像进行分割处理,然后结合手指静脉图像的左边沿线和右边沿线,得到感兴趣区域。
请参阅图3g,图3g在图1b基础上,通过上述方式拟合出上边沿拟合线和下边沿拟合线,然后用上边沿拟合线和下边沿拟合线分割感兴趣区域的示意图。
在本发明一些实施例中,上述步骤203具体包括:通过仿射变换对所述感兴趣区域进行区域正规化得到正规化区域;通过椭圆变换对所述正规化区域进行几何归一化得到几何化区域;对所述几何化区域进行灰度归一化处理得到所述处理后区域。
其中,在本发明一些实施例中,如图4a所示,上述通过仿射变换对所述感兴趣区域进行区域正规化得到正规化区域包括:
401、确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的左边沿线的左上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的左边沿线的左下相交点,以及确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的右边沿线的右上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的右边沿线的右下相交点;
402、根据所述左上相交点、右上相交点、左下相交点和右下相交点,通过仿射变换对所述感兴趣区域进行正规化得到所述正规化区域。
其中,上述确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的左边沿线的左上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的左边沿线的左下相交点包括:
确定出所述手指静脉图像的左边沿线上的第一起点,根据所述第一起点确定所述左上相交点和所述左下相交点,所述第一起点到所述感兴趣区域的上边沿拟合线的距离与所述第一起点到所述感兴趣区域的下边沿拟合线的距离相等;
上述确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的右边沿线的右上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的右边沿线的右下相交点包括:
确定出所述手指静脉图像的右边沿线上的第一终点,根据所述第一终点确 定所述右上相交点和所述右下相交点,所述第一终点到所述感兴趣区域的上边沿拟合线的距离与所述第一终点到所述感兴趣区域的下边沿拟合线的距离相等。
在本发明实施例中,基于在手指静脉图像的左边沿线上找出第一起点,第一起点到上边沿拟合线的距离与第一起点到下边沿拟合线的距离相等,并采用点(x1,y1)到直线y=ax+b的距离计算公式计算:
d = |a·x1 − y1 + b| / √(a² + 1)
在本发明一些实施例中,如图4b所示,上述步骤C2具体包括:
411、根据所述第一起点与所述第一终点确定所述感兴趣区域的中线;
412、从所述中线的所述第一起点开始到所述中线的所述第一终点结束,逐个确定出所述中线上的中线相交点,并确定出经过所述中线相交点与所述中线垂直的直线,将所述中线相交点与所述感兴趣区域的上边沿拟合线之间的线段正规化到所述直线上,以及将所述中线相交点与所述感兴趣区域的下边沿线之间的线段正规化到所述直线上得到所述正规化区域。
请参阅图4c和4d,在4c中,未进行正规化处理前的左上相交点、右上相交点、左下相交点、右下相交点和进行正规化处理后的左上相交点、右上相交点、左下相交点、右下相交点的对应关系。正规化处理后的得到一个矩形的正规化区域。
如图4d所示,是对附图3f进行正规化处理后的正规化区域。
如图5a所示,上述通过椭圆变换对所述正规化区域进行几何归一化得到几何化区域具体包括:
501、计算所述正规化区域中每一个像素点进行几何变换后的坐标点;
502、根据所述坐标点对所述正规化区域进行几何变换得到所述几何化区域。
需要说明,手指是呈椭圆形的,在椭圆形的手指投影到平面时,手指轮廓边沿的图像解析度会较低,而手指轮廓中心的图像解析度会较高,因此,需要对其进行椭圆变换以进行矫正。
请参阅图5b,图5b为本发明实施例提供的手指投影示意图;在图5b中,手指椭圆半径为r,手指表面的成像位置从一侧均匀移动到另一侧时,手指中 心点到该点的连线与水平方向的夹角a均匀地从0增加到180度,而该点在y轴上的投影y’从0增加到height,使得其手指轮廓边沿的图像解析度会较低,而手指轮廓中心的图像解析度会较高,图像解析度不均匀。
根据图5b,推导出矫正公式(4),如下:
r = height / 2
y′ = r − r·cos(a)
        (公式4)
其中,height对应的是某一个像素点在矫正后的几何化区域中的高度。可以理解,在几何化正规化区域时,先确定出几何化区域的空白图像,该空白图像已经确定好了高度和宽度,然后对于正规化区域中的任意一个像素点,对应到空白图像中,假设某一个像素点对应到空白图像中其高度为height,并且根据其高度变化得到角度a,则通过上述公式计算得到r,然后根据r和a计算得到该像素点的y坐标。
其中,几何化区域的分辨率为180*180。
如图6a所示,在本发明一些实施例中,上述对所述几何化区域进行灰度归一化处理得到所述处理后区域具体包括:
601、依次以所述几何化区域中的像素点作为第一中心像素点,以所述第一中心像素点为正方形中点,标记边长为d的正方形;
602、求取所述正方形内所有像素点的灰度值的平均值和方差;
603、根据所述平均值和方差更改所述第一中心像素点的灰度值,得到所述处理后区域。
其中,计算正方形内所有像素点的灰度值的平均值为m,方差为V,那么得到第一中心像素点的灰度值的变换公式(5),如下:
gray=(gray-m)*v0/v+m0    (公式5)
其中,v0为第一中心像素点原来的方差,m0为第一中心像素点原来的灰度值。
请参阅图6b,图6b为本发明实施例提供的手指静脉血管图的示意图;图6b在图4c基础上,对正规化区域进行几何化处理,然后再进行灰度化处理后的处理后区域。
请参阅图7a,图7a为本发明实施例提供的手指静脉血管线的确定方法的 流程示意图;如图7a所示,上述从所述处理后区域中确定出手指静脉血管线包括:
701、依次以所述处理后区域的像素点作为第二中心像素点,以所述第二中心像素点为圆心,标记半径为r的圆;
702、在所述圆内,查询灰度值不大于所述第二中心像素点的灰度值的像素点的总数;
703、当确定所述总数满足预设条件时,确定所述第二中心像素点属于手指静脉血管线,将所述第二中心像素点的灰度值设置为第一灰度值;当确定所述总数不满足预设条件时,确定所述第二中心像素点不属于手指静脉血管线,将所述第二中心像素点的灰度值设置为第二灰度值。
其中,当所述第一灰度值为255时,所述第二灰度值为0;当所述第一灰度值为0时,所述第二灰度值为255。
其中,灰度值为255时,表示该像素点的颜色为白色,灰度值为0时,表示该像素点的颜色为黑色。
其中,对处理后区域中的每一个像素点进行以下操作:将像素点作为第二中心像素点,然后以该第二中心像素点作为圆心,在处理后区域中标记出半径为r的圆。定义一个变量L,初始化变量L为0,然后计算圆内除去第二中心像素点的其它像素点的灰度值,若计算到其它像素点中的一个像素点的灰度值不大于第二中心像素点的灰度值,则L值加1,重复上述操作,直接计算完圆内所有其它像素点的灰度值,然后判断L是否满足预设条件,具体地,可以取L值与圆内所有像素点(包括第二中心像素点)的比例,若比例不小于0.472则确定第二中心像素点属于静脉血管线。
若第二中心像素点属于静脉血管线,则将第二中心像素点的灰度值设置为第一灰度值,若不属于静脉血管线,则将第二中心像素点的灰度值设置为第二灰度值。
作为一个可选方式,在第二中心像素点属于静脉血管线,将第二中心像素点的灰度值设置为255(白色),在第二中心像素点不属于静脉血管线,将第二中心像素点的灰度值设置为0(黑色)。
请参阅图7b,图7b为本发明一些实施例提供的手指静脉血管图的示意图; 图7b是在图6b基础上提取静脉血管线而得到的手指静脉血管图。
请参阅图8,图8为本发明一些实施例提供的去噪处理方法的流程示意图;如图8所示,本发明实施例提供的对手指静脉血管图进行去噪处理包括:
801、依次确定所述手指静脉血管图中的像素点的灰度值,以及所述像素点的8个相邻像素点的灰度值;
可以理解,手指静脉血管图中的像素点呈矩阵排列,则像素点具有8个相邻位置上的像素点。
802、根据所述像素点的灰度值,以及所述像素点的相邻8个像素点的灰度值,判断是否需要将所述像素点的灰度值更改为预设条件对应的灰度值;
803、若是,则将所述像素点的灰度值更改为预设条件对应的灰度值,所述预设条件对应的灰度值包括第一灰度值和第二灰度值,其中,当所述第一灰度值为255时,所述第二灰度值为0;当所述第一灰度值为0时,所述第二灰度值为255。
其中,上述预设条件具体为:若所述像素点的灰度值为所述第一灰度值,且所述像素点的左边像素点和右边像素点的灰度值均为所述第二灰度值,则将所述像素点的灰度值更改为所述第二灰度值;或者,若所述像素点的灰度值为所述第一灰度值,且所述像素点的相邻8个像素点中灰度值为所述第一灰度值的像素点的总数不大于5时,则将所述像素点的灰度值更改为所述第二灰度值。
举例来说,假设手指静脉血管图中将属于静脉血管线的像素点设置为白色,不属于静脉血管线的像素点设置为黑色,那么上述预设条件对应如下4种情况:
若一个白色的像素点的左边像素点和右边像素点都是黑色,那么将该像素点更改为黑色;
若一个黑色的像素点的左边像素点和右边像素点都是白色,那么将该像素点更改为白色;
若一个白色的像素点的8个相邻像素点中少于5个像素点是白色,那么将该像素点更改为黑色;
若一个黑色的像素点的8个相邻像素点中少于5个像素点是黑色,那么将 该像素点更改为白色。
在本发明实施例中,经过上述方式得到的手指静脉血管图,可以作为手指静脉模板保存到数据库里。然后进行手指静脉识别时,通过上述方式再获取到被识别的手指静脉血管图,将手指静脉模板和被识别的手指静脉血管图进行匹配识别。或者是数据库里保存有手指静脉模板,然后将得到的手指静脉血管图和数据库里的手指静脉模板进行匹配识别。
不管是上述哪种情况,本发明实施例提供的根据所述手指静脉血管图进行手指静脉识别包括:统计所述手指静脉血管图与另一手指静脉血管图在同一个坐标位置上同属于静脉血管线的像素点的总数Wab,以及计算所述手指静脉血管图中属于静脉血管线的像素点的总数Wa和所述另一手指静脉血管图中属于静脉血管线的像素点的总数Wb;根据所述总数Wab、总数Wa和总数Wb计算匹配率;判断所述匹配率是否满足预设条件,若是,则识别出所述手指静脉血管图的手指静脉与另一手指静脉血管图的手指静脉匹配。
举例来说,有手指静脉血管图a和b,先将手指静脉血管图a和b恢复成64*64大小的黑白二值图像,则匹配方法为:
假设处理过程中,将属于静脉血管线的像素点的灰度值更改为255(即白色),那么计算手指静脉血管图a中的灰度值为255的像素点的总数Wa,同样,计算手指静脉血管图b中的灰度值为255的像素点的总数Wb,再计算手指静脉血管图a和b中灰度值均为255的像素点的总数Wab,则可以通过以下公式(6)进行匹配识别:
S = 2 × Wab / (Wa + Wb)    (公式6)
其中,S为匹配率,若匹配率不小于某一个值时,则可以认为手指静脉血管图a和b匹配。
请参阅图9,图9为本发明实施例提供的一种手指静脉识别装置的结构示意图;如图9所示,一种手指静脉识别装置,可包括:
采集模块910,用于采集手指静脉图像;
区域提取模块920,用于采用直线拟合方式从所述手指静脉图像中提取感兴趣区域;
图像处理模块930,用于对所述感兴趣区域进行几何归一化和灰度归一化处理得到处理后区域,从所述处理后区域中确定出手指静脉血管线,得到手指静脉血管图;
识别模块940,用于根据所述手指静脉血管图进行手指静脉识别。
在本发明实施例中,采集模块910采集手指静脉图像,区域提取模块920采用直线拟合方式从手指静脉图像中提取感兴趣区域,然后图像处理模块930对区域提取模块920得到的感兴趣区域进行几何归一化和灰度归一化处理,得到处理后区域,再从处理后区域中确定出手指静脉血管线,该手指静脉血管线为手指静脉识别特征,从而得到手指静脉血管图,识别模块940则可以根据手指静脉血管图进行手指静脉识别,本发明实施例能够有效提取手指静脉识别特征,以进行手指静脉识别。
在本发明一些实施例中,本发明实施例提供的手指静脉识别装置还包括如图10所示的噪声处理模块1010:用于在识别模块940根据手指静脉血管图进行手指静脉识别之前,对上述图像处理模块930得到的手指静脉血管图进行去噪处理。
在本发明一些实施例中,上述区域提取模块920具体用于,采用直线拟合方式确定出上边沿拟合线和下边沿拟合线;根据所述感兴趣区域的左边沿线和右边沿线、所述上边沿拟合线和下边沿拟合线确定出所述感兴趣区域。
在本发明一些实施例中,上述区域提取模块920具体用于,用中间线将所述手指静脉图像平均划分成上下两部分,并在x轴上将所述手指静脉图像划分成至少2个子区域;对所述子区域在y轴上积分,分别得到上边沿拟合点和下边沿线拟合点,对所述上边沿拟合点进行拟合得到所述上边沿拟合线,对所述下边沿拟合点进行拟合得到所述下边沿拟合线。
在本发明一些实施例中,上述图像处理模块930具体用于,通过仿射变换对所述感兴趣区域进行区域正规化得到正规化区域;通过椭圆变换对所述正规化区域进行几何归一化得到几何化区域;对所述几何化区域进行灰度归一化处理得到所述处理后区域。
在本发明一些实施例中,上述图像处理模块930具体用于,确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的左边沿线的左上相交点、确定所 述感兴趣区域的下边沿拟合线与所述手指静脉图像的左边沿线的左下相交点,以及确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的右边沿线的右上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的右边沿线的右下相交点;根据所述左上相交点、右上相交点、左下相交点和右下相交点,通过仿射变换对所述感兴趣区域进行正规化得到所述正规化区域。
在本发明一些实施例中,上述图像处理模块930具体用于,确定出所述手指静脉图像的左边沿线上的第一起点,根据所述第一起点确定所述左上相交点和所述左下相交点,所述第一起点到所述感兴趣区域的上边沿拟合线的距离与所述第一起点到所述感兴趣区域的下边沿拟合线的距离相等;和,确定出所述手指静脉图像的右边沿线上的第一终点,根据所述第一终点确定所述右上相交点和所述右下相交点,所述第一终点到所述感兴趣区域的上边沿拟合线的距离与所述第一终点到所述感兴趣区域的下边沿拟合线的距离相等。
在本发明一些实施例中,上述图像处理模块930具体用于,根据所述第一起点与所述第一终点确定所述感兴趣区域的中线;从所述中线的所述第一起点开始到所述中线的所述第一终点结束,逐个确定出所述中线上的中线相交点,并确定出经过所述中线相交点与所述中线垂直的直线,将所述中线相交点与所述感兴趣区域的上边沿拟合线之间的线段正规化到所述直线上,以及将所述中线相交点与所述感兴趣区域的下边沿线之间的线段正规化到所述直线上得到所述正规化区域。
在本发明一些实施例中,上述图像处理模块930具体还用于,计算所述正规化区域中每一个像素点进行几何变换后的坐标点;根据所述坐标点对所述正规化区域进行几何变换得到所述几何化区域。
在本发明一些实施例中,上述图像处理模块930具体还用于,依次以所述几何化区域中的像素点作为第一中心像素点,以所述第一中心像素点为正方形中点,标记边长为d的正方形;求取所述正方形内所有像素点的灰度值的平均值和方差;根据所述平均值和方差更改所述第一中心像素点的灰度值,得到所述处理后区域。
在本发明一些实施例中,图像处理模块930具体还用于,依次以所述处理后区域的像素点作为第二中心像素点,以所述第二中心像素点为圆心,标记半 径为r的圆;在所述圆内,查询灰度值不大于所述第二中心像素点的灰度值的像素点的总数;当确定所述总数满足预设条件时,确定所述第二中心像素点属于手指静脉血管线,将所述第二中心像素点的灰度值设置为第一灰度值;当确定所述总数不满足预设条件时,确定所述第二中心像素点不属于手指静脉血管线,将所述第二中心像素点的灰度值设置为第二灰度值;其中,当所述第一灰度值为255时,所述第二灰度值为0;当所述第一灰度值为0时,所述第二灰度值为255。
在本发明一些实施例中,上述噪声处理模块1010具体用于,依次确定所述手指静脉血管图中的像素点的灰度值,以及所述像素点的8个相邻像素点的灰度值;根据所述像素点的灰度值,以及所述像素点的相邻8个像素点的灰度值,判断是否需要将所述像素点的灰度值更改为预设条件对应的灰度值;若是,则将所述像素点的灰度值更改为预设条件对应的灰度值,所述预设条件对应的灰度值包括第一灰度值和第二灰度值,其中,当所述第一灰度值为255时,所述第二灰度值为0;当所述第一灰度值为0时,所述第二灰度值为255。
其中,所述预设条件为若所述像素点的灰度值为所述第一灰度值,且所述像素点的左边像素点和右边像素点的灰度值均为所述第二灰度值,则将所述像素点的灰度值更改为所述第二灰度值;或者,若所述像素点的灰度值为所述第一灰度值,且所述像素点的相邻8个像素点中灰度值为所述第一灰度值的像素点的总数不大于5时,则将所述像素点的灰度值更改为所述第二灰度值。
在本发明一些实施例中,识别模块940具体用于,统计所述手指静脉血管图与另一手指静脉血管图在同一个坐标位置上同属于静脉血管线的像素点的总数Wab,以及计算所述手指静脉血管图中属于静脉血管线的像素点的总数Wa和所述另一手指静脉血管图中属于静脉血管线的像素点的总数Wb;根据所述总数Wab、总数Wa和总数Wb计算匹配率;判断所述匹配率是否满足预设条件,若是,则识别出所述手指静脉血管图的手指静脉与另一手指静脉血管图的手指静脉匹配。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述 的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上对本发明所提供的一种手指静脉识别方法及装置进行了详细介绍,对于本领域的一般技术人员,依据本发明实施例的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (28)

  1. 一种手指静脉识别方法,其特征在于,包括:
    采集手指静脉图像;
    采用直线拟合方式从所述手指静脉图像中提取感兴趣区域;
    对所述感兴趣区域进行几何归一化和灰度归一化处理得到处理后区域;
    从所述处理后区域中确定出手指静脉血管线,得到手指静脉血管图;
    根据所述手指静脉血管图进行手指静脉识别。
  2. 根据权利要求1所述的方法,其特征在于,所述从所述处理后区域中确定出手指静脉血管线,得到手指静脉血管图之后包括:
    对所述手指静脉血管图进行去噪处理。
  3. 根据权利要求1或2所述的方法,其特征在于,所述采用直线拟合方式从所述手指静脉图像中提取感兴趣区域包括:
    采用直线拟合方式确定出上边沿拟合线和下边沿拟合线;
    根据所述感兴趣区域的左边沿线和右边沿线、所述上边沿拟合线和下边沿拟合线确定出所述感兴趣区域。
  4. 根据权利要求3所述的方法,其特征在于,所述采用直线拟合方式确定出上边沿拟合线和下边沿拟合线包括:
    用中间线将所述手指静脉图像平均划分成上下两部分,并在x轴上将所述手指静脉图像划分成至少2个子区域;
    对所述子区域在y轴上积分,分别得到上边沿拟合点和下边沿线拟合点,对所述上边沿拟合点进行拟合得到所述上边沿拟合线,对所述下边沿拟合点进行拟合得到所述下边沿拟合线。
  5. 根据权利要求3所述的方法,其特征在于,所述对所述感兴趣区域进行几何归一化和灰度归一化处理得到处理后区域包括:
    通过仿射变换对所述感兴趣区域进行区域正规化得到正规化区域;
    通过椭圆变换对所述正规化区域进行几何归一化得到几何化区域;
    对所述几何化区域进行灰度归一化处理得到所述处理后区域。
  6. 根据权利要求5所述的方法,其特征在于,所述通过仿射变换对所述感兴趣区域进行区域正规化得到正规化区域包括:
    确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的左边沿线的左上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的左边沿线的左下相交点,以及确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的右边沿线的右上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的右边沿线的右下相交点;
    根据所述左上相交点、右上相交点、左下相交点和右下相交点,通过仿射变换对所述感兴趣区域进行正规化得到所述正规化区域。
  7. 根据权利要求6所述的方法,其特征在于,
    所述确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的左边沿线的左上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的左边沿线的左下相交点包括:
    确定出所述手指静脉图像的左边沿线上的第一起点,根据所述第一起点确定所述左上相交点和所述左下相交点,所述第一起点到所述感兴趣区域的上边沿拟合线的距离与所述第一起点到所述感兴趣区域的下边沿拟合线的距离相等;
    所述确定所述感兴趣区域的上边沿拟合线与所述手指静脉图像的右边沿线的右上相交点、确定所述感兴趣区域的下边沿拟合线与所述手指静脉图像的右边沿线的右下相交点包括:
    确定出所述手指静脉图像的右边沿线上的第一终点,根据所述第一终点确定所述右上相交点和所述右下相交点,所述第一终点到所述感兴趣区域的上边沿拟合线的距离与所述第一终点到所述感兴趣区域的下边沿拟合线的距离相等。
  8. 根据权利要求7所述的方法,其特征在于,所述根据所述左上相交点、右上相交点、左下相交点和右下相交点,通过仿射变换对所述感兴趣区域进行正规化得到所述正规化区域包括:
    根据所述第一起点与所述第一终点确定所述感兴趣区域的中线;
    从所述中线的所述第一起点开始到所述中线的所述第一终点结束,逐个确定出所述中线上的中线相交点,并确定出经过所述中线相交点与所述中线垂直的直线,将所述中线相交点与所述感兴趣区域的上边沿拟合线之间的线段正规 化到所述直线上,以及将所述中线相交点与所述感兴趣区域的下边沿线之间的线段正规化到所述直线上得到所述正规化区域。
  9. 根据权利要求5~8任一项所述的方法,其特征在于,所述通过椭圆变换对所述正规化区域进行几何归一化得到几何化区域包括:
    计算所述正规化区域中每一个像素点进行几何变换后的坐标点;
    根据所述坐标点对所述正规化区域进行几何变换得到所述几何化区域。
  10. 根据权利要求5~8任一项所述的方法,其特征在于,所述对所述几何化区域进行灰度归一化处理得到所述处理后区域包括:
    依次以所述几何化区域中的像素点作为第一中心像素点,以所述第一中心像素点为正方形中点,标记边长为d的正方形;
    求取所述正方形内所有像素点的灰度值的平均值和方差;
    根据所述平均值和方差更改所述第一中心像素点的灰度值,得到所述处理后区域。
  11. 根据权利要求1或2所述的方法,其特征在于,所述从所述处理后区域中确定出手指静脉血管线包括:
    依次以所述处理后区域的像素点作为第二中心像素点,以所述第二中心像素点为圆心,标记半径为r的圆;
    在所述圆内,查询灰度值不大于所述第二中心像素点的灰度值的像素点的总数;
    当确定所述总数满足预设条件时,确定所述第二中心像素点属于手指静脉血管线,将所述第二中心像素点的灰度值设置为第一灰度值;当确定所述总数不满足预设条件时,确定所述第二中心像素点不属于手指静脉血管线,将所述第二中心像素点的灰度值设置为第二灰度值;其中,当所述第一灰度值为255时,所述第二灰度值为0;当所述第一灰度值为0时,所述第二灰度值为255。
  12. 根据权利要求2所述的方法,其特征在于,所述对所述手指静脉血管图进行去噪处理包括:
    依次确定所述手指静脉血管图中的像素点的灰度值,以及所述像素点的8个相邻像素点的灰度值;
    根据所述像素点的灰度值,以及所述像素点的相邻8个像素点的灰度值, 判断是否需要将所述像素点的灰度值更改为预设条件对应的灰度值;
    若是,则将所述像素点的灰度值更改为预设条件对应的灰度值,所述预设条件对应的灰度值包括第一灰度值和第二灰度值,其中,当所述第一灰度值为255时,所述第二灰度值为0;当所述第一灰度值为0时,所述第二灰度值为255。
  13. 根据权利要求12所述的方法,其特征在于,所述预设条件为:
    若所述像素点的灰度值为所述第一灰度值,且所述像素点的左边像素点和右边像素点的灰度值均为所述第二灰度值,则将所述像素点的灰度值更改为所述第二灰度值;或者,
    若所述像素点的灰度值为所述第一灰度值,且所述像素点的相邻8个像素点中灰度值为所述第一灰度值的像素点的总数不大于5时,则将所述像素点的灰度值更改为所述第二灰度值。
  14. 根据权利要求1所述的方法,其特征在于,所述根据所述手指静脉血管图进行手指静脉识别包括:
    统计所述手指静脉血管图与另一手指静脉血管图在同一个坐标位置上同属于静脉血管线的像素点的总数Wab,以及计算所述手指静脉血管图中属于静脉血管线的像素点的总数Wa和所述另一手指静脉血管图中属于静脉血管线的像素点的总数Wb;
    根据所述总数Wab、总数Wa和总数Wb计算匹配率;
    判断所述匹配率是否满足预设条件,若是,则识别出所述手指静脉血管图的手指静脉与另一手指静脉血管图的手指静脉匹配。
  15. 一种手指静脉识别装置,其特征在于,包括:
    采集模块,用于采集手指静脉图像;
    区域提取模块,用于采用直线拟合方式从所述手指静脉图像中提取感兴趣区域;
    图像处理模块,用于对所述感兴趣区域进行几何归一化和灰度归一化处理得到处理后区域,从所述处理后区域中确定出手指静脉血管线,得到手指静脉血管图;
    识别模块,用于根据所述手指静脉血管图进行手指静脉识别。
  16. 根据权利要求15所述的装置,其特征在于,所述装置还包括:
    噪声处理模块,用于对所述手指静脉血管图进行去噪处理。
  17. 根据权利要求15或16所述的装置,其特征在于,
    所述区域提取模块具体用于,采用直线拟合方式确定出上边沿拟合线和下边沿拟合线;根据所述感兴趣区域的左边沿线和右边沿线、所述上边沿拟合线和下边沿拟合线确定出所述感兴趣区域。
  18. The device according to claim 17, wherein
    the region extraction module is specifically configured to divide the finger vein image evenly into an upper part and a lower part by a middle line, divide the finger vein image into at least two sub-regions along the x-axis, integrate each sub-region along the y-axis to obtain upper-edge fitting points and lower-edge fitting points respectively, fit the upper-edge fitting points to obtain the upper-edge fitting line, and fit the lower-edge fitting points to obtain the lower-edge fitting line.
  19. The device according to claim 17, wherein
    the image processing module is specifically configured to perform region regularization on the region of interest through an affine transform to obtain a regularized region, perform geometric normalization on the regularized region through an ellipse transform to obtain a geometrized region, and perform gray-scale normalization on the geometrized region to obtain the processed region.
  20. The device according to claim 19, wherein
    the image processing module is specifically configured to determine an upper-left intersection point of the upper-edge fitting line of the region of interest with the left edge line of the finger vein image, a lower-left intersection point of the lower-edge fitting line of the region of interest with the left edge line of the finger vein image, an upper-right intersection point of the upper-edge fitting line of the region of interest with the right edge line of the finger vein image, and a lower-right intersection point of the lower-edge fitting line of the region of interest with the right edge line of the finger vein image, and to regularize the region of interest through the affine transform according to the upper-left, upper-right, lower-left and lower-right intersection points to obtain the regularized region.
  21. The device according to claim 20, wherein
    the image processing module is specifically configured to determine a first start point on the left edge line of the finger vein image and determine the upper-left intersection point and the lower-left intersection point according to the first start point, the distance from the first start point to the upper-edge fitting line of the region of interest being equal to the distance from the first start point to the lower-edge fitting line of the region of interest; and to determine a first end point on the right edge line of the finger vein image and determine the upper-right intersection point and the lower-right intersection point according to the first end point, the distance from the first end point to the upper-edge fitting line of the region of interest being equal to the distance from the first end point to the lower-edge fitting line of the region of interest.
  22. The device according to claim 21, wherein
    the image processing module is specifically configured to determine a center line of the region of interest according to the first start point and the first end point, and, starting from the first start point of the center line and ending at the first end point of the center line, determine the center-line intersection points on the center line one by one, determine a straight line that passes through each center-line intersection point and is perpendicular to the center line, regularize the line segment between the center-line intersection point and the upper-edge fitting line of the region of interest onto the straight line, and regularize the line segment between the center-line intersection point and the lower edge line of the region of interest onto the straight line, to obtain the regularized region.
  23. The device according to any one of claims 19 to 22, wherein
    the image processing module is further configured to calculate, for each pixel in the regularized region, the coordinate point after geometric transformation, and to perform the geometric transformation on the regularized region according to the coordinate points to obtain the geometrized region.
  24. The device according to any one of claims 19 to 22, wherein
    the image processing module is further configured to take each pixel in the geometrized region in turn as a first center pixel, mark a square of side length d centered on the first center pixel, calculate the mean and the variance of the gray values of all pixels within the square, and change the gray value of the first center pixel according to the mean and the variance, to obtain the processed region.
  25. The device according to claim 15 or 16, wherein
    the image processing module is further configured to take each pixel of the processed region in turn as a second center pixel, mark a circle of radius r centered on the second center pixel, count within the circle the total number of pixels whose gray value is not greater than the gray value of the second center pixel, determine that the second center pixel belongs to a finger vein vessel line and set its gray value to a first gray value when the total number satisfies a preset condition, and determine that the second center pixel does not belong to a finger vein vessel line and set its gray value to a second gray value when the total number does not satisfy the preset condition, wherein when the first gray value is 255 the second gray value is 0, and when the first gray value is 0 the second gray value is 255.
  26. The device according to claim 16, wherein
    the noise processing module is specifically configured to determine, in turn, the gray value of each pixel in the finger vein vessel map and the gray values of its eight neighboring pixels; to judge, according to the gray value of the pixel and the gray values of its eight neighboring pixels, whether the gray value of the pixel needs to be changed to the gray value corresponding to a preset condition; and if so, to change the gray value of the pixel to the gray value corresponding to the preset condition, the gray value corresponding to the preset condition comprising a first gray value and a second gray value, wherein when the first gray value is 255 the second gray value is 0, and when the first gray value is 0 the second gray value is 255.
  27. The device according to claim 26, wherein the preset condition is: if the gray value of the pixel is the first gray value and the gray values of both the pixel to its left and the pixel to its right are the second gray value, the gray value of the pixel is changed to the second gray value; or, if the gray value of the pixel is the first gray value and, among its eight neighboring pixels, the total number of pixels whose gray value is the first gray value is not greater than 5, the gray value of the pixel is changed to the second gray value.
  28. The device according to claim 15, wherein
    the identification module is specifically configured to count the total number Wab of pixels that belong to vein vessel lines at the same coordinate position in both the finger vein vessel map and another finger vein vessel map, calculate the total number Wa of pixels belonging to vein vessel lines in the finger vein vessel map and the total number Wb of pixels belonging to vein vessel lines in the other finger vein vessel map, calculate a matching rate according to the totals Wab, Wa and Wb, judge whether the matching rate satisfies a preset condition, and if so, identify that the finger vein of the finger vein vessel map matches the finger vein of the other finger vein vessel map.
PCT/CN2015/091630 2015-10-10 2015-10-10 Finger vein identification method and device WO2017059591A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/767,176 US10762366B2 (en) 2015-10-10 2015-10-10 Finger vein identification method and device
PCT/CN2015/091630 WO2017059591A1 (zh) 2015-10-10 2015-10-10 Finger vein identification method and device
CN201580000591.5A CN105518716A (zh) 2015-10-10 2015-10-10 Finger vein identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/091630 WO2017059591A1 (zh) 2015-10-10 2015-10-10 Finger vein identification method and device

Publications (1)

Publication Number Publication Date
WO2017059591A1 true WO2017059591A1 (zh) 2017-04-13

Family

ID=55725030

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/091630 WO2017059591A1 (zh) 2015-10-10 2015-10-10 Finger vein identification method and device

Country Status (3)

Country Link
US (1) US10762366B2 (zh)
CN (1) CN105518716A (zh)
WO (1) WO2017059591A1 (zh)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017059591A1 (zh) * 2015-10-10 2017-04-13 厦门中控生物识别信息技术有限公司 Finger vein identification method and device
CN106250842A (zh) * 2016-07-28 2016-12-21 电子科技大学 Finger vein recognition technique based on vein-direction detection
KR20180091597A (ko) * 2017-02-07 2018-08-16 삼성전자주식회사 Electronic device for acquiring fingerprint information using ultrasonic signals
CN109523484B (zh) * 2018-11-16 2023-01-17 中国民航大学 Finger vein vascular network restoration method based on fractal features
GB2595129A (en) * 2019-01-30 2021-11-17 Buddi Ltd Identification device
CN110175500B (zh) * 2019-04-03 2024-01-19 平安科技(深圳)有限公司 Finger vein comparison method, apparatus, computer device and storage medium
CN110147769B (zh) * 2019-05-22 2023-11-07 成都艾希维智能科技有限公司 Finger vein image matching method
CN112183518B (zh) * 2020-09-25 2024-05-28 伏羲九针智能科技(北京)有限公司 Method, apparatus and device for automatically determining a vein target point
CN112036383B (zh) * 2020-11-04 2021-02-19 北京圣点云信息技术有限公司 Identity recognition method and device based on hand veins
CN112395981B (zh) * 2020-11-17 2023-08-18 华北电力大学扬中智能电气研究中心 Authentication method, apparatus, device and medium based on finger vein images
CN112784837B (zh) * 2021-01-26 2024-01-30 电子科技大学中山学院 Region-of-interest extraction method, apparatus, electronic device and storage medium
CN112949570B (zh) * 2021-03-26 2022-08-09 长春工业大学 Finger vein recognition method based on a residual attention mechanism
CN113128378B (zh) * 2021-04-06 2022-07-19 浙江精宏智能科技有限公司 Fast finger vein recognition method
CN113076921B (zh) * 2021-04-21 2022-11-18 华南理工大学 Multispectral texture synchronous mapping method for a three-dimensional finger biometric model
CN113076927B (zh) * 2021-04-25 2023-02-14 华南理工大学 Finger vein recognition method and system based on multi-source domain transfer
CN113361412B (zh) * 2021-06-08 2022-03-01 西南科技大学 Accurate finger vein image matching method with progressive ULBP and SURF features
CN116030065B (zh) * 2023-03-31 2024-06-14 云南琰搜电子科技有限公司 Road quality detection method based on image recognition
CN116798077B (zh) * 2023-08-18 2023-11-07 江苏圣点世纪科技有限公司 Palm photograph detection method
CN116884048B (zh) * 2023-09-08 2023-12-12 江苏圣点世纪科技有限公司 Abnormal vein image detection method based on edge morphology

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101002682A (zh) * 2007-01-19 2007-07-25 哈尔滨工程大学 Hand-dorsum vein feature extraction and matching method for identity recognition
US20080239335A1 (en) * 2007-04-02 2008-10-02 Samsung Electronics Co., Ltd. Encoding and decoding method for enhancing depth resolution of an image, and print system using the same
JP5504928B2 (ja) * 2010-01-29 2014-05-28 ソニー株式会社 Biometric authentication device, biometric authentication method, and program
CN101901336B (zh) * 2010-06-11 2012-03-14 哈尔滨工程大学 Decision-level fusion method for bimodal fingerprint and finger vein recognition
CN102222229B (zh) * 2011-07-28 2015-12-02 陈庆武 Finger vein image preprocessing method
CN104102913B (zh) * 2014-07-15 2018-10-16 无锡优辰电子信息科技有限公司 Wrist vein authentication system
CN104864829B (zh) * 2015-06-14 2017-06-27 吉林大学 Fast measurement method for blade curved surfaces

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101025666B1 (ko) * 2009-08-25 2011-03-30 서울대학교산학협력단 Finger vein feature point extraction method and apparatus
US20110304720A1 (en) * 2010-06-10 2011-12-15 The Hong Kong Polytechnic University Method and apparatus for personal identification using finger imaging
CN103310196A (zh) * 2013-06-13 2013-09-18 黑龙江大学 Finger vein recognition method using region of interest and direction elements
CN103996038A (zh) * 2014-05-01 2014-08-20 朱毅 Method for adjusting the tilt angle of a finger vein recognition image
CN105518716A (zh) * 2015-10-10 2016-04-20 厦门中控生物识别信息技术有限公司 Finger vein identification method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUANG, BEINING ET AL.: "Finger-vein Authentication Based on Wide Line Detector and Pattern Normalization", 2010 INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, 26 August 2010 (2010-08-26), pages 1269 - 1272, XP031772282, ISSN: 1051-4651 *
PENG, JIALIANG: "Research on key issues of multi-modal biometric verification based on finger", ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, CHINA DOCTORAL DISSERTATIONS FULL-TEXT, 15 January 2015 (2015-01-15), pages 21 - 43, ISSN: 1674-022X *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753575A (zh) * 2018-08-27 2019-05-14 广州麦仑信息科技有限公司 Fast palm vein image retrieval method based on statistical coding
CN109753575B (zh) * 2018-08-27 2023-04-18 广州麦仑信息科技有限公司 Fast palm vein image retrieval method based on statistical coding
CN110310239A (zh) * 2019-06-20 2019-10-08 四川阿泰因机器人智能装备有限公司 Image processing method for eliminating illumination effects based on characteristic-value fitting
CN110310239B (zh) * 2019-06-20 2023-05-05 四川阿泰因机器人智能装备有限公司 Image processing method for eliminating illumination effects based on characteristic-value fitting
CN110570362A (zh) * 2019-08-02 2019-12-13 佛山科学技术学院 Iris blood vessel imaging method and device
CN111046870A (zh) * 2019-12-10 2020-04-21 珠海格力电器股份有限公司 Knuckle positioning method, apparatus, medium and device
CN111046870B (zh) * 2019-12-10 2023-06-02 珠海格力电器股份有限公司 Knuckle positioning method, apparatus, medium and device
CN112052842A (zh) * 2020-10-14 2020-12-08 福建省海峡智汇科技有限公司 Person identification method and device based on palm veins
CN112052842B (zh) * 2020-10-14 2023-12-19 福建省海峡智汇科技有限公司 Person identification method and device based on palm veins
CN112560710A (zh) * 2020-12-18 2021-03-26 北京曙光易通技术有限公司 Method for constructing a finger vein recognition system, and finger vein recognition system
CN112560710B (zh) * 2020-12-18 2024-03-01 北京曙光易通技术有限公司 Method for constructing a finger vein recognition system, and finger vein recognition system

Also Published As

Publication number Publication date
CN105518716A (zh) 2016-04-20
US20180300571A1 (en) 2018-10-18
US10762366B2 (en) 2020-09-01

Similar Documents

Publication Publication Date Title
WO2017059591A1 (zh) Finger vein identification method and device
Kang et al. Contactless palm vein recognition using a mutual foreground-based local binary pattern
WO2018196371A1 (zh) Three-dimensional finger vein recognition method and system
Matsuda et al. Finger-vein authentication based on deformation-tolerant feature-point matching
Woodard et al. Finger surface as a biometric identifier
WO2021027364A1 (zh) Identity verification method and device based on finger vein recognition
CN104239769B (zh) Identity recognition method and system based on finger vein features
US20060008124A1 (en) Iris image-based recognition system
Frucci et al. WIRE: Watershed based iris recognition
CN105760841B (zh) Identity recognition method and system
Ambeth Kumar et al. Exploration of an innovative geometric parameter based on performance enhancement for foot print recognition
WO2021243926A1 (zh) Integrated finger vein recognition and anti-counterfeiting method, apparatus, storage medium and device
WO2013087026A1 (zh) Iris localization method and localization device
Jaswal et al. Multiple feature fusion for unconstrained palm print authentication
CN112597812A (zh) Finger vein recognition method and system based on a convolutional neural network and the SIFT algorithm
Aleem et al. Fast and accurate retinal identification system: Using retinal blood vasculature landmarks
Trabelsi et al. A new multimodal biometric system based on finger vein and hand vein recognition
CN113936303B (zh) Method for determining the maximum inscribed rectangle of a hand image, and image recognition method
Qin et al. Finger-vein image quality evaluation based on the representation of grayscale and binary image
Doroz et al. An accurate fingerprint reference point determination method based on curvature estimation of separated ridges
Jahanbin et al. Passive three dimensional face recognition using iso-geodesic contours and procrustes analysis
Zhang et al. Sweat gland extraction from optical coherence tomography using convolutional neural network
Benziane et al. Dorsal hand vein identification based on binary particle swarm optimization
WO2016004706A1 (zh) Method for improving iris recognition performance in non-ideal environments
Mali et al. Fingerprint recognition using global and local structures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15905689

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15767176

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15905689

Country of ref document: EP

Kind code of ref document: A1
