AU2019368520B2 - Three-dimensional finger vein feature extraction method and matching method therefor - Google Patents

Three-dimensional finger vein feature extraction method and matching method therefor

Info

Publication number
AU2019368520B2
Authority
AU
Australia
Prior art keywords
finger
matching
vein
features
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
AU2019368520A
Other versions
AU2019368520A1 (en)
Inventor
Qichen GONG
Wenxiong KANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Publication of AU2019368520A1
Application granted
Publication of AU2019368520B2
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/1347: Preprocessing; Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/1365: Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

A three-dimensional finger vein feature extraction method, comprising: step one: a two-dimensional finger vein image is acquired; step two: the two-dimensional finger vein image is mapped onto a three-dimensional model to construct a three-dimensional finger model; step three: the three-dimensional finger model is normalized; and step four: feature extraction is performed on the normalized three-dimensional finger model. A three-dimensional finger vein feature matching method. In said matching method, vein pattern feature and central axis geometric distance feature scores are calculated for a template sample and a sample to be matched, weighted fusion is performed, and determination is performed on fused matching scores by means of a threshold, thereby completing three-dimensional finger vein matching and identification. By means of the present three-dimensional finger vein feature extraction method and the matching method therefor, more vein pattern features can be extracted, resulting in better matching and identification results, and the problem of poor matching and identification performance caused by changes in finger position can be effectively solved, thereby improving the accuracy and effectiveness of vein matching and recognition.

Description

A METHOD FOR EXTRACTING AND MATCHING THREE-DIMENSIONAL FINGER VEIN FEATURES
FIELD OF THE INVENTION

The present invention relates to the technical field of vein recognition, and in particular to a method for extracting and matching three-dimensional (3D) finger vein features.
BACKGROUND OF THE INVENTION

Biometric recognition is a technology that uses one or more human physiological characteristics (such as fingerprint, face, iris and vein) or behavioral characteristics (such as gait and signature) for identity authentication. Among biometric recognition technologies, finger vein recognition, which verifies individual identity using the texture of the blood vessels under the finger epidermis, has begun to occupy an important position in the field of identity authentication. It offers unique application advantages and prospects because of its high security and stability.

At present, finger vein recognition systems perform recognition on two-dimensional (2D) vein images. The recognition performance of these systems is greatly reduced when the fingers are not placed properly, especially when the fingers are rotated axially. However, there are relatively few studies on finger vein recognition with the fingers in different postures. The few existing studies include: using an ellipse model to expand the acquired 2D finger vein images so as to standardize them, and then cropping the effective region for matching; using a circle model to expand the 2D finger vein images; or using a 3D model (which still relies on an ellipse model) to standardize the finger vein images captured in six different finger postures, and then matching the standardized images. Whichever physical model is used, these methods improve to a certain extent the situation that vein images of the same finger taken in different postures differ greatly. However, the following problems remain: on the one hand, the corresponding texture regions become smaller, which is not conducive to matching; on the other hand, the quality of the vein image in an edge region is generally poor due to imaging factors, which also degrades the recognition results. Another approach is 3D imaging based on multi-view geometry. However, it is difficult or even impossible to find matching feature points for 3D reconstruction with this method, so the depth information of all vein textures is hard to compute. In addition, the vein texture acquired by this method covers only one side of the finger, so the feature information is still limited.

Reference to cited material or information contained in the text should not be understood as a concession that the material or information was part of the common general knowledge or was known in Australia or any other country. Each document, reference, patent application or patent cited in this text is expressly incorporated herein in its entirety by reference, which means that it should be read and considered by the reader as part of this text. That the document, reference, patent application or patent cited in this text is not repeated in this text is merely for reasons of conciseness.

Reference numbers and letters appearing between parentheses in the claims, identifying features described in the embodiment(s) and/or example(s) and/or illustrated in the accompanying drawings, are provided as an aid to the reader as an exemplification of the matter claimed. The inclusion of such reference numbers and letters is not to be interpreted as placing any limitations on the scope of the claims.
Throughout the specification and claims, unless the context requires otherwise, the word "comprise" or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated integer or group of integers but not the exclusion of any other integer or group of integers.
SUMMARY OF THE INVENTION

An aim of the present invention is to provide a method for extracting and matching 3D finger vein features, so as to overcome the shortcomings and deficiencies in the prior art. This method can obtain more vein texture features, achieve a better matching and recognition effect, and effectively solve the problem of poor matching and recognition performance caused by finger posture changes, thereby improving the accuracy and effectiveness of vein matching and recognition.

The present invention adopts the following technical solution. A method for extracting 3D finger vein features is provided, comprising the following steps:

step 1: using three cameras, arranged at equal angles, to capture finger veins from three views to obtain 2D finger vein images;

step 2: mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model;

step 3: normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger; and

step 4: performing feature extraction on the normalized 3D finger model: (1) processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map; and (2) using a convolutional neural network to extract features from the 3D texture expansion map and the geometric distance feature map, respectively, to obtain the vein texture features and the central axis geometric distance features, while training the neural network.

In a preferred embodiment, the "processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map" in step 4 is specifically as follows:

firstly, a sector cylinder region is defined as SC-Block(i), where the subscript i ranges from 1 to N; the 3D cylinder is cut rotationally along its axis to obtain 360 sector cylinder regions; the central angle of the bottom surface of the i-th sector cylinder region spans ((i-1)·Δα, i·Δα]; the height z of the cylinder ranges over [z_min, z_max], where z_min and z_max represent the minimum and maximum heights, respectively; and N = 360/Δα, where N represents the width of the feature map and Δα represents the angle sampling interval;

then, the 3D point set of each sector cylinder region is mapped to the 3D texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM through the following functions:

    I_F3DTM.col(i) = F_t(SC-Block(i))    (1)
    I_F3DGM.col(i) = F_g(SC-Block(i))    (2)

where I_F3DTM and I_F3DGM respectively represent the 3D texture expansion map and the geometric distance feature map, .col(i) represents the i-th column of the feature map, and the functions F_t and F_g each divide the sector cylinder region SC-Block(i) along the z axis at fixed intervals into M blocks; each pixel of I_F3DTM is obtained by calculating the average pixel value in the corresponding block, while each pixel of I_F3DGM is obtained by calculating the average straight-line distance from the point set in the corresponding block to the central axis.
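For illustration only, the mapping in equations (1) and (2) could be sketched in NumPy as follows; the array layout (one row per 3D point, plus a per-point vein intensity), the helper name build_feature_maps, and the binning details are assumptions rather than part of the patent:

```python
import numpy as np

def build_feature_maps(points, intensities, delta_alpha=1.0, M=360,
                       z_min=None, z_max=None):
    """Sketch of equations (1)-(2): map a normalized 3D finger point set
    (x, y, z) with per-point vein intensities to the 3D texture expansion
    map I_F3DTM and the geometric distance feature map I_F3DGM."""
    N = int(360 / delta_alpha)                      # feature-map width
    z_min = points[:, 2].min() if z_min is None else z_min
    z_max = points[:, 2].max() if z_max is None else z_max

    # Angular bin of each point: which sector cylinder SC-Block(i) it falls in.
    angles = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    col = np.minimum((angles / delta_alpha).astype(int), N - 1)

    # Height bin: each sector is divided into M blocks along the z axis.
    row = ((points[:, 2] - z_min) / (z_max - z_min + 1e-12) * M).astype(int)
    row = np.clip(row, 0, M - 1)

    # Distance from each point to the central axis (the z axis after normalization).
    radii = np.hypot(points[:, 0], points[:, 1])

    tex = np.zeros((M, N)); geo = np.zeros((M, N)); cnt = np.zeros((M, N))
    np.add.at(cnt, (row, col), 1.0)
    np.add.at(tex, (row, col), intensities)         # F_t: mean pixel value
    np.add.at(geo, (row, col), radii)               # F_g: mean axis distance
    cnt[cnt == 0] = 1.0
    return tex / cnt, geo / cnt                     # I_F3DTM, I_F3DGM
```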
The "mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model" in step 2 is specifically as follows: with the finger profile approximately regarded as an ellipse, the 3D finger is divided equidistantly into a series of cross sections, the contour of each section is calculated, and the finger is approximately modeled by multiple ellipses with different radii and positions; all the contours are then connected in series along the central axis of the finger to obtain an approximate 3D finger model.

The "normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger" in step 3 is specifically as follows: the center of each approximately elliptical cross section obtained in the 3D reconstruction is regressed to a central axis by the least squares method, and the coordinates are then normalized according to the following equation (1):

    (x - x_m)/S = (y - y_m)/W = (z - z_m)/G    (1)

where (x_m, y_m, z_m) represents the midpoint of the ellipse and (S, W, G) represents the direction of the central axis. This normalization makes the axis of the finger coincide with the central axis of the 3D model and the center point of the 3D model coincide with the origin, thereby eliminating the offset caused by horizontal and vertical movement.
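A minimal NumPy sketch of this normalization, assuming the reconstructed model is a point array and the fitted ellipse centers are stacked row-wise (function and variable names are illustrative):

```python
import numpy as np

def normalize_model(points, ellipse_centers):
    """Sketch of step 3: regress the ellipse centers to a central axis
    (least squares via SVD), then translate/rotate the model so the axis
    coincides with the z axis and the midpoint with the origin."""
    mid = ellipse_centers.mean(axis=0)              # (x_m, y_m, z_m)
    # Principal direction of the centers = least-squares line direction (S, W, G).
    _, _, vt = np.linalg.svd(ellipse_centers - mid)
    axis = vt[0] / np.linalg.norm(vt[0])

    # Rotation taking `axis` onto the z axis (Rodrigues-style construction).
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(axis, z); c = float(axis @ z); s = np.linalg.norm(v)
    if s < 1e-12:
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]]) / s
        R = np.eye(3) + s * K + (1 - c) * (K @ K)
    return (points - mid) @ R.T
```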
The "processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map" in step 4 is specifically as follows:

firstly, a sector cylinder region is defined as SC-Block(i), where the subscript i ranges from 1 to N; the 3D cylinder is cut rotationally along its axis to obtain 360 sector cylinder regions; the central angle of the bottom surface of the i-th sector cylinder region spans ((i-1)·Δα, i·Δα]; the height z of the cylinder ranges over [z_min, z_max], where z_min and z_max represent the minimum and maximum heights, respectively; and N = 360/Δα, where N represents the width of the feature map and Δα represents the angle sampling interval;

then, the 3D point set of each sector cylinder region is mapped to the 3D texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM through the following functions:

    I_F3DTM.col(i) = F_t(SC-Block(i))    (1)
    I_F3DGM.col(i) = F_g(SC-Block(i))    (2)

where I_F3DTM and I_F3DGM respectively represent the 3D texture expansion map and the geometric distance feature map, .col(i) represents the i-th column of the feature map, and the functions F_t and F_g each divide the sector cylinder region SC-Block(i) along the z axis at fixed intervals into M blocks; each pixel of I_F3DTM is obtained by calculating the average pixel value in the corresponding block, while each pixel of I_F3DGM is obtained by calculating the average straight-line distance from the point set in the corresponding block to the central axis.

The "using the convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features; and training the neural network at the same time" in step 4 is specifically as follows: the neural network is composed of four convolutional blocks of continuously stacked 3 x 3 and 1 x 1 convolutional layers; this design effectively reduces the number of parameters while preserving recognition performance. The 3D texture expansion map and the geometric distance feature map are each passed through the network and a 256-dimensional fully connected output layer to obtain the 256-dimensional vein texture features and the 256-dimensional central axis geometric distance features; finally, the loss is calculated through the SoftMax layer and the network is trained.
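As a hedged illustration, one plausible PyTorch instantiation of the described network is sketched below; the channel widths, pooling, and input size are assumptions, since the patent only fixes the block structure (stacked 3 x 3 and 1 x 1 convolutions), the 256-dimensional fully connected output, and the SoftMax training loss:

```python
import torch
import torch.nn as nn

class VeinFeatureNet(nn.Module):
    """Sketch of the described extractor: four convolutional blocks of
    stacked 3x3 and 1x1 convolutions, a 256-d fully connected output,
    and a classification head used only for the SoftMax training loss.
    Channel widths and pooling are illustrative assumptions."""
    def __init__(self, num_classes, channels=(32, 64, 128, 256)):
        super().__init__()
        blocks, in_ch = [], 1                       # 1-channel feature map input
        for out_ch in channels:
            blocks += [
                nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_ch, 256)             # 256-d feature vector
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):
        f = self.fc(self.pool(self.features(x)).flatten(1))
        return f, self.classifier(f)                # features, logits for SoftMax loss

# Training uses cross-entropy (softmax) on the logits; at matching time only
# the 256-d feature vector is kept. One such network per feature map.
model = VeinFeatureNet(num_classes=100)
feat, logits = model(torch.randn(4, 1, 360, 360))   # e.g. M = 360, N = 360 maps
```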
The method for matching 3D finger vein features is as follows: the scores of the vein texture features and the central axis geometric distance features of a template sample and a to-be-matched sample are calculated and subjected to weighted fusion, and the fused matching score is then judged against a threshold to complete the matching and recognition of the 3D finger veins. Specifically:

firstly, in the feature matching stage, steps 1 to 4 are performed on the finger veins of the template sample and of the to-be-matched sample, respectively, to obtain the vein texture features and the 3D finger shape features (central axis geometric distance features) of both samples; the cosine distance D1 between the vein texture features of the two samples and the cosine distance D2 between their 3D finger shape features are then calculated as follows:

    D1 = (F_v1 · F_v2) / (|F_v1| |F_v2|),    D2 = (F_d1 · F_d2) / (|F_d1| |F_d2|)

where F_v1 and F_v2 are the finger vein features of the template sample and the to-be-matched sample, respectively, and F_d1 and F_d2 are the finger shape features of the template sample and the to-be-matched sample, respectively.

Then, the cosine distances (matching scores) of the vein texture feature and the finger shape feature are subjected to score-level weighted fusion to obtain the total cosine distance D: with 10% of the data randomly selected as the verification set, candidate fusion weight values are traversed on the verification set, and the weight value giving the lowest equal error rate after fusion of the matching scores is taken as the optimal weight; the optimal weight is then used to fuse the matching scores into the final matching score:

    S = w·S_t + (1 - w)·S_g

where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and w is the fusion weight.

Finally, a threshold is determined through experiments; when the total cosine distance D is less than the threshold, the samples are judged as matching, otherwise as mismatching.

Compared with the prior art, the present invention has the following advantages and beneficial effects: the method for extracting and matching 3D finger vein features of the present invention can obtain more vein texture features and achieve better matching and recognition effects; in addition, it effectively solves the problem of poor matching and recognition performance caused by finger posture changes, thereby improving the accuracy and effectiveness of vein matching and recognition.
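A small sketch of the matching stage under these definitions (the dictionary keys and the example weight and threshold are placeholders; the scores follow the printed formula, and the final comparison follows the decision rule stated above):

```python
import numpy as np

def cosine_score(f1, f2):
    """Cosine matching score between two feature vectors (D1 or D2)."""
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

def match(template, probe, w=0.7, threshold=0.5):
    """Sketch of the matching stage. `template`/`probe` hold the 256-d
    texture and shape features; w and threshold are placeholders to be
    fixed on a verification set as described above."""
    s_t = cosine_score(template["texture"], probe["texture"])   # D1
    s_g = cosine_score(template["shape"], probe["shape"])       # D2
    s = w * s_t + (1 - w) * s_g                                 # fused score S
    return s, s < threshold   # judged as matching per the rule quoted above
```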
BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a schematic flowchart of the method for extracting and matching 3D finger vein features of the present invention;

Fig. 2 is a schematic diagram of the construction of a 3D finger model with the ellipse model of the present invention;

Fig. 3 is a schematic diagram of the 360 sector cylinder regions obtained by cutting rotationally along the axis of the 3D cylinder according to the present invention;

Fig. 4 is a 3D texture expansion map of the present invention;

Fig. 5 is a geometric distance feature map of the present invention; and

Fig. 6 is a schematic diagram of the convolutional neural network structure, with 256-dimensional fully connected output layers, applied to the 3D texture expansion map and the geometric distance feature map of the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS

The present invention will be further described below in detail with reference to the drawings and specific embodiments.

Example

As shown in Figs. 1-6, a method for extracting 3D finger vein features of the present invention comprises the following steps:

step 1: using three cameras, arranged at equal angles, to capture finger veins from three views to obtain 2D finger vein images;

step 2: mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model;

step 3: normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger; and

step 4: performing feature extraction on the normalized 3D finger model: (1) processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map; and (2) using a convolutional neural network to extract features from the 3D texture expansion map and the geometric distance feature map, respectively, to obtain the vein texture features and the central axis geometric distance features, while training the neural network.

The "mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model" in step 2 is specifically as follows: with the finger profile approximately regarded as an ellipse, the 3D finger is divided equidistantly into several sections, the contour of each section is calculated, and the finger is approximately modeled by multiple ellipses with different radii and positions; all the contours are then connected in series along the central axis of the finger to obtain an approximate 3D finger model. The contour of each section is calculated as follows:
1) The XOY coordinate system (2D-CS) is established based on the projection centers C1, C2, C3 of the three cameras, as shown in Fig. 2.

2) Determining the equations of the ellipse and the straight lines

The equation of the ellipse is assumed to be:

    [x y 1] · [[a, c, d], [c, b, e], [d, e, f]] · [x, y, 1]^T = 0

The projection center of camera i is denoted as C_i(x_ci, y_ci), from which the equations of the straight lines C_iU_i (denoted L_ui) and C_iB_i (denoted L_bi) can be obtained; here, only the case where the straight line has a slope is discussed:

    L_ui: y = k_ui·x + b_ui
    L_bi: y = k_bi·x + b_bi,    i = 1, 2, 3

where k_ui and b_ui represent the slope and intercept of the straight line L_ui, respectively, and k_bi and b_bi represent the slope and intercept of the straight line L_bi, respectively.

3) Determining constraints

Parallel lines of these constraint straight lines are drawn and made tangent to the ellipse, as shown in Fig. 2, with the following assumed equations:

    L'_ui: y = k_ui·x + b_ui + g_ui
    L'_bi: y = k_bi·x + b_bi + g_bi

From the condition that L'_ui and L'_bi are tangent to the ellipse, the following constraint equations are obtained:

    B_ui^2 - 4·A_ui·C_ui = 0,    B_bi^2 - 4·A_bi·C_bi = 0,    i = 1, 2, 3

where:

    A_ui = a + b·k_ui^2 + 2c·k_ui
    A_bi = a + b·k_bi^2 + 2c·k_bi
    B_ui = (2b·k_ui + 2c)(b_ui + g_ui) + e·k_ui + d
    B_bi = (2b·k_bi + 2c)(b_bi + g_bi) + e·k_bi + d
    C_ui = b·(b_ui + g_ui)^2 + e·(b_ui + g_ui) + f
    C_bi = b·(b_bi + g_bi)^2 + e·(b_bi + g_bi) + f

4) Objective optimization function

As mentioned earlier, the ellipse must be very close to the constraint lines, so the goal is to minimize the sum of squared distances between the ellipse and all the straight lines:

    min J = Σ_{i=1}^{3} ( g_ui^2 / (1 + k_ui^2) + g_bi^2 / (1 + k_bi^2) )
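For illustration, the objective J can be evaluated for a candidate ellipse by solving the tangency condition B^2 - 4AC = 0 for each offset g, as sketched below; the conic convention a·x^2 + 2c·xy + b·y^2 + 2d·x + 2e·y + f = 0 (the full expansion of the matrix form above) is an assumption, and the printed coefficients in the patent may differ by constant factors:

```python
import numpy as np

def tangent_offset(conic, k, b0):
    """Smallest-|g| offset such that y = k*x + (b0 + g) is tangent to the
    conic a*x^2 + 2c*x*y + b*y^2 + 2d*x + 2e*y + f = 0.
    Tangency <=> the substituted quadratic in x has zero discriminant."""
    a, b, c, d, e, f = conic
    # Substituting y = k*x + m gives A*x^2 + B*x + C = 0 with
    #   A = a + 2c*k + b*k^2,  B = 2*((c + b*k)*m + d + e*k),
    #   C = b*m^2 + 2e*m + f.
    # B^2 - 4*A*C = 0 is a quadratic in m; solve it, then g = m - b0.
    A = a + 2 * c * k + b * k * k
    p2 = 4 * (c + b * k) ** 2 - 4 * A * b              # coeff of m^2
    p1 = 8 * (c + b * k) * (d + e * k) - 8 * A * e     # coeff of m^1
    p0 = 4 * (d + e * k) ** 2 - 4 * A * f              # coeff of m^0
    roots = np.roots([p2, p1, p0])
    g = np.real(roots[np.isreal(roots)]) - b0
    return g[np.argmin(np.abs(g))] if g.size else np.inf

def objective_J(conic, lines):
    """J = sum of squared line-to-ellipse distances over the six edge lines;
    `lines` is a list of (k, b0) pairs for L_u1..L_u3 and L_b1..L_b3."""
    return sum(tangent_offset(conic, k, b0) ** 2 / (1 + k * k)
               for k, b0 in lines)
```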
5) Solving algorithm

① Solving a single sectional ellipse

The coordinates of the edge points on the image are converted to 2D-CS according to the following relationship:

    [x, y]^T = [[sin θ_i, -cos θ_i], [cos θ_i, sin θ_i]] · [x_0, y_0 - y_mi]^T

where θ_i (i = 1, 2, 3) represents the angle between the optical axis of camera i and the positive direction of the x axis, and y_mi represents the y value of the optical center of camera i, which is related to the internal parameters of the camera.

For each ellipse, the gradient descent method is used to solve the objective optimization function under the constraints shown in 3). The main problem is how to set the initial iteration point, because a proper initial iteration point plays an extremely important role both in accelerating the optimization and in finding the global optimum. Through extensive experiments, the method of setting the initial iteration point is determined as follows.

An ellipse has five independent variables: the horizontal and vertical coordinates of its center, the length of its semi-major axis, its eccentricity and its rotation angle. The problem can be transformed into calculating the approximate inscribed ellipse of a hexagon. According to Brianchon's theorem, a hexagon ABCDEF has an inscribed ellipse if and only if its main diagonals AD, BE and CF intersect at the same point:

    |(A × D), (B × E), (C × F)| = 0

In the calculation process of this model, these diagonals generally intersect each other pairwise rather than at a single point. The initial center point C_0 of the ellipse is therefore set as the center of gravity of the triangle formed by these pairwise intersections, and the length of the initial semi-major axis R_0 as the minimum distance from C_0 to the six vertices of the hexagon:

    C_0 = (1/3)·((A × D) × (B × E) + (A × D) × (C × F) + (B × E) × (C × F))
    R_0 = min{C_0A, C_0B, C_0C, C_0D, C_0E, C_0F}
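A NumPy sketch of this initialization (the function name is illustrative; the pairwise intersections are dehomogenized before averaging, which realizes the stated center of gravity and assumes the compact patent formula implies suitable normalization):

```python
import numpy as np

def initial_ellipse_guess(hexagon):
    """Sketch of the initialization above: given the six hexagon vertices
    A..F (array of shape (6, 2)), estimate the initial ellipse center C_0
    and semi-major axis length R_0 using homogeneous line/point algebra."""
    h = np.hstack([hexagon, np.ones((6, 1))])       # homogeneous vertices
    A, B, C, D, E, F = h
    lines = [np.cross(A, D), np.cross(B, E), np.cross(C, F)]  # main diagonals

    # Pairwise intersections of the three diagonals (homogeneous cross products).
    pts = [np.cross(lines[0], lines[1]),
           np.cross(lines[0], lines[2]),
           np.cross(lines[1], lines[2])]
    pts = np.array([p[:2] / p[2] for p in pts])     # dehomogenize

    c0 = pts.mean(axis=0)                           # center of gravity of triangle
    r0 = np.linalg.norm(hexagon - c0, axis=1).min() # min distance to vertices
    return c0, r0
```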
In addition, the initial eccentricity and rotation angle are set as fixed values, with the eccentricity set to e_0 and the rotation angle set to α_0; these two constants are determined through experiments.

② Reconstruction of the entire 3D finger model

The more sections are calculated, the more accurate the obtained 3D finger model. However, using the ellipse approximation method to calculate more ellipses consumes more time, and in practical applications it is not desirable to spend too much time on 3D finger model reconstruction. Observation shows that the finger surface changes relatively gently in the axial direction, so a sparse set of edge points can first be selected in the axial direction to reconstruct part of the ellipses, after which interpolation is used to expand the number of approximate ellipses. This, however, raises another problem: if, within the selected set of edge points, the detected edge position has a relatively large error due to poor image quality or strong noise at some edges, the reconstructed finger model will have relatively large defects in places. To reduce this impact and to balance reconstruction accuracy against time cost, a more robust algorithm is proposed to construct the 3D finger model: in each corrected image, the region with the abscissa in the range 91-490 is set as the effective region and divided into N sub-regions along the horizontal axis; a group of edge points in each sub-region is selected to reconstruct an ellipse; after the ellipses of all sub-regions are obtained, the interpolation algorithm is used to expand the ellipse data (a sketch of this expansion is given below); finally, each ellipse in the 2D plane is transformed into 3D space:

    [x y 1] · [[a, c, d], [c, b, e], [d, e, f]] · [x, y, 1]^T = 0

The Z coordinate value of each ellipse is set to the same value, computed from the internal parameters K(2,2) and K(2,3) of the corrected camera, without changing the equation of the 2D ellipse; K is the internal parameter matrix of the corrected camera.
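The interpolation-based expansion of the sparsely fitted ellipses could look like the following sketch; the parameter layout and the sub-region count are assumptions:

```python
import numpy as np

def expand_ellipses(z_sparse, params_sparse, z_dense):
    """Sketch of the interpolation step above: given ellipse parameters
    fitted at sparse axial positions z_sparse (params_sparse has shape
    (n, 5): center x, center y, semi-major axis, eccentricity, rotation),
    linearly interpolate each parameter over the dense axial grid z_dense."""
    params_sparse = np.asarray(params_sparse, dtype=float)
    return np.column_stack([
        np.interp(z_dense, z_sparse, params_sparse[:, j])
        for j in range(params_sparse.shape[1])
    ])

# Example: expand ellipses fitted in 8 sub-regions to one per image column.
z_sparse = np.linspace(91, 490, 8)                  # sub-region centers (abscissa)
params = np.random.rand(8, 5)                       # placeholder fitted parameters
dense = expand_ellipses(z_sparse, params, np.arange(91, 491))
```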
The "normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger" in step 3 is specifically as follows: the center of each approximately elliptical cross section obtained in the 3D reconstruction is regressed to a central axis by the least squares method, and the coordinates are then normalized according to the following equation (1):

    (x - x_m)/S = (y - y_m)/W = (z - z_m)/G    (1)

where (x_m, y_m, z_m) represents the midpoint of the ellipse and (S, W, G) represents the direction of the central axis. This normalization makes the axis of the finger coincide with the central axis of the 3D model and the center point of the 3D model coincide with the origin, thereby eliminating the offset caused by horizontal and vertical movement.
The "processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map" in step 4 is specifically as follows:

firstly, a sector cylinder region is defined as SC-Block(i), where the subscript i ranges from 1 to N; the 3D cylinder is cut rotationally along its axis to obtain 360 sector cylinder regions, as shown in Fig. 3; the central angle of the bottom surface of the i-th sector cylinder region spans ((i-1)·Δα, i·Δα]; the height z of the cylinder ranges over [z_min, z_max], where z_min and z_max represent the minimum and maximum heights, respectively; and N = 360/Δα, where N represents the width of the feature map and Δα represents the angle sampling interval;

then, the 3D point set of each sector cylinder region is mapped to the 3D texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM through the following functions:

    I_F3DTM.col(i) = F_t(SC-Block(i))    (1)
    I_F3DGM.col(i) = F_g(SC-Block(i))    (2)

where I_F3DTM and I_F3DGM respectively represent the 3D texture expansion map and the geometric distance feature map, .col(i) represents the i-th column of the feature map, and the functions F_t and F_g each divide the sector cylinder region SC-Block(i) along the z axis at fixed intervals into M blocks; each pixel of I_F3DTM is obtained by calculating the average pixel value in the corresponding block, while each pixel of I_F3DGM is obtained by calculating the average straight-line distance from the point set in the corresponding block to the central axis. This example sets Δα = 1 and M = 360; Figs. 4 and 5 show examples of the calculated feature maps.

The "using the convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features; and training the neural network at the same time" in step 4 is specifically as follows: as shown in Fig. 6, the neural network is composed of four convolutional blocks of continuously stacked 3 x 3 and 1 x 1 convolutional layers; this design effectively reduces the number of parameters while preserving recognition performance. The 3D texture expansion map and the geometric distance feature map are each passed through the network and a 256-dimensional fully connected output layer to obtain the vein texture features and the central axis geometric distance features; finally, the loss is calculated through the SoftMax layer and the network is trained.

The method for matching 3D finger vein features according to the present invention is as follows: the scores of the vein texture features and the central axis geometric distance features of the template sample and the to-be-matched sample are calculated and subjected to weighted fusion, and the fused matching score is then judged against a threshold to complete the matching and recognition of the 3D finger veins.

Specifically, in the feature matching stage, steps 1 to 4 are first performed on the finger veins of the template sample and of the to-be-matched sample, respectively, to obtain the vein texture features and the 3D finger shape features of both samples; the cosine distance D1 between the vein texture features of the two samples and the cosine distance D2 between their 3D finger shape features are then calculated as follows:
    D1 = (F_v1 · F_v2) / (|F_v1| |F_v2|),    D2 = (F_d1 · F_d2) / (|F_d1| |F_d2|)

where F_v1 and F_v2 are the finger vein features of the template sample and the to-be-matched sample, respectively, and F_d1 and F_d2 are the finger shape features of the template sample and the to-be-matched sample, respectively.
Then, the cosine distances (matching scores) of the vein texture feature and the finger shape feature are subjected to score-level weighted fusion to obtain the total cosine distance D: with 10% of the data randomly selected as the verification set, candidate fusion weight values are traversed on the verification set, and the weight value giving the lowest equal error rate after fusion of the matching scores is taken as the optimal weight (a sketch of this weight search is given below); the optimal weight is then used to fuse the matching scores into the final matching score:

    S = w·S_t + (1 - w)·S_g

where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and w is the fusion weight.

Finally, a threshold is determined through experiments; when the total cosine distance D is less than the threshold, the samples are judged as matching, otherwise as mismatching.

The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other alterations, modifications, replacements, combinations and simplifications shall be deemed equivalent substitutions and are included in the scope of protection of the present invention.
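A sketch of the verification-set weight search described above (assuming higher scores mean more similar; array names and the grid resolution are illustrative):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER by sweeping thresholds over fused scores;
    genuine/impostor are 1-D arrays of matching scores."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])  # false accepts
    frr = np.array([np.mean(genuine < t) for t in thresholds])    # false rejects
    i = int(np.argmin(np.abs(far - frr)))           # threshold where FAR ~= FRR
    return (far[i] + frr[i]) / 2.0

def best_fusion_weight(gen_t, gen_g, imp_t, imp_g, steps=101):
    """Traverse w in [0, 1] and keep the weight with the lowest EER after
    fusing texture (S_t) and shape (S_g) scores: S = w*S_t + (1-w)*S_g."""
    weights = np.linspace(0.0, 1.0, steps)
    eers = [equal_error_rate(w * gen_t + (1 - w) * gen_g,
                             w * imp_t + (1 - w) * imp_g) for w in weights]
    return weights[int(np.argmin(eers))]
```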

Claims (7)

1. A method for extracting three-dimensional (3D) finger vein features, characterized in that the method comprises the following steps: step 1: using three cameras, arranged at equal angles, to capture finger veins from three views to obtain 2D finger vein images; step 2: mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model; step 3: normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger; and step 4: performing feature extraction on the normalized 3D finger model: (1) processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map; and (2) using a convolutional neural network to extract features from the 3D texture expansion map and the geometric distance feature map, respectively, to obtain the vein texture features and the central axis geometric distance features, while training the neural network; wherein the "processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map" in step 4 is specifically as follows:
firstly, a sector cylinder region is defined as SC-Block(i), where the subscript i ranges from 1 to N; the 3D cylinder is cut rotationally along its axis to obtain 360 sector cylinder regions; the central angle of the bottom surface of the i-th sector cylinder region spans ((i-1)·Δα, i·Δα]; the height z of the cylinder ranges over [z_min, z_max], where z_min and z_max represent the minimum and maximum heights, respectively; and N = 360/Δα, where N represents the width of the feature map and Δα represents the angle sampling interval;

and then, the 3D point set of each sector cylinder region is mapped to the 3D texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM through the following functions:

    I_F3DTM.col(i) = F_t(SC-Block(i))    (1)
    I_F3DGM.col(i) = F_g(SC-Block(i))    (2)

where I_F3DTM and I_F3DGM respectively represent the 3D texture expansion map and the geometric distance feature map, .col(i) represents the i-th column of the feature map, and the functions F_t and F_g each divide the sector cylinder region SC-Block(i) along the z axis at fixed intervals into M blocks; wherein each pixel of I_F3DTM is obtained by calculating the average pixel value in the corresponding block, while each pixel of I_F3DGM is obtained by calculating the average straight-line distance from the point set in the corresponding block to the central axis.
2. The method for extracting 3D finger vein features according to claim 1, characterized in that: the "mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model" in step 2 is specifically as follows: with the finger profile approximately regarded as an ellipse, the 3D finger is divided equidistantly into several sections, the contour of each section is calculated, and the finger is approximately modeled by multiple ellipses with different radii and positions; all the contours are then connected in series along the central axis of the finger to obtain an approximate 3D finger model.
3. The method for extracting 3D finger vein features according to claim 1, characterized in that: the "normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger" in step 3 is specifically as follows: the center of each approximately elliptical cross section obtained in the 3D reconstruction is regressed to a central axis by the least squares method, and the coordinates are then normalized according to the following equation (1):

    (x - x_m)/S = (y - y_m)/W = (z - z_m)/G    (1)

where (x_m, y_m, z_m) represents the midpoint of the ellipse, and (S, W, G) represents the direction of the central axis.
4. The method for extracting 3D finger vein features according to claim 1, characterized in that: the "using the convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features; and training the neural network at the same time" in step 4 is specifically as follows: the neural network is composed of four convolutional blocks of continuously stacked 3 x 3 and 1 x 1 convolutional layers; the 3D texture expansion map and the geometric distance feature map are each passed through the network and a 256-dimensional fully connected output layer to obtain the 256-dimensional vein texture features and the 256-dimensional central axis geometric distance features; finally, the loss is calculated through the SoftMax layer, and the network is trained.
5. The method for matching 3D finger vein features according to claim 1, characterized in that: the scores of the vein texture features and the central axis geometric distance features of a template sample and a to-be-matched sample are calculated and subjected to weighted fusion, and the fused matching scores are then judged against a threshold to complete the matching and recognition of the 3D finger veins.
6. The method for matching 3D finger vein features according to claim 5, characterized in that: the "the scores of the vein texture features and the central axis geometric distance features of the template sample and the to-be-matched sample are calculated and subjected to weighted fusion, and the fused matching scores are then judged against a threshold to complete the matching and recognition of the 3D finger veins" is specifically as follows: firstly, in the feature matching stage, steps 1 to 4 are performed on the finger veins of the template sample and of the to-be-matched sample, respectively, to obtain the vein texture features and the 3D finger shape features of both samples; the cosine distance D1 between the vein texture features of the two samples and the cosine distance D2 between their 3D finger shape features are then calculated as follows:

    D1 = (F_v1 · F_v2) / (|F_v1| |F_v2|),    D2 = (F_d1 · F_d2) / (|F_d1| |F_d2|)

where F_v1 and F_v2 are the finger vein features of the template sample and the to-be-matched sample, respectively, and F_d1 and F_d2 are the finger shape features of the template sample and the to-be-matched sample, respectively.
7. The method for matching 3D finger vein features according to claim 6, characterized in that: the cosine distances of the vein texture feature and the finger shape feature are subjected to score-level weighted fusion to obtain the total cosine distance D; wherein, with 10% of the data randomly selected as the verification set, candidate fusion weight values are traversed on the verification set, and the weight value giving the lowest equal error rate after fusion of the matching scores is taken as the optimal weight; the optimal weight is then used to fuse the matching scores into the final matching score:

    S = w·S_t + (1 - w)·S_g

where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and w is the fusion weight; finally, a threshold is determined through experiments; when the total cosine distance D is less than the threshold, it is judged as matching, otherwise it is judged as mismatching.
AU2019368520A 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor Active AU2019368520B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201811235227.9A CN109543535B (en) 2018-10-23 2018-10-23 Three-dimensional finger vein feature extraction method and matching method thereof
CN201811235227.9 2018-10-23
PCT/CN2019/113883 WO2020083407A1 (en) 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor

Publications (2)

Publication Number Publication Date
AU2019368520A1 (en) 2021-05-06
AU2019368520B2 (en) 2022-10-06

Family

ID=65844535

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2019368520A Active AU2019368520B2 (en) 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor

Country Status (3)

Country Link
CN (1) CN109543535B (en)
AU (1) AU2019368520B2 (en)
WO (1) WO2020083407A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543535B (en) * 2018-10-23 2021-12-21 华南理工大学 Three-dimensional finger vein feature extraction method and matching method thereof
CN110363250A (en) * 2019-07-23 2019-10-22 北京隆普智能科技有限公司 A kind of method and its system of 3-D image intelligent Matching
CN110378425B (en) * 2019-07-23 2021-10-22 武汉珞思雅设科技有限公司 Intelligent image comparison method and system
CN110827342B (en) * 2019-10-21 2023-06-02 中国科学院自动化研究所 Three-dimensional human body model reconstruction method, storage device and control device
CN110909778B (en) * 2019-11-12 2023-07-21 北京航空航天大学 Image semantic feature matching method based on geometric consistency
CN111009007B (en) * 2019-11-20 2023-07-14 广州光达创新科技有限公司 Finger multi-feature comprehensive three-dimensional reconstruction method
CN111612831A (en) * 2020-05-22 2020-09-01 创新奇智(北京)科技有限公司 Depth estimation method and device, electronic equipment and storage medium
CN111931758B (en) * 2020-10-19 2021-01-05 北京圣点云信息技术有限公司 Face recognition method and device combining facial veins
CN112101332B (en) * 2020-11-23 2021-02-19 北京圣点云信息技术有限公司 Feature extraction and comparison method and device based on 3D finger veins
CN112560710B (en) * 2020-12-18 2024-03-01 北京曙光易通技术有限公司 Method for constructing finger vein recognition system and finger vein recognition system
CN113012271B (en) * 2021-03-23 2022-05-24 华南理工大学 Finger three-dimensional model texture mapping method based on UV (ultraviolet) mapping
CN112990160B (en) * 2021-05-17 2021-11-09 北京圣点云信息技术有限公司 Facial vein identification method and identification device based on photoacoustic imaging technology
CN113689344B (en) * 2021-06-30 2022-05-27 中国矿业大学 Low-exposure image enhancement method based on feature decoupling learning
CN113780095B (en) * 2021-08-17 2023-12-26 中移(杭州)信息技术有限公司 Training data expansion method, terminal equipment and medium of face recognition model
CN113673477B (en) * 2021-09-02 2024-07-16 青岛奥美克生物信息科技有限公司 Palm vein non-contact three-dimensional modeling method, device and authentication method
CN113705519B (en) * 2021-09-03 2024-05-24 杭州乐盯科技有限公司 Fingerprint identification method based on neural network
CN114821682B (en) * 2022-06-30 2022-09-23 广州脉泽科技有限公司 Multi-sample mixed palm vein identification method based on deep learning algorithm
CN118116040B (en) * 2023-12-06 2024-10-29 珠海易胜智能科技有限公司 Palm vein recognition method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106919941A (en) * 2017-04-26 2017-07-04 华南理工大学 A kind of three-dimensional finger vein identification method and system
CN108009520A (en) * 2017-12-21 2018-05-08 东南大学 A kind of finger vein identification method and system based on convolution variation self-encoding encoder neutral net

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851126B (en) * 2015-04-30 2017-10-20 中国科学院深圳先进技术研究院 Threedimensional model dividing method and device based on generalized cylinder
US10089452B2 (en) * 2016-09-02 2018-10-02 International Business Machines Corporation Three-dimensional fingerprint scanner
CN109543535B (en) * 2018-10-23 2021-12-21 华南理工大学 Three-dimensional finger vein feature extraction method and matching method thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106919941A (en) * 2017-04-26 2017-07-04 华南理工大学 A kind of three-dimensional finger vein identification method and system
CN108009520A (en) * 2017-12-21 2018-05-08 东南大学 A kind of finger vein identification method and system based on convolution variation self-encoding encoder neutral net

Also Published As

Publication number Publication date
CN109543535A (en) 2019-03-29
WO2020083407A1 (en) 2020-04-30
CN109543535B (en) 2021-12-21
AU2019368520A1 (en) 2021-05-06

Similar Documents

Publication Publication Date Title
AU2019368520B2 (en) Three-dimensional finger vein feature extraction method and matching method therefor
US10762366B2 (en) Finger vein identification method and device
Kanhangad et al. Contactless and pose invariant biometric identification using hand surface
EP3091479B1 (en) Method and apparatus for fingerprint identification
CN102722890B (en) Non-rigid heart image grading and registering method based on optical flow field model
CN102043961B (en) Vein feature extraction method and method for carrying out identity authentication by utilizing double finger veins and finger-shape features
Xu et al. 3D face recognition based on twin neural network combining deep map and texture
CN101034434A (en) Identification recognizing method based on binocular iris
CN103310196B (en) The finger vein identification method of area-of-interest and direction element
Pan et al. 3D face recognition from range data
CN109583398B (en) Multi-mode biological recognition method based on hand shape and palm print
CN103984920B (en) Three-dimensional face identification method based on sparse representation and multiple feature points
CN102521575A (en) Iris identification method based on multidirectional Gabor and Adaboost
Maltoni et al. Fingerprint analysis and representation
CN103425970A (en) Human-computer interaction method based on head postures
CN103927742A (en) Global automatic registering and modeling method based on depth images
CN104298995A (en) Three-dimensional face identification device and method based on three-dimensional point cloud
CN108052912A (en) A kind of three-dimensional face image recognition methods based on square Fourier descriptor
CN107025449A (en) A kind of inclination image linear feature matching process of unchanged view angle regional area constraint
Hernández-Palancar et al. Using a triangular matching approach for latent fingerprint and palmprint identification
CN117274339A (en) Point cloud registration method based on improved ISS-3DSC characteristics combined with ICP
CN109598261B (en) Three-dimensional face recognition method based on region segmentation
Liu et al. Layer segmentation of OCT fingerprints with an adaptive Gaussian prior guided transformer
Yang et al. A novel system and experimental study for 3D finger multibiometrics
Yuan et al. A review of recent advances in ear recognition

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)