AU2019368520A1 - Three-dimensional finger vein feature extraction method and matching method therefor - Google Patents

Three-dimensional finger vein feature extraction method and matching method therefor

Info

Publication number
AU2019368520A1
Authority
AU
Australia
Prior art keywords
finger
matching
vein
features
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
AU2019368520A
Other versions
AU2019368520B2 (en)
Inventor
Qichen GONG
Wenxiong KANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Publication of AU2019368520A1 publication Critical patent/AU2019368520A1/en
Application granted granted Critical
Publication of AU2019368520B2 publication Critical patent/AU2019368520B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction
    • G06V40/1365 Matching; Classification

Abstract

A three-dimensional finger vein feature extraction method, comprising: step one: a two-dimensional finger vein image is acquired; step two: the two-dimensional finger vein image is mapped onto a three-dimensional model to construct a three-dimensional finger model; step three: the three-dimensional finger model is normalized; and step four: feature extraction is performed on the normalized three-dimensional finger model. A three-dimensional finger vein feature matching method. In said matching method, vein pattern feature and central axis geometric distance feature scores are calculated for a template sample and a sample to be matched, weighted fusion is performed, and determination is performed on fused matching scores by means of a threshold, thereby completing three-dimensional finger vein matching and identification. By means of the present three-dimensional finger vein feature extraction method and the matching method therefor, more vein pattern features can be extracted, resulting in better matching and identification results, and the problem of poor matching and identification performance caused by changes in finger position can be effectively solved, thereby improving the accuracy and effectiveness of vein matching and recognition.

Description

A METHOD FOR EXTRACTING AND MATCHING THREE-DIMENSIONAL FINGER VEIN FEATURES
FIELD OF THE INVENTION
The present invention relates to the technical field of vein recognition, and in particular to a method for extracting and matching three-dimensional (3D) finger vein features.
BACKGROUND OF THE INVENTION
Biometric recognition technology uses one or more human physiological characteristics (such as fingerprint, face, iris and vein) or behavioral characteristics (such as gait and signature) for identity authentication. Among biometric technologies, finger vein recognition, which verifies individual identity using the texture of the blood vessels under the finger epidermis, is beginning to occupy an important position in the field of identity authentication owing to its unique advantages: its high security and stability give it distinctive application prospects. At present, finger vein recognition systems perform recognition based on two-dimensional (2D) vein images. The recognition performance of these systems is greatly reduced when the fingers are not placed properly, especially when the fingers are rotated axially. However, there are relatively few studies on finger vein recognition with the fingers in different postures. The few existing studies include: using an ellipse model to expand the acquired 2D finger vein images so as to standardize them, and then intercepting the effective region for matching; using a circle model to expand the 2D finger vein images; or using a 3D model (which is still, crucially, an ellipse model) to standardize the finger vein images in six different finger postures and then matching the standardized images. Whichever physical model is used, these methods have improved, to a certain extent, the situation that vein images of the same finger taken in different postures differ greatly.
However, the following problems remain: on the one hand, the corresponding texture regions become smaller, which is not conducive to matching; on the other hand, the quality of the vein image in edge regions is generally poor due to imaging factors, which also affects the recognition results. Another approach is 3D imaging based on multi-view geometry. However, with this method it is often difficult, or even impossible, to find matching feature points during 3D reconstruction, so the depth information of all the vein textures is difficult to calculate. In addition, the vein texture acquired by this method is only unilateral, so the problem of limited feature information remains.
CONTENTS OF THE INVENTION
An object of the present invention is to provide a method for extracting and matching 3D finger vein features, so as to overcome the shortcomings and deficiencies of the prior art. This method can obtain more vein texture features, achieve a better matching recognition effect, and effectively solve the problem of poor matching recognition performance caused by finger posture changes, thereby improving the accuracy and effectiveness of vein matching recognition. In order to achieve the above object, the present invention adopts the following technical solution. A method for extracting 3D finger vein features is provided, characterized in that it comprises the following steps: step 1: using three cameras placed at equal angular intervals to capture finger veins from three directions to obtain 2D finger vein images; step 2: mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model; step 3: normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger; and step 4: performing feature extraction on the normalized 3D finger model: (1) processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map; and (2) using a convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features, the neural network being trained at the same time.
The "mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model" in step 2 means specifically as follows: with the finger profile approximately regarded as an ellipse, the 3D finger is divided equidistantly into a series of cross sections and the contour of each section is calculated, so that the finger is approximately modeled by multiple ellipses with different radii and positions; all the contours are then connected in series along the central axis of the finger to obtain an approximate 3D finger model.
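As an illustrative sketch rather than the patent's actual implementation, the stacking of per-section ellipse contours into a point-cloud finger model, and the least-squares regression of the section centers to a central axis, can be written in a few lines of numpy; the ellipse parameterization (center, semi-axes, rotation angle, height) and the toy finger data are assumptions made for this example:

```python
import numpy as np

def ellipse_contour(cx, cy, a, b, angle, z, n=72):
    """Sample n points on an ellipse with center (cx, cy), semi-axes a and b,
    rotation `angle` (radians), lying in the plane at height z."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = a * np.cos(t), b * np.sin(t)
    c, s = np.cos(angle), np.sin(angle)
    return np.stack([cx + c * x - s * y, cy + s * x + c * y, np.full(n, z)], axis=1)

def build_finger_model(ellipses):
    """Connect the per-section contours in series along the axis to form the
    approximate 3D finger model (an (M*n, 3) point cloud)."""
    return np.vstack([ellipse_contour(*e) for e in ellipses])

def normalize_model(points, centers):
    """Regress the section centers to a central axis by least squares (via SVD)
    and shift the model so the axis passes through the origin."""
    mean = centers.mean(axis=0)
    # Direction (S, W, G) of the central axis: dominant right singular vector.
    _, _, vt = np.linalg.svd(centers - mean)
    return points - mean, vt[0]

# Toy finger: 20 near-circular cross sections of slowly varying radius.
zs = np.linspace(0, 19, 20)
ellipses = [(5.0, 3.0, 8 - 0.05 * z, 7 - 0.05 * z, 0.0, z) for z in zs]
model = build_finger_model(ellipses)
centers = np.array([(5.0, 3.0, z) for z in zs])
normalized, axis_dir = normalize_model(model, centers)
```

For the toy data the recovered axis direction is (up to sign) the z axis, and the shifted model is centered on the origin, which is the effect equation (1) of step 3 describes.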
The "normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger" in step 3 means specifically as follows: The center of each approximately elliptical cross section obtained in the 3D reconstruction is regressed to a central axis by using the least square method, and then the coordinates are normalized by the following equation (1):
  (x - x_m)/S = (y - y_m)/W = (z - z_m)/G   (1)
where (x_m, y_m, z_m) represents the midpoint of the ellipse and (S, W, G) represents the direction of the central axis; the above normalization makes the axis of the finger coincide with the central axis of the 3D model and the center point of the 3D model coincide with the origin, thereby eliminating the offset caused by horizontal and vertical movement.
The "processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map" in step 4 means specifically as follows: Firstly, a sector cylinder region is defined as SC-Block(i), where the subscript i ranges from 1 to N; the 3D cylinder is rotatively cut along its axis to obtain 360 sector cylinder regions; the central angle range of the bottom surface of the i-th sector cylinder region is set to ((i-1)·Δα, i·Δα]; besides, the height z of the cylinder is set to range over [z_min, z_max], where z_min and z_max represent the minimum and maximum heights, respectively; N = 360/Δα, where N represents the width of the feature map and Δα represents the angle sampling interval. Then, the 3D point set of the sector cylinder region set is mapped to the 3D texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM through the following functions:

  I_F3DTM.col(i) = F_t(SC-Block(i))   (1)
  I_F3DGM.col(i) = F_g(SC-Block(i))   (2)

where I_F3DTM and I_F3DGM respectively represent the 3D texture expansion map and the geometric distance feature map, .col(i) represents the i-th column of the feature map, and the functions F_t and F_g respectively divide the sector cylinder region SC-Block(i) at fixed intervals along the Z axis into M blocks; each pixel of I_F3DTM is obtained by calculating the average pixel value in the corresponding block, while each pixel of I_F3DGM is obtained by calculating the average straight-line distance from the point set in the corresponding block to the central axis.
The "using the convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features; and training the neural network at the same time" in step 4 means specifically as follows: The neural network is composed of four convolutional blocks, each a continuous stack of 3 x 3 and 1 x 1 convolutional layers; such a design can effectively reduce the number of parameters while ensuring the recognition performance. The 3D texture expansion map and the geometric distance feature map are each passed through this convolutional structure and a 256-dimensional fully connected output layer to obtain the 256-dimensional vein texture features and the 256-dimensional central axis geometric distance features; finally, the loss is calculated through the SoftMax layer and the network is trained.
The method for matching 3D finger vein features is as follows: The scores of the vein texture features and the central axis geometric distance features of a template sample and a to-be-matched sample are calculated and fused with weights, and the fused matching score is then judged against a threshold to complete the matching recognition of the 3D finger veins.
The method is specifically as follows: Firstly, in the feature matching stage, steps 1 to 4 are performed successively on the finger veins of the template sample and the to-be-matched sample, respectively, to obtain the vein texture features and 3D finger shape features (central axis geometric distance features) of the template sample and the to-be-matched sample; the cosine distance D1 of the vein texture features of the two samples and the cosine distance D2 of their 3D finger shape features are then calculated, respectively, as follows:

  D1 = (F_v1 · F_v2) / (||F_v1|| ||F_v2||),  D2 = (F_d1 · F_d2) / (||F_d1|| ||F_d2||)

where F_v1 and F_v2 are the finger vein features of the template sample and the to-be-matched sample, respectively, and F_d1 and F_d2 are the finger shape features of the template sample and the to-be-matched sample, respectively. Then, the cosine distances (matching scores) of the vein texture feature and the finger shape feature are subjected to score-level weighted fusion; with 10% of the data randomly selected as the verification set, the fusion weight values are traversed on the verification set, and the weight value that yields the lowest equal error rate after fusion of the matching scores is taken as the optimal weight; the optimal weight is then used to perform weighted fusion on the matching results to get the final matching score:

  S = w·S_t + (1 - w)·S_g

where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and w is the fusion weight. Finally, a threshold is determined through experiments; when the fused matching score S is less than the threshold, it is judged as matching, otherwise it is judged as mismatching. Compared with the prior art, the present invention has the following advantages and beneficial effects: the method for extracting and matching 3D finger vein features of the present invention can obtain more vein texture features and achieve better matching recognition effects; in addition, it can effectively solve the problem of poor matching recognition performance caused by finger posture changes, thereby improving the accuracy and effectiveness of vein matching recognition.
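The weight traversal on the verification set described above can be sketched in numpy; the synthetic genuine/impostor score samples stand in for a real verification set, and the threshold sweep is a simple approximation of the equal error rate, both assumptions made for illustration:

```python
import numpy as np

def eer(genuine, impostor):
    """Approximate equal error rate by sweeping a threshold over all scores.
    Scores are similarities: genuine pairs should score higher."""
    best = 1.0
    for th in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < th)        # genuine pairs rejected
        far = np.mean(impostor >= th)      # impostor pairs accepted
        best = min(best, max(far, frr))    # point where FAR and FRR meet
    return best

def best_fusion_weight(st_g, sg_g, st_i, sg_i, steps=101):
    """Traverse fusion weights w on the verification set and keep the one
    giving the lowest EER of the fused score S = w*S_t + (1-w)*S_g."""
    best_w, best_e = 0.0, 1.0
    for w in np.linspace(0.0, 1.0, steps):
        e = eer(w * st_g + (1 - w) * sg_g, w * st_i + (1 - w) * sg_i)
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e

rng = np.random.default_rng(0)
# Synthetic verification set: texture scores separate the classes well,
# shape scores less so.
st_g, st_i = rng.normal(0.8, 0.1, 200), rng.normal(0.3, 0.1, 200)
sg_g, sg_i = rng.normal(0.6, 0.2, 200), rng.normal(0.4, 0.2, 200)
w, e = best_fusion_weight(st_g, sg_g, st_i, sg_i)
```

Because the sweep includes w = 0 and w = 1, the fused EER can never be worse than either modality alone on the verification set.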
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic flowchart of the method for extracting and matching 3D finger vein features of the present invention; Fig. 2 is a schematic diagram of the construction of a 3D finger model with the ellipse model of the present invention; Fig. 3 is a schematic diagram of the 360 sector cylinder regions obtained by rotatively cutting along the axis of the 3D cylinder according to the present invention; Fig. 4 is a 3D texture expansion map of the present invention; Fig. 5 is a geometric distance feature map of the present invention; and Fig. 6 is a schematic diagram of the convolutional neural network structure, with a 256-dimensional fully connected output layer, applied to the 3D texture expansion map and the geometric distance feature map of the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The present invention will be further described below in detail with reference to the drawings and specific embodiments.
Example
As shown in Figs. 1-6, a method for extracting 3D finger vein features of the present invention comprises the following steps: step 1: using three cameras placed at equal angular intervals to capture finger veins from three directions to obtain 2D finger vein images; step 2: mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model; step 3: normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger; and step 4: performing feature extraction on the normalized 3D finger model: (1) processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map; and (2) using a convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features, the neural network being trained at the same time. Among them, the "mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model" in step 2 means specifically as follows: with the finger profile approximately regarded as an ellipse, the 3D finger is divided equidistantly into several sections and the contour of each section is calculated, so that the finger is approximately modeled by multiple ellipses with different radii and positions; all the contours are then connected in series along the central axis of the finger to obtain an approximate 3D finger model. The contour of each section is calculated as follows:
1) Establishing the coordinate system
The x-y coordinate system (2D-CS) is established based on the projection centers C1, C2, C3 of the three cameras, as shown in Fig. 2.
2) Determining the equations of the ellipse and straight lines
The equation of the ellipse is supposed as follows:

  a·x^2 + b·y^2 + 2c·xy + d·x + e·y + f = 0

The projection center of each camera is denoted as C_i(x_i, y_i), and thus the equations of the straight lines C_iU_i (denoted L_ui) and C_iB_i (denoted L_bi) can be obtained; here, we only discuss the case where the straight line has a slope:

  L_ui: y = k_ui·x + b_ui
  L_bi: y = k_bi·x + b_bi

where i = 1, 2, 3; k_ui and b_ui represent the slope and intercept of the straight line L_ui, respectively, and k_bi and b_bi represent the slope and intercept of the straight line L_bi, respectively.
3) Determining constraints
Parallel lines of these constraint straight lines are drawn, as shown in Fig. 2, and made tangent to the ellipse, assuming to have the following equations:
  L'_ui: y = k_ui·x + b_ui + ξ_ui
  L'_bi: y = k_bi·x + b_bi + ξ_bi

According to the condition that L'_ui and L'_bi are tangent to the ellipse, the following constraint equations can be obtained:

  B_ui^2 - 4·A_ui·C_ui = 0,  B_bi^2 - 4·A_bi·C_bi = 0,  i = 1, 2, 3
where:
  A_ui = a + b·k_ui^2 + 2c·k_ui
  A_bi = a + b·k_bi^2 + 2c·k_bi
  B_ui = (2·k_ui·b + 2c)·(b_ui + ξ_ui) + e·k_ui + d
  B_bi = (2·k_bi·b + 2c)·(b_bi + ξ_bi) + e·k_bi + d
  C_ui = b·(b_ui + ξ_ui)^2 + e·(b_ui + ξ_ui) + f
  C_bi = b·(b_bi + ξ_bi)^2 + e·(b_bi + ξ_bi) + f
4) Objective optimization function
As mentioned earlier, the ellipse must be very close to the constraint lines, so our goal is to minimize the distance between the ellipse and all the straight lines:

  min J = Σ_{i=1}^{3} ( ξ_ui^2/(1 + k_ui^2) + ξ_bi^2/(1 + k_bi^2) )
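For a conic written as a·x^2 + b·y^2 + 2c·xy + d·x + e·y + f = 0, the tangency condition B^2 - 4AC = 0 is a quadratic in the intercept of a line of fixed slope, so the offsets ξ and the objective J can be evaluated in closed form for any candidate ellipse. The following numpy sketch illustrates this; the conic form, the choice of the nearer tangent, and the unit-circle test case are assumptions made for the example:

```python
import numpy as np

def tangent_intercepts(conic, k):
    """Intercepts m of the two lines y = k*x + m tangent to the conic
    a*x^2 + b*y^2 + 2c*x*y + d*x + e*y + f = 0, obtained by substituting
    y = k*x + m and setting the discriminant B^2 - 4*A*C to zero."""
    a, b, c, d, e, f = conic
    A = a + b * k**2 + 2 * c * k
    p = (2 * b * k + 2 * c)**2 - 4 * A * b            # m^2 coefficient
    q = 2 * (2 * b * k + 2 * c) * (d + e * k) - 4 * A * e
    r = (d + e * k)**2 - 4 * A * f
    return np.roots([p, q, r])

def objective_J(conic, lines):
    """Objective J: sum over observed lines (k, b) of xi^2 / (1 + k^2),
    taking for xi the offset to the nearer true tangent of the conic."""
    J = 0.0
    for k, b0 in lines:
        xi = tangent_intercepts(conic, k) - b0        # the two offsets xi
        J += np.min(np.abs(xi))**2 / (1 + k**2)
    return J

# Unit circle x^2 + y^2 - 1 = 0; the line y = 0 is distance 1 from
# both horizontal tangents y = 1 and y = -1.
circle = (1.0, 1.0, 0.0, 0.0, 0.0, -1.0)
print(objective_J(circle, [(0.0, 0.0)]))  # -> 1.0
```

In the patent's setting the six camera-edge lines are fixed and the conic coefficients are the optimization variables; a routine like this would serve as the cost function inside the gradient-based solver described next.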
5) Solving algorithm
① Solving a single sectional ellipse
We need to convert the coordinates of the edge points on the image to 2D-CS according to the following conversion relationship:

  [x_i]   [ sin θ_i   cos θ_i ]   [ y_0 - y_mi ]
  [y_i] = [ -cos θ_i  sin θ_i ] · [ L_0        ]

where θ_i represents the angle between the viewing direction of the i-th camera and the positive direction of the x axis, i = 1, 2, 3, and y_mi represents the y value of the optical center of the i-th camera, which are
related to the internal parameters of the camera. For each ellipse, the gradient descent method is used to solve the objective optimization function under the constraints shown in 3). The main problem is how to set the initial iteration point, because a proper initial iteration point plays an extremely important role both in accelerating the optimization and in finding the global optimal solution. Through extensive experiments, the method of setting the initial iteration point is determined as follows: an ellipse has five independent variables, namely the horizontal and vertical coordinates of its center, the length of its semi-major axis, its eccentricity and its rotation angle. Our problem can be transformed into calculating the approximate inscribed ellipse of a hexagon. According to Brianchon's theorem, a hexagon ABCDEF has an inscribed ellipse if and only if its main diagonals AD, BE and CF intersect at the same point:

  det[(A × D), (B × E), (C × F)] = 0

where the vertices are expressed in homogeneous coordinates and A × D denotes the line through A and D.
In the calculation process of our model, these diagonals generally do not intersect at exactly one point. We set the initial center point C_0 of the ellipse as the center of gravity of the triangle formed by their pairwise intersections (each homogeneous intersection point being normalized first), and the length of the initial long axis as the minimum distance from the initial center point to the six vertices of the hexagon. The calculation formulas are as follows:

  C_0 = ((A×D)×(B×E) + (A×D)×(C×F) + (B×E)×(C×F)) / 3
  R_0 = min{ |C_0A|, |C_0B|, |C_0C|, |C_0D|, |C_0E|, |C_0F| }
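The diagonal lines and their pairwise intersections are conveniently computed with homogeneous-coordinate cross products: A × D gives the line through A and D, and the cross product of two lines gives their intersection. A brief numpy sketch follows; the regular-hexagon test input (whose diagonals all meet at the center) is an assumption made for illustration:

```python
import numpy as np

def hom(p):
    """Homogeneous coordinates of a 2D point."""
    return np.array([p[0], p[1], 1.0])

def initial_center(hexagon):
    """Initial ellipse center C0: centroid of the pairwise intersections of
    the main diagonals AD, BE, CF; initial radius R0: minimum distance
    from C0 to the six hexagon vertices."""
    A, B, C, D, E, F = [hom(p) for p in hexagon]
    ad, be, cf = np.cross(A, D), np.cross(B, E), np.cross(C, F)  # diagonal lines
    pts = []
    for l1, l2 in [(ad, be), (ad, cf), (be, cf)]:
        p = np.cross(l1, l2)            # intersection of the two lines
        pts.append(p[:2] / p[2])        # dehomogenize (assumes non-parallel)
    c0 = np.mean(pts, axis=0)
    r0 = min(np.linalg.norm(np.asarray(v) - c0) for v in hexagon)
    return c0, r0

# Regular unit hexagon: all three main diagonals meet at the center (0, 0).
hexagon = [(np.cos(t), np.sin(t)) for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
c0, r0 = initial_center(hexagon)
```

For this symmetric input C_0 recovers the exact center and R_0 equals the circumradius; for real edge-point hexagons the three intersections form a small triangle and C_0 is its centroid, as the text describes.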
In addition, we set the initial eccentricity and rotation angle as fixed values, wherein the eccentricity is set to 1.4 and the rotation angle is set to α = 0; these two constant values are determined through experiments.
② Reconstruction of the entire 3D finger model
The more sections are calculated, the more accurate the 3D finger model that can be obtained. However, if the ellipse approximation method is used to calculate more ellipses, more time will be consumed, and in practical application it is not desirable to spend too much time on the 3D finger model reconstruction. Through observation, it is found that the finger surface changes relatively gently in the axial direction. We can therefore first select a sparse set of edge points in the axial direction to reconstruct part of the ellipses, and then use interpolation to expand the number of approximate ellipses. However, this presents another problem: in the selected set of edge points, if there is a relatively large error in the position of a detected edge due to poor image quality or high noise, the reconstructed finger model will have relatively large defects in some places. In order to reduce this impact and to balance reconstruction accuracy against time cost, we propose a more robust algorithm to construct the 3D finger model: in each corrected image, the region with the abscissa in the range of 91-490 is set as an effective region, which is divided into N sub-regions along the horizontal axis. A group of edge points in each sub-region is selected to reconstruct an ellipse. After the ellipses of all the sub-regions are obtained, the interpolation algorithm is used to expand the ellipse data. Finally, the ellipse in the 2D plane is transformed into the 3D space.
  z = (y_0 - K(2,3)) / K(2,2)

  a·x^2 + b·y^2 + 2c·xy + d·x + e·y + f = 0

We set the Z coordinate value of an ellipse to this same value without changing the equation of the 2D ellipse, where K is the internal parameter matrix of the corrected camera. The "normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger" in step 3 means specifically as follows: The center of each approximately elliptical cross section obtained in the 3D reconstruction is regressed to a central axis by using the least square method, and then the coordinates are normalized by the following equation (1):
  (x - x_m)/S = (y - y_m)/W = (z - z_m)/G   (1)

where (x_m, y_m, z_m) represents the midpoint of the ellipse and (S, W, G) represents the direction of the central axis; the above normalization makes the axis of the finger coincide with the central axis of the 3D model and the center point of the 3D model coincide with the origin, thereby eliminating the offset caused by horizontal and vertical movement. The "processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map" in step 4 means specifically as follows:
Firstly, the sector cylinder region is defined as SC-Block(i), where the subscript i ranges from 1 to N; the 3D cylinder is rotatively cut along its axis to obtain 360 sector cylinder regions, as shown in Fig. 3; the central angle range of the bottom surface of the i-th sector cylinder region is set to ((i-1)·Δα, i·Δα]; besides, the height z of the cylinder is set to range over [z_min, z_max], where z_min and z_max represent the minimum and maximum heights, respectively; N = 360/Δα, where N represents the width of the feature map and Δα represents the angle sampling interval. Then, the 3D point set of the sector cylinder region set is mapped to the 3D texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM through the following functions:

  I_F3DTM.col(i) = F_t(SC-Block(i))   (1)
  I_F3DGM.col(i) = F_g(SC-Block(i))   (2)

where I_F3DTM and I_F3DGM respectively represent the 3D texture expansion map and the geometric distance feature map, .col(i) represents the i-th column of the feature map, and the functions F_t and F_g respectively divide the sector cylinder region SC-Block(i) at fixed intervals along the Z axis into M blocks; each pixel of I_F3DTM is obtained by calculating the average pixel value in the corresponding block, while each pixel of I_F3DGM is obtained by calculating the average straight-line distance from the point set in the corresponding block to the central axis. The example sets Δα = 1 and M = 360; Figs. 4 and 5 show examples of the calculated feature maps.
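The maps F_t and F_g can be read as angle/height binning of the normalized point cloud about the central axis. The following numpy sketch illustrates that reading; the binning granularity (n_cols, m_rows), the intensity handling, and the toy cylinder data are assumptions made for this example:

```python
import numpy as np

def expansion_maps(points, intensity, n_cols=360, m_rows=36, z_range=None):
    """Map a normalized 3D vein point set to the 3D texture expansion map
    (mean intensity per sector-cylinder block) and the geometric distance
    feature map (mean distance from each block's points to the central axis).
    Column i covers the angle range ((i-1)*d_alpha, i*d_alpha]."""
    x, y, z = points.T
    angle = np.mod(np.arctan2(y, x), 2 * np.pi)   # angle about the z axis
    radius = np.hypot(x, y)                       # distance to central axis
    zmin, zmax = z_range if z_range else (z.min(), z.max())
    col = np.minimum((angle / (2 * np.pi) * n_cols).astype(int), n_cols - 1)
    row = np.minimum(((z - zmin) / (zmax - zmin + 1e-12) * m_rows).astype(int),
                     m_rows - 1)
    tex = np.zeros((m_rows, n_cols))
    geo = np.zeros((m_rows, n_cols))
    for r in range(m_rows):
        for c in range(n_cols):
            mask = (row == r) & (col == c)
            if mask.any():
                tex[r, c] = intensity[mask].mean()   # I_F3DTM pixel
                geo[r, c] = radius[mask].mean()      # I_F3DGM pixel
    return tex, geo

# Toy model: a cylinder of radius 2 with uniform intensity 0.5.
t = np.linspace(0, 2 * np.pi, 720, endpoint=False)
z = np.repeat(np.linspace(0, 1, 10), 720)
pts = np.stack([2 * np.cos(np.tile(t, 10)), 2 * np.sin(np.tile(t, 10)), z], axis=1)
tex, geo = expansion_maps(pts, np.full(len(pts), 0.5), n_cols=36, m_rows=10)
```

For this uniform cylinder every texture pixel is 0.5 and every geometric-distance pixel is 2.0; on a real finger model the geometric map instead encodes the cross-section shape along the axis.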
The "using the convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features; and training the neural network at the same time" in step 4 means specifically as follows: As shown in Fig. 6, the neural network is composed of four convolutional blocks, each a continuous stack of 3 x 3 and 1 x 1 convolutional layers; such a design can effectively reduce the number of parameters while ensuring the recognition performance. The 3D texture expansion map and the geometric distance feature map are each passed through this convolutional structure and a 256-dimensional fully connected output layer to obtain the vein texture features and the central axis geometric distance features; finally, the loss is calculated through the SoftMax layer and the network is trained. The method for matching 3D finger vein features according to the present invention is as follows: The scores of the vein texture features and the central axis geometric distance features of the template sample and the to-be-matched sample are calculated and fused with weights, and the fused matching score is then judged against a threshold to complete the matching recognition of the 3D finger veins.
Specifically, firstly, in the feature matching stage, steps 1 to 4 are performed successively on the finger veins of the template sample and the to-be-matched sample, respectively, to obtain the vein texture features and 3D finger shape features of the template sample and the to-be-matched sample; the cosine distance D1 of the vein texture features of the two samples and the cosine distance D2 of their 3D finger shape features are then calculated, respectively, as follows:

  D1 = (F_v1 · F_v2) / (||F_v1|| ||F_v2||),  D2 = (F_d1 · F_d2) / (||F_d1|| ||F_d2||)

where F_v1 and F_v2 are the finger vein features of the template sample and the to-be-matched sample, respectively, and F_d1 and F_d2 are the finger shape features of the template sample and the to-be-matched sample, respectively. Then, the cosine distances (matching scores) of the vein texture feature and the finger shape feature are subjected to score-level weighted fusion; with 10% of the data randomly selected as the verification set, the fusion weight values are traversed on the verification set, and the weight value that yields the lowest equal error rate after fusion of the matching scores is taken as the optimal weight; the optimal weight is then used to perform weighted fusion on the matching results to get the final matching score:

  S = w·S_t + (1 - w)·S_g

where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and w is the fusion weight. Finally, a threshold is determined through experiments; when the fused matching score S is less than the threshold, it is judged as matching, otherwise it is judged as mismatching. The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other alterations, modifications, replacements, combinations and simplifications shall be regarded as equivalent substitutions and are included in the scope of protection of the present invention.

Claims (8)

1. A method for extracting three-dimensional (3D) finger vein features, characterized in that the method comprises the following steps: step 1: using three cameras placed at equal angular intervals to capture finger veins from three directions to obtain 2D finger vein images; step 2: mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model; step 3: normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger; and step 4: performing feature extraction on the normalized 3D finger model: (1) processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map; and (2) using a convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features, the neural network being trained at the same time.
2. The method for extracting 3D finger vein features according to claim 1, characterized in that: the "mapping the 2D finger vein images to a 3D model by calculating the parameters of the three cameras, so as to construct a 3D finger model" in step 2 means specifically as follows: with the finger profile approximately regarded as an ellipse, the 3D finger is divided equidistantly into several sections and the contour of each section is calculated, so that the finger is approximately modeled by multiple ellipses with different radii and positions; all the contours are then connected in series along the central axis of the finger to obtain an approximate 3D finger model.
3. The method for extracting 3D finger vein features according to claim 1, characterized in that: the "normalizing the 3D finger model to eliminate the influence of horizontal and vertical offset of the finger" in step 3 means specifically as follows: the center of each approximately elliptical cross section obtained in the 3D reconstruction is regressed to a central axis by using the least square method, and then the coordinates are normalized by the following equation (1):

  (x - x_m)/S = (y - y_m)/W = (z - z_m)/G   (1)

where (x_m, y_m, z_m) represents the midpoint of the ellipse, and (S, W, G) represents the direction of the central axis.
4. The method for extracting 3D finger vein features according to claim 1, characterized in that: the "processing the normalized 3D finger model to generate a 3D texture expansion map and a geometric distance feature map" in step 4 means specifically as follows: firstly, the sector cylinder region is defined as SC_Block(i), where the subscript i ranges from 1 to N; the 3D cylinder is cut rotationally along its axis to obtain 360 sector cylinder regions; the central angle range of the bottom surface of the i-th sector cylinder region is set to ((i−1)·Δα, i·Δα]; besides, the height Z of the cylinder is set to range over [z_min, z_max], where z_min and z_max represent the minimum and maximum heights, respectively; N = 360/Δα, where N represents the width of the feature map and Δα represents the angle sampling interval; then, the 3D point set of the sector cylinder region set is mapped to the 3D texture expansion map F_3DTM and the geometric distance feature map F_3DGM through the following functions:

F_3DTM.col(i) = F_t(SC_Block(i))   (1)

F_3DGM.col(i) = F_g(SC_Block(i))   (2)

where F_3DTM and F_3DGM respectively represent the 3D texture expansion map and the geometric distance feature map, .col(i) represents the i-th column of the feature map, and the functions F_t and F_g respectively divide the sector cylinder region SC_Block(i) into M blocks at fixed intervals along the Z axis; each pixel of F_3DTM is obtained by calculating the average pixel value in the corresponding block, while each pixel of F_3DGM is obtained by calculating the average straight-line distance from the point set in the corresponding block to the central axis.
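The sector-cylinder binning of claim 4 can be sketched as a 2-D histogram over angle (column) and height (row), averaging intensities for F_3DTM and radial distances for F_3DGM. The input layout (flat per-point arrays of angle, height, intensity, and distance-to-axis) is an assumption for illustration.

```python
import numpy as np

def unroll_maps(angles, heights, intensities, radial_dists,
                n_cols=360, n_rows=64):
    """Average surface points per sector-cylinder block.

    angles in [0, 2*pi) index the column (sector i); heights in
    [z_min, z_max] index the row (one of M blocks along the Z axis).
    Returns (texture_map, geometric_map) of shape (n_rows, n_cols).
    """
    col = np.clip((angles / (2 * np.pi) * n_cols).astype(int), 0, n_cols - 1)
    z0, z1 = heights.min(), heights.max()
    row = np.clip(((heights - z0) / (z1 - z0 + 1e-9) * n_rows).astype(int),
                  0, n_rows - 1)
    tex = np.zeros((n_rows, n_cols))
    geo = np.zeros((n_rows, n_cols))
    cnt = np.zeros((n_rows, n_cols))
    np.add.at(tex, (row, col), intensities)   # sum pixel values per block
    np.add.at(geo, (row, col), radial_dists)  # sum point-to-axis distances
    np.add.at(cnt, (row, col), 1)
    cnt = np.maximum(cnt, 1)                  # avoid division by zero
    return tex / cnt, geo / cnt               # block-wise averages
```

With Δα = 1°, `n_cols` is 360 as in the claim; `n_rows` plays the role of M.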
5. The method for extracting 3D finger vein features according to claim 1, characterized in that: the "using the convolutional neural network to respectively extract the features of the 3D texture expansion map and the geometric distance feature map to obtain the vein texture features and the central axis geometric distance features; and training the neural network at the same time" in step 4 means specifically as follows: the neural network is composed of four convolutional blocks, each formed by continuously stacked 3 × 3 and 1 × 1 convolutional layers; the 3D texture expansion map and the geometric distance feature map are each passed through this convolutional structure and then through a 256-dimensional fully connected layer, yielding the 256-dimensional vein texture features and the 256-dimensional central axis geometric distance features, respectively; finally, the loss is calculated through the SoftMax layer, and the network is trained.
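One block of the network described in claim 5 (a 3 × 3 convolution followed by a 1 × 1 convolution) can be sketched with a naive NumPy forward pass. This only illustrates the block structure and tensor shapes; the patent's actual layer counts per block, activations, and training setup are not fully specified, so the ReLU and channel widths here are assumptions.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid-mode 2-D convolution with ReLU.

    x: (H, W, C_in) input; w: (k, k, C_in, C_out) kernel.
    """
    k = w.shape[0]
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.tensordot(x[i:i + k, j:j + k], w,
                                     axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)  # ReLU (assumed activation)

def conv_block(x, w3, w1):
    """One sketched block: a 3x3 conv stacked with a 1x1 conv."""
    return conv2d(conv2d(x, w3), w1)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16, 1))            # e.g. a feature-map patch
y = conv_block(x,
               rng.standard_normal((3, 3, 1, 8)) * 0.1,
               rng.standard_normal((1, 1, 8, 8)) * 0.1)
print(y.shape)  # (14, 14, 8)
```

Four such blocks followed by a 256-unit fully connected layer would produce the 256-dimensional feature vectors the claim describes.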
6. The method for matching 3D finger vein features according to claim 1, characterized in that: the scores of the vein texture features and the central axis geometric distance features of a template sample and a to-be-matched sample are calculated and fused with weights, and the fused matching score is then judged against a threshold to complete the matching recognition of the 3D finger veins.
7. The method for matching 3D finger vein features according to claim 6, characterized in that: the "the scores of the vein texture features and the central axis geometric distance features of the template sample and the to-be-matched sample are calculated and fused with weights, and the fused matching score is then judged against a threshold to complete the matching recognition of the 3D finger veins" means specifically as follows: firstly, in the feature matching stage, steps 1 to 4 are performed successively on the finger veins of the template sample and the to-be-matched sample, respectively, to obtain the vein texture and 3D finger shape features of the two samples; the cosine distance D_1 of the vein texture features and the cosine distance D_2 of the 3D finger shape features of the template sample and the to-be-matched sample are calculated, respectively; the formulas of the cosine distance are as follows:

D_1 = (F_v1 · F_v2) / (‖F_v1‖ · ‖F_v2‖), D_2 = (F_d1 · F_d2) / (‖F_d1‖ · ‖F_d2‖)

where F_v1 and F_v2 are the vein texture features of the template sample and the to-be-matched sample, respectively, and F_d1 and F_d2 are the finger shape features of the template sample and the to-be-matched sample, respectively.
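The cosine scoring of claim 7 is a standard normalized dot product; a minimal sketch:

```python
import numpy as np

def cosine_score(f1, f2):
    """Cosine score between two feature vectors (claim 7's D_1 / D_2)."""
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))

print(round(cosine_score([1.0, 0.0], [1.0, 0.0]), 3))  # 1.0 (identical)
print(round(cosine_score([1.0, 0.0], [0.0, 1.0]), 3))  # 0.0 (orthogonal)
```

The same function is applied once to the 256-dimensional texture features and once to the shape features.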
8. The method for matching 3D finger vein features according to claim 7, characterized in that: the cosine distances of the vein texture feature and the finger shape feature are subjected to score-level weighted fusion to obtain the total cosine distance D; wherein, with 10% of the data randomly selected as a verification set, the fusion weight values are traversed on the verification set, and the weight value yielding the lowest equal error rate after fusion of the matching scores is taken as the optimal weight; the optimal weight is then used to perform weighted fusion on the matching results to get the final matching result:

S = W · S_t + (1 − W) · S_g

where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and W is the fusion weight; finally, a threshold is determined through experiments; when the total cosine distance D is less than the threshold, it is judged as matching, otherwise it is judged as mismatching.
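The score-level fusion and threshold decision of claim 8 can be sketched as below. Following the claim's convention, the fused score is treated as a distance, with a value below the threshold judged a match; the numeric weight and threshold here are illustrative, not the patent's experimentally determined values.

```python
def fuse_and_decide(s_texture, s_shape, weight, threshold):
    """Weighted score-level fusion S = W*S_t + (1-W)*S_g, then threshold.

    s_texture: texture matching score S_t; s_shape: shape matching
    score S_g; weight: fusion weight W chosen for lowest equal error
    rate on a verification set. Returns (fused_score, is_match).
    """
    fused = weight * s_texture + (1.0 - weight) * s_shape
    # Per the claim, a fused distance below the threshold means "match".
    return fused, fused < threshold

score, is_match = fuse_and_decide(0.10, 0.20, weight=0.7, threshold=0.25)
print(round(score, 3), is_match)  # 0.13 True
```

In practice the weight would be found by sweeping candidate values on the 10% verification set and keeping the one with the lowest equal error rate.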
AU2019368520A 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor Active AU2019368520B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201811235227.9 2018-10-23
CN201811235227.9A CN109543535B (en) 2018-10-23 2018-10-23 Three-dimensional finger vein feature extraction method and matching method thereof
PCT/CN2019/113883 WO2020083407A1 (en) 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor

Publications (2)

Publication Number Publication Date
AU2019368520A1 true AU2019368520A1 (en) 2021-05-06
AU2019368520B2 AU2019368520B2 (en) 2022-10-06

Family

ID=65844535

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2019368520A Active AU2019368520B2 (en) 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor

Country Status (3)

Country Link
CN (1) CN109543535B (en)
AU (1) AU2019368520B2 (en)
WO (1) WO2020083407A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543535B (en) * 2018-10-23 2021-12-21 华南理工大学 Three-dimensional finger vein feature extraction method and matching method thereof
CN110363250A (en) * 2019-07-23 2019-10-22 北京隆普智能科技有限公司 A kind of method and its system of 3-D image intelligent Matching
CN110378425B (en) * 2019-07-23 2021-10-22 武汉珞思雅设科技有限公司 Intelligent image comparison method and system
CN110827342B (en) * 2019-10-21 2023-06-02 中国科学院自动化研究所 Three-dimensional human body model reconstruction method, storage device and control device
CN110909778B (en) * 2019-11-12 2023-07-21 北京航空航天大学 Image semantic feature matching method based on geometric consistency
CN111009007B (en) * 2019-11-20 2023-07-14 广州光达创新科技有限公司 Finger multi-feature comprehensive three-dimensional reconstruction method
CN111612831A (en) * 2020-05-22 2020-09-01 创新奇智(北京)科技有限公司 Depth estimation method and device, electronic equipment and storage medium
CN111931758B (en) * 2020-10-19 2021-01-05 北京圣点云信息技术有限公司 Face recognition method and device combining facial veins
CN112101332B (en) * 2020-11-23 2021-02-19 北京圣点云信息技术有限公司 Feature extraction and comparison method and device based on 3D finger veins
CN112560710B (en) * 2020-12-18 2024-03-01 北京曙光易通技术有限公司 Method for constructing finger vein recognition system and finger vein recognition system
CN113012271B (en) * 2021-03-23 2022-05-24 华南理工大学 Finger three-dimensional model texture mapping method based on UV (ultraviolet) mapping
CN112990160B (en) * 2021-05-17 2021-11-09 北京圣点云信息技术有限公司 Facial vein identification method and identification device based on photoacoustic imaging technology
CN113689344B (en) * 2021-06-30 2022-05-27 中国矿业大学 Low-exposure image enhancement method based on feature decoupling learning
CN113780095B (en) * 2021-08-17 2023-12-26 中移(杭州)信息技术有限公司 Training data expansion method, terminal equipment and medium of face recognition model
CN113673477A (en) * 2021-09-02 2021-11-19 青岛奥美克生物信息科技有限公司 Palm vein non-contact three-dimensional modeling method and device and authentication method
CN114821682B (en) * 2022-06-30 2022-09-23 广州脉泽科技有限公司 Multi-sample mixed palm vein identification method based on deep learning algorithm

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN104851126B (en) * 2015-04-30 2017-10-20 中国科学院深圳先进技术研究院 Threedimensional model dividing method and device based on generalized cylinder
US10089452B2 (en) * 2016-09-02 2018-10-02 International Business Machines Corporation Three-dimensional fingerprint scanner
CN106919941B (en) * 2017-04-26 2018-10-09 华南理工大学 A kind of three-dimensional finger vein identification method and system
CN108009520B (en) * 2017-12-21 2020-09-01 西安格威西联科技有限公司 Finger vein identification method and system based on convolution variational self-encoder network
CN109543535B (en) * 2018-10-23 2021-12-21 华南理工大学 Three-dimensional finger vein feature extraction method and matching method thereof

Also Published As

Publication number Publication date
CN109543535B (en) 2021-12-21
CN109543535A (en) 2019-03-29
WO2020083407A1 (en) 2020-04-30
AU2019368520B2 (en) 2022-10-06

Similar Documents

Publication Publication Date Title
AU2019368520A1 (en) Three-dimensional finger vein feature extraction method and matching method therefor
CN102682302B (en) Human body posture identification method based on multi-characteristic fusion of key frame
CN104867126B (en) Based on point to constraint and the diameter radar image method for registering for changing region of network of triangle
CN103822616B (en) A kind of figure segmentation retrains with topographic relief the Remote Sensing Images Matching Method combined
CN103310196B (en) The finger vein identification method of area-of-interest and direction element
CN103886589B (en) Object-oriented automated high-precision edge extracting method
CN109949375A (en) A kind of mobile robot method for tracking target based on depth map area-of-interest
CN102043961B (en) Vein feature extraction method and method for carrying out identity authentication by utilizing double finger veins and finger-shape features
CN110807781B (en) Point cloud simplifying method for retaining details and boundary characteristics
CN107093205A (en) A kind of three dimensions building window detection method for reconstructing based on unmanned plane image
CN106529591A (en) Improved MSER image matching algorithm
CN111145228A (en) Heterogeneous image registration method based on local contour point and shape feature fusion
CN110211129B (en) Low-coverage point cloud registration algorithm based on region segmentation
CN102222357A (en) Foot-shaped three-dimensional surface reconstruction method based on image segmentation and grid subdivision
CN103136525A (en) Hetero-type expanded goal high-accuracy positioning method with generalized Hough transposition
Maltoni et al. Fingerprint analysis and representation
CN107025449A (en) A kind of inclination image linear feature matching process of unchanged view angle regional area constraint
Yu et al. Improvement of face recognition algorithm based on neural network
CN105631899A (en) Ultrasonic image motion object tracking method based on gray-scale texture feature
CN112085675A (en) Depth image denoising method, foreground segmentation method and human motion monitoring method
CN112819869A (en) Three-dimensional point cloud registration method based on IHarris-TICP algorithm
CN109598261B (en) Three-dimensional face recognition method based on region segmentation
CN117274339A (en) Point cloud registration method based on improved ISS-3DSC characteristics combined with ICP
CN106886988A (en) A kind of linear goal detection method and system based on unmanned aerial vehicle remote sensing
CN104933723A (en) Tongue image segmentation method based on sparse representation

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)