Three-dimensional finger vein feature extraction method and matching method
Technical Field

The present invention relates to the technical field of vein recognition, and more specifically to a three-dimensional finger vein feature extraction method and a matching method based on it.
Background

Biometric recognition is a technology that performs identity authentication using one or more human physiological characteristics (such as fingerprints, faces, irises, and veins) or behavioral characteristics (such as gait and signatures). Among these, finger vein recognition has come to occupy an important position in the field of identity authentication owing to its unique advantages: it is a biometric technology that uses the pattern of the blood vessels beneath the skin of the finger for individual identity verification. With its high security and strong stability, finger vein recognition enjoys distinctive application advantages and prospects. However, current finger vein recognition systems perform recognition on two-dimensional vein images, and their recognition performance degrades greatly when the finger is posed improperly, in particular when the finger is rotated about its axis.

Research on finger vein recognition under different postures remains relatively scarce. The few existing studies include: expanding the captured two-dimensional finger vein image with an ellipse model so as to standardize it, and then cropping an effective region for matching; expanding the two-dimensional finger vein image with a circle model; or adopting a three-dimensional-model approach whose key step is still an ellipse model, which standardizes finger vein images captured in six different postures before matching. Whichever physical model is used, the large differences between vein images of the same finger captured in different postures are alleviated to some extent, but problems remain: on the one hand, the corresponding texture region shrinks, which is unfavorable for matching; on the other hand, image quality in the edge regions is generally poor because of imaging factors, which likewise degrades the recognition result. Another approach is three-dimensional imaging based on multi-view geometry, but during three-dimensional reconstruction it is difficult or even impossible to find matching feature points, so the depth of the entire vein texture can hardly be computed; moreover, the vein texture captured in this way covers only one side of the finger, so the problem of limited feature information persists.
Summary of the Invention

The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a three-dimensional finger vein feature extraction method and a matching method based on it. The method can obtain more vein texture features and achieve a better matching and recognition effect, and at the same time can effectively solve the problem of poor matching and recognition performance caused by changes in finger posture, thereby improving the accuracy and effectiveness of vein matching and recognition.

To achieve the above object, the present invention is implemented through the following technical solution. A three-dimensional finger vein feature extraction method is characterized by comprising the following steps:
Step 1: capture the finger veins from three equally spaced viewing angles with three cameras to obtain two-dimensional finger vein images;

Step 2: by calculating the parameters of the three cameras, map the two-dimensional finger vein images onto a three-dimensional model to construct the three-dimensional finger model;

Step 3: normalize the three-dimensional finger model to eliminate the influence of horizontal and vertical finger offsets;

Step 4: extract features from the normalized three-dimensional finger model:

(1) process the normalized three-dimensional finger model to generate a three-dimensional texture expansion map and a geometric distance feature map;

(2) use convolutional neural networks to extract features from the three-dimensional texture expansion map and the geometric distance feature map respectively, obtaining the vein texture feature and the central-axis geometric distance feature, while the networks are trained.
In Step 2, mapping the two-dimensional finger vein images onto a three-dimensional model by calculating the parameters of the three cameras means: the cross-section of the finger is approximated as an ellipse, the three-dimensional finger is divided into a number of equally spaced cross-sections, the contour of each cross-section is computed, and the finger is approximately modeled by multiple ellipses with different radii at different positions; all contours are then concatenated along the direction of the finger's central axis to obtain an approximate three-dimensional finger model.
In Step 3, normalizing the three-dimensional finger model to eliminate the influence of horizontal and vertical finger offsets means: the least squares method is used to regress the centers of the approximate ellipses on all cross-sections obtained during three-dimensional reconstruction onto a single central axis, and the coordinates are then normalized by translating the model so that the central axis passes through the origin and rotating it so that the axis direction coincides with the z-axis, where (x_m, y_m, z_m) denotes the midpoint of an ellipse and (S, W, G) denotes the direction of the central axis.

Through this normalization, the axial direction of the finger is aligned with the central axis of the three-dimensional model and the center point of the model is aligned with the origin, thereby eliminating the offsets caused by horizontal and vertical movements.
In Step 4, processing the normalized three-dimensional finger model to generate the three-dimensional texture expansion map and the geometric distance feature map means:

First, define the sector-shaped cylinder regions SC-Block(i), where the subscript i ranges from 1 to N. Rotationally cutting the three-dimensional cylinder about its axis yields 360 sector-shaped cylinder regions. The central angle of the base of region i is set to ((i-1)·Δα, i·Δα], and the cylinder height Z ranges over [z_min, z_max], where z_min and z_max are the minimum and maximum heights; N is the width of the feature maps, N = 360/Δα, and Δα is the angular sampling interval.

Then, the three-dimensional point sets of the sector-shaped cylinder regions are mapped onto the three-dimensional texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM by the following functions:

I_F3DTM.col(i) = Γ_t(SC-Block(i))    (1)

I_F3DGM.col(i) = Γ_g(SC-Block(i))    (2)

where F3DTM and F3DGM denote the three-dimensional texture expansion map and the geometric distance feature map respectively, .col(i) denotes the i-th column of the feature map, and the functions Γ_t and Γ_g each divide the sector-shaped cylinder region SC-Block(i) into M blocks along the Z axis at a fixed interval. Each pixel of I_F3DTM is obtained by averaging the pixel values within the corresponding block, while each pixel of I_F3DGM is obtained by averaging the straight-line distances from the points within the corresponding block to the central axis.
In Step 4, using convolutional neural networks to extract features from the three-dimensional texture expansion map and the geometric distance feature map, obtaining the vein texture feature and the central-axis geometric distance feature while training the networks, means: the network structure consists of four successively stacked convolution blocks, each containing a 3×3 and a 1×1 convolution layer; this design effectively reduces the number of parameters while preserving recognition performance. The three-dimensional texture expansion map and the geometric distance feature map are each passed in turn through this network structure and a fully connected layer with a 256-dimensional output, yielding a 256-dimensional vein texture feature and a 256-dimensional central-axis geometric distance feature. Finally, the loss is computed through a SoftMax layer and the network is trained.
The proposed three-dimensional finger vein matching method is described as follows:

The vein texture feature score and the central-axis geometric distance feature score between the template sample and the sample to be matched are computed and fused with weights, and the fused matching score is compared against a threshold to complete three-dimensional finger vein matching and recognition.
Specifically, in the feature matching stage, Steps 1 to 4 are first performed in turn on the finger vein of the template sample and on that of the sample to be matched, yielding the vein texture and three-dimensional finger shape feature (the central-axis geometric distance feature) of the template sample and those of the sample to be matched. The cosine distance D_1 between the vein texture features of the template sample and of the sample to be matched, and the cosine distance D_2 between their three-dimensional finger shape features, are then computed as:

D_1 = 1 - (F_v1 · F_v2) / (||F_v1|| ||F_v2||)

D_2 = 1 - (F_d1 · F_d2) / (||F_d1|| ||F_d2||)

where F_v1 and F_v2 are the vein feature vectors of the template sample and of the sample to be matched, and F_d1 and F_d2 are their finger shape feature vectors.
Thereafter, score-level weighted fusion is applied to the cosine distances (matching scores) of the vein texture feature and the finger shape feature to obtain the total matching score S. The fusion weight is determined by randomly drawing 10% of the data as a validation set, sweeping the weight value over that set, and taking the weight that yields the lowest equal error rate after score fusion as the optimal weight; this optimal weight is then used to fuse the matching scores into the final matching result:

S = w·S_t + (1 - w)·S_g

where S is the final matching score, S_t = D_1 is the texture matching score, S_g = D_2 is the shape matching score, and w is the fusion weight.

Finally, a threshold is determined through experiments; when the final matching score S is smaller than the threshold, the pair is judged a match, and otherwise a non-match.
Compared with the prior art, the present invention has the following advantages and beneficial effects: the three-dimensional finger vein feature extraction method and matching method of the present invention can obtain more vein texture features and achieve a better matching and recognition effect, and at the same time effectively solve the problem of poor matching and recognition performance caused by changes in finger posture, thereby improving the accuracy and effectiveness of vein matching and recognition.
Brief Description of the Drawings

Fig. 1 is a schematic flowchart of the three-dimensional finger vein feature extraction method and matching method of the present invention;

Fig. 2 is a schematic diagram of constructing the three-dimensional finger model with the ellipse model of the present invention;

Fig. 3 is a schematic diagram of the 360 sector-shaped cylinder regions obtained by rotationally cutting about the axis of the three-dimensional cylinder in the present invention;

Fig. 4 is a three-dimensional texture expansion map of the present invention;

Fig. 5 is a geometric distance feature map of the present invention;

Fig. 6 is a schematic diagram of the three-dimensional texture expansion map and the geometric distance feature map passing through the convolutional neural network structure and the fully connected layer with a 256-dimensional output in the present invention.
Detailed Description

The present invention is described in further detail below with reference to the drawings and specific embodiments.

Embodiment

As shown in Figs. 1 to 6, a three-dimensional finger vein feature extraction method of the present invention comprises the following steps:

Step 1: capture the finger veins from three equally spaced viewing angles with three cameras to obtain two-dimensional finger vein images;

Step 2: by calculating the parameters of the three cameras, map the two-dimensional finger vein images onto a three-dimensional model to construct the three-dimensional finger model;

Step 3: normalize the three-dimensional finger model to eliminate the influence of horizontal and vertical finger offsets;

Step 4: extract features from the normalized three-dimensional finger model:

(1) process the normalized three-dimensional finger model to generate a three-dimensional texture expansion map and a geometric distance feature map;

(2) use convolutional neural networks to extract features from the three-dimensional texture expansion map and the geometric distance feature map respectively, obtaining the vein texture feature and the central-axis geometric distance feature, while the networks are trained.
In Step 2, mapping the two-dimensional finger vein images onto a three-dimensional model by calculating the parameters of the three cameras means: the cross-section of the finger is approximated as an ellipse, the three-dimensional finger is divided into a number of equally spaced cross-sections, the contour of each cross-section is computed, and the finger is approximately modeled by multiple ellipses with different radii at different positions; all contours are then concatenated along the direction of the finger's central axis to obtain an approximate three-dimensional finger model.

The contour of each cross-section is computed as follows:
1) Establish the xOy coordinate system (2D-CS) from the projection centers C_1, C_2, and C_3 of the three cameras, as shown in Fig. 2.
2) Determine the equations of the ellipse and of the lines.

Let the equation of the ellipse be:

a·x² + b·y² + 2c·x·y + d·x + e·y + f = 0
Denote the projection center of each camera by C_i(x, y). The equations of the lines C_iU_i (denoted L_ui) and C_iB_i (denoted L_bi) can then be obtained; only the case in which the slopes of the lines exist is discussed here:

L_ui: y = k_ui·x + b_ui

L_bi: y = k_bi·x + b_bi

where i = 1, 2, 3, k_ui and b_ui denote the slope and intercept of line L_ui, and k_bi and b_bi denote the slope and intercept of line L_bi.
3) Determine the constraints

Draw parallels to these constraint lines, as shown in Fig. 2, such that they are tangent to the ellipse. Let the equations of these parallels be:

L'_ui: y = k_ui·x + b_ui + ξ_ui

L'_bi: y = k_bi·x + b_bi + ξ_bi

Substituting each parallel into the ellipse equation yields a quadratic in x; from the condition that L'_ui and L'_bi are tangent to the ellipse, the following constraint equations are obtained:

B_ui² - 4·A_ui·C_ui = 0

B_bi² - 4·A_bi·C_bi = 0
where:

A_ui = a + b·k_ui² + 2c·k_ui

A_bi = a + b·k_bi² + 2c·k_bi

B_ui = (2·k_ui·b + 2c)·(b_ui + ξ_ui) + e·k_ui + d

B_bi = (2·k_bi·b + 2c)·(b_bi + ξ_bi) + e·k_bi + d

C_ui = b·(b_ui + ξ_ui)² + e·(b_ui + ξ_ui) + f

C_bi = b·(b_bi + ξ_bi)² + e·(b_bi + ξ_bi) + f
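The tangency constraints above can be checked numerically. The sketch below substitutes a line y = k·x + m into the conic a·x² + b·y² + 2c·x·y + d·x + e·y + f = 0 and evaluates the discriminant B² - 4·A·C; a unit circle (a special case of the conic, with hypothetical coefficient values chosen for easy verification) is used as the test ellipse.

```python
def conic_line_discriminant(a, b, c, d, e, f, k, m):
    """Substitute y = k*x + m into a*x^2 + b*y^2 + 2*c*x*y + d*x + e*y + f = 0
    and return the coefficients (A, B, C) of the resulting quadratic in x,
    together with its discriminant B^2 - 4*A*C (zero iff the line is tangent)."""
    A = a + b * k**2 + 2 * c * k
    B = (2 * k * b + 2 * c) * m + e * k + d
    C = b * m**2 + e * m + f
    return A, B, C, B**2 - 4 * A * C

# Unit circle x^2 + y^2 - 1 = 0: (a, b, c, d, e, f) = (1, 1, 0, 0, 0, -1).
# The horizontal line y = 1 touches it at (0, 1), so the discriminant vanishes.
_, _, _, disc_tangent = conic_line_discriminant(1, 1, 0, 0, 0, -1, k=0.0, m=1.0)
# The secant y = 0 crosses the circle, giving a positive discriminant.
_, _, _, disc_secant = conic_line_discriminant(1, 1, 0, 0, 0, -1, k=0.0, m=0.0)
print(disc_tangent, disc_secant)  # 0.0 4.0
```

A tangent line yields a zero discriminant; a secant yields a positive one, which is exactly the condition the constraint equations enforce on ξ_ui and ξ_bi.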
4) Objective function

As stated above, the ellipse must lie very close to the constraint lines, so the objective is to minimize the sum of the distances from the ellipse to all of the lines, that is, to minimize the magnitudes of the offsets ξ_ui and ξ_bi (i = 1, 2, 3) subject to the tangency constraints in 3).
5) Solution algorithm

① Solving the ellipse of a single cross-section

The coordinates of the edge points on each image must first be transformed into 2D-CS. In this transformation, θ_i (i = 1, 2, 3) denotes the angle made with the positive direction of the x-axis, and y_m denotes the y value of the optical center of the camera, which is determined by the camera intrinsic parameters.
For each ellipse, the objective function is solved by gradient descent under the constraints given in 3). The main issue is how to set the initial iterate, since a proper initial point is extremely important both for speeding up the optimization and for finding the global optimum. Through extensive experiments, the initial iterate is set as follows:
An ellipse has five independent variables: the horizontal and vertical coordinates of its center, the length of its semi-major axis, its eccentricity, and its rotation angle. The problem can therefore be transformed into computing the approximate inscribed ellipse of a hexagon. By Brianchon's theorem, a hexagon ABCDEF has an inscribed ellipse if and only if its main diagonals AD, BE, and CF meet at a single point:

[(A×D), (B×E), (C×F)] = 0

In the computation of this model, these diagonals generally intersect pairwise. The initial center point of the ellipse, C_0, is set to the centroid of the triangle formed by these intersection points, and the initial major-axis length is set to the minimum distance from the initial center point to the six vertices of the hexagon.

In addition, the initial eccentricity and the initial rotation angle are set to fixed values, e_0 = 1.4 and α = 0 respectively; these two constants were determined experimentally.
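The initialization above can be sketched as follows, using a hypothetical hexagon (all vertex values are assumptions for illustration): the pairwise intersections of the main diagonals AD, BE, and CF are computed in homogeneous coordinates, their centroid serves as the initial center C_0, and the minimum center-to-vertex distance serves as the initial major-axis length.

```python
import numpy as np
from itertools import combinations

def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 via homogeneous cross products."""
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])
    l2 = np.cross([*p3, 1.0], [*p4, 1.0])
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Hypothetical hexagon vertices A..F (assumed values for illustration only).
hexagon = np.array([[2.0, 0.0], [1.0, 1.6], [-1.0, 1.6],
                    [-2.0, 0.0], [-1.0, -1.6], [1.0, -1.6]])
A, B, C, D, E, F = hexagon
diagonals = [(A, D), (B, E), (C, F)]

# Pairwise intersections of the main diagonals AD, BE, CF.
pts = [line_intersection(*d1, *d2) for d1, d2 in combinations(diagonals, 2)]
c0 = np.mean(pts, axis=0)                        # initial ellipse center C_0
r0 = np.linalg.norm(hexagon - c0, axis=1).min()  # initial major-axis length
print(c0, r0)
```

For this symmetric hexagon all three diagonals pass through the origin, so C_0 is the origin and r0 equals the distance to the nearest vertex.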
② Reconstructing the entire three-dimensional finger model

The more cross-sections are computed, the more accurate the resulting three-dimensional finger model becomes; but computing more ellipses with the ellipse approximation consumes more time, and in practical applications one does not want to spend too long on three-dimensional finger reconstruction. Observation reveals a useful detail: the finger surface varies quite smoothly along the axial direction, so a sparse set of edge points can first be selected along the axis to reconstruct some of the ellipses, after which interpolation is used to expand the number of approximate ellipses. This, however, raises another problem: if, within the selected edge point set, poor image quality or heavy noise near parts of the edge causes large errors in the detected edge positions, the reconstructed finger model will exhibit sizeable defects in places. To reduce this effect, and to balance reconstruction accuracy against time cost, a more robust algorithm for constructing the three-dimensional finger model is proposed:

Suppose that in each rectified image the region whose abscissa lies in the range 91 to 490 is taken as the effective region. Along the horizontal axis, the effective region is divided equally into N sub-regions; in each sub-region, a set of edge points is selected and an ellipse is reconstructed from it. After the ellipses of all sub-regions have been obtained, an interpolation algorithm is used to expand the ellipse data. Finally, the ellipses in the two-dimensional plane are converted into three-dimensional space.

All points of one ellipse are assigned the same z-coordinate, so the equation of the two-dimensional ellipse need not be changed; K is the rectified camera intrinsic matrix.
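A minimal sketch of the interpolation-based densification, under the simplifying assumption of axis-aligned ellipses parameterized by center and semi-axes (the actual method fits general conics, and all numeric values here are hypothetical): sparse per-slice parameters are linearly interpolated along the axis, each interpolated ellipse is sampled, and every slice is assigned a single z value.

```python
import numpy as np

# Sparse per-slice ellipse parameters (center x, center y, semi-axes rx, ry),
# assumed values standing in for ellipses fitted from sparse edge points.
z_sparse = np.array([0.0, 10.0, 20.0])
params_sparse = np.array([[0.0, 0.0, 8.0, 7.0],
                          [0.2, 0.1, 8.5, 7.2],
                          [0.1, 0.3, 8.2, 7.0]])

# Densify along the axis by linearly interpolating each parameter.
z_dense = np.linspace(0.0, 20.0, 21)
params_dense = np.stack([np.interp(z_dense, z_sparse, params_sparse[:, j])
                         for j in range(4)], axis=1)

# Sample each interpolated ellipse, give its slice a constant z value,
# and concatenate the slices into an approximate 3D finger point cloud.
t = np.linspace(0.0, 2 * np.pi, 90, endpoint=False)
cloud = np.concatenate([
    np.stack([cx + rx * np.cos(t), cy + ry * np.sin(t),
              np.full_like(t, z)], axis=1)
    for z, (cx, cy, rx, ry) in zip(z_dense, params_dense)])
print(cloud.shape)  # (1890, 3)
```

Stacking the interpolated slices along z realizes the "concatenate all contours along the central axis" step with far fewer explicitly fitted ellipses.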
In Step 3, normalizing the three-dimensional finger model to eliminate the influence of horizontal and vertical finger offsets means: the least squares method is used to regress the centers of the approximate ellipses on all cross-sections obtained during three-dimensional reconstruction onto a single central axis, and the coordinates are then normalized by translating the model so that the central axis passes through the origin and rotating it so that the axis direction coincides with the z-axis, where (x_m, y_m, z_m) denotes the midpoint of an ellipse and (S, W, G) denotes the direction of the central axis. Through this normalization, the axial direction of the finger is aligned with the central axis of the three-dimensional model and the center point of the model is aligned with the origin, thereby eliminating the offsets caused by horizontal and vertical movements.
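A minimal sketch of this normalization, with hypothetical ellipse-center values: the axis direction (S, W, G) is obtained from a least-squares 3D line fit (the leading singular vector of the centered centers), and the model is translated to the centroid and rotated so that the fitted direction coincides with the z-axis.

```python
import numpy as np

def normalize_model(points, centers):
    """Translate so the fitted axis passes through the origin and rotate the
    least-squares axis direction (S, W, G) onto the z-axis."""
    centroid = centers.mean(axis=0)
    # Least-squares 3D line fit: the direction is the leading right-singular
    # vector of the centered ellipse-center coordinates.
    _, _, vt = np.linalg.svd(centers - centroid)
    axis = vt[0] / np.linalg.norm(vt[0])  # (S, W, G)
    if axis[2] < 0:
        axis = -axis
    # Rotation taking `axis` onto e_z (Rodrigues' formula).
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(axis, z)
    s, c = np.linalg.norm(v), axis @ z
    if s < 1e-12:
        R = np.eye(3)
    else:
        K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + K + K @ K * ((1 - c) / s**2)
    return (points - centroid) @ R.T

# Hypothetical ellipse centers lying along a tilted finger axis.
t = np.linspace(-1.0, 1.0, 11)
centers = np.stack([0.3 * t + 0.5, 0.1 * t - 0.2, 5.0 * t], axis=1)
out = normalize_model(centers, centers)
print(np.abs(out[:, :2]).max())  # ~0: the centers now lie on the z-axis
```

After normalization the x and y components of every center vanish (up to floating-point error), which is exactly the "axis coincides with the z-axis, center at the origin" condition stated above.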
In Step 4, processing the normalized three-dimensional finger model to generate the three-dimensional texture expansion map and the geometric distance feature map means:

First, define the sector-shaped cylinder regions SC-Block(i), where the subscript i ranges from 1 to N. Rotationally cutting the three-dimensional cylinder about its axis yields 360 sector-shaped cylinder regions, as shown in Fig. 3. The central angle of the base of region i is set to ((i-1)·Δα, i·Δα], and the cylinder height Z ranges over [z_min, z_max], where z_min and z_max are the minimum and maximum heights; N is the width of the feature maps, N = 360/Δα, and Δα is the angular sampling interval.

Then, the three-dimensional point sets of the sector-shaped cylinder regions are mapped onto the three-dimensional texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM by the following functions:

I_F3DTM.col(i) = Γ_t(SC-Block(i))    (1)

I_F3DGM.col(i) = Γ_g(SC-Block(i))    (2)

where F3DTM and F3DGM denote the three-dimensional texture expansion map and the geometric distance feature map respectively, .col(i) denotes the i-th column of the feature map, and the functions Γ_t and Γ_g each divide the sector-shaped cylinder region SC-Block(i) into M blocks along the Z axis at a fixed interval. Each pixel of I_F3DTM is obtained by averaging the pixel values within the corresponding block, while each pixel of I_F3DGM is obtained by averaging the straight-line distances from the points within the corresponding block to the central axis. In this embodiment, Δα = 1 and M = 360. Figs. 4 and 5 show examples of the computed feature maps.
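The generation of the two maps can be sketched as follows (a randomly generated point cloud with hypothetical grey values stands in for the normalized finger model, and small bin counts replace N = M = 360 for readability): points are binned by azimuth into the columns and by height into the rows, and each bin averages grey values for I_F3DTM and point-to-axis distances for I_F3DGM.

```python
import numpy as np

def unroll(points, gray, n_cols=36, n_rows=36):
    """Map a normalized 3D point cloud (axis = z-axis) to the texture map
    I_F3DTM (mean grey per sector block) and the geometric distance map
    I_F3DGM (mean point-to-axis distance per sector block)."""
    ang = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    dist = np.hypot(points[:, 0], points[:, 1])  # distance to the central axis
    z = points[:, 2]
    col = np.minimum((ang / (360.0 / n_cols)).astype(int), n_cols - 1)
    row_edges = np.linspace(z.min(), z.max(), n_rows + 1)
    row = np.minimum(np.searchsorted(row_edges, z, side="right") - 1, n_rows - 1)
    tex = np.zeros((n_rows, n_cols))
    geo = np.zeros((n_rows, n_cols))
    for r in range(n_rows):
        for c in range(n_cols):
            m = (row == r) & (col == c)
            if m.any():
                tex[r, c] = gray[m].mean()   # Gamma_t: mean grey value
                geo[r, c] = dist[m].mean()   # Gamma_g: mean axis distance
    return tex, geo

rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3))      # hypothetical normalized point cloud
gv = rng.uniform(0, 255, size=5000)   # hypothetical grey values
tex, geo = unroll(pts, gv)
print(tex.shape, geo.shape)  # (36, 36) (36, 36)
```

With n_cols = 360 (Δα = 1) and n_rows = 360 (M = 360), this produces the 360×360 maps used in the embodiment.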
In Step 4, using convolutional neural networks to extract features from the three-dimensional texture expansion map and the geometric distance feature map, obtaining the vein texture feature and the central-axis geometric distance feature while training the networks, means: as shown in Fig. 6, the network structure consists of four successively stacked convolution blocks, each containing a 3×3 and a 1×1 convolution layer; this design effectively reduces the number of parameters while preserving recognition performance. The three-dimensional texture expansion map and the geometric distance feature map are each passed in turn through this network structure and a fully connected layer with a 256-dimensional output, yielding the vein texture feature and the central-axis geometric distance feature. Finally, the loss is computed through a SoftMax layer and the network is trained.
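A minimal PyTorch sketch of one such branch, under assumed channel widths, activations, and pooling (the description fixes only the pattern of four blocks of one 3×3 and one 1×1 convolution each and the 256-dimensional fully connected output; during training, the 256-dimensional feature would feed a classification head optimized with SoftMax cross-entropy loss):

```python
import torch
import torch.nn as nn

class VeinBranch(nn.Module):
    """One feature branch: four conv blocks (3x3 then 1x1) and a 256-d FC output.
    Channel widths, ReLU, and pooling are assumptions, not specified by the text."""
    def __init__(self, widths=(16, 32, 64, 128), n_classes=100):
        super().__init__()
        blocks, c_in = [], 1
        for c_out in widths:
            blocks += [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(c_out, c_out, 1), nn.ReLU(),
                       nn.MaxPool2d(2)]
            c_in = c_out
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(widths[-1], 256)          # 256-d feature vector
        self.classifier = nn.Linear(256, n_classes)   # trained with SoftMax loss

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.fc(x)

net = VeinBranch()
feat = net(torch.randn(1, 1, 360, 360))  # one unrolled map (delta_alpha=1, M=360)
print(feat.shape)  # torch.Size([1, 256])
```

One branch processes the texture expansion map and a second, identically structured branch processes the geometric distance map; at inference time only the 256-dimensional features are kept for matching.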
The three-dimensional finger vein feature matching method of the present invention is as follows: the vein texture feature score and the central-axis geometric distance feature score between the template sample and the sample to be matched are computed and fused with weights, and the fused matching score is compared against a threshold to complete three-dimensional finger vein matching and recognition.
Specifically, in the feature matching stage, Steps 1 to 4 are first performed in turn on the finger vein of the template sample and on that of the sample to be matched, yielding the vein texture and three-dimensional finger shape features of the template sample and those of the sample to be matched. The cosine distance D_1 between the vein texture features of the template sample and of the sample to be matched, and the cosine distance D_2 between their three-dimensional finger shape features, are then computed as:

D_1 = 1 - (F_v1 · F_v2) / (||F_v1|| ||F_v2||)

D_2 = 1 - (F_d1 · F_d2) / (||F_d1|| ||F_d2||)

where F_v1 and F_v2 are the vein feature vectors of the template sample and of the sample to be matched, and F_d1 and F_d2 are their finger shape feature vectors.
Thereafter, score-level weighted fusion is applied to the cosine distances (matching scores) of the vein texture feature and the finger shape feature to obtain the total matching score S. The fusion weight is determined by randomly drawing 10% of the data as a validation set, sweeping the weight value over that set, and taking the weight that yields the lowest equal error rate after score fusion as the optimal weight; this optimal weight is then used to fuse the matching scores into the final matching result:

S = w·S_t + (1 - w)·S_g

where S is the final matching score, S_t = D_1 is the texture matching score, S_g = D_2 is the shape matching score, and w is the fusion weight.

Finally, a threshold is determined through experiments; when the final matching score S is smaller than the threshold, the pair is judged a match, and otherwise a non-match.
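The matching stage can be sketched as follows, with hypothetical 256-dimensional feature vectors and assumed values for the fusion weight w and the threshold (in practice both are chosen as described above, via the validation-set sweep and experiments):

```python
import numpy as np

def cosine_distance(f1, f2):
    """D = 1 - cos(f1, f2); small distances indicate a match."""
    return 1.0 - (f1 @ f2) / (np.linalg.norm(f1) * np.linalg.norm(f2))

def match(fv1, fv2, fd1, fd2, w, threshold):
    """Fuse texture and shape scores, S = w*S_t + (1-w)*S_g; accept if S < threshold."""
    s_t = cosine_distance(fv1, fv2)  # texture matching score S_t = D_1
    s_g = cosine_distance(fd1, fd2)  # shape matching score   S_g = D_2
    s = w * s_t + (1.0 - w) * s_g
    return s, s < threshold

rng = np.random.default_rng(1)
template_v = rng.normal(size=256)   # hypothetical template vein feature
template_d = rng.normal(size=256)   # hypothetical template shape feature
# A near-identical probe (template plus small noise) should be accepted.
probe_v = template_v + 0.01 * rng.normal(size=256)
probe_d = template_d + 0.01 * rng.normal(size=256)
score, ok = match(template_v, probe_v, template_d, probe_d, w=0.6, threshold=0.1)
print(round(score, 4), ok)
```

A probe close to the template yields a fused score near zero and is accepted, while an unrelated probe yields a fused score near one and is rejected.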
The above embodiment is a preferred implementation of the present invention, but the embodiments of the present invention are not limited by it: any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and falls within the protection scope of the present invention.