WO2020083407A1 - Three-dimensional finger vein feature extraction method and matching method therefor - Google Patents

Three-dimensional finger vein feature extraction method and matching method therefor

Info

Publication number
WO2020083407A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
finger
feature
vein
matching
Prior art date
Application number
PCT/CN2019/113883
Other languages
French (fr)
Chinese (zh)
Inventor
康文雄
龚启琛
Original Assignee
华南理工大学
Priority date
Filing date
Publication date
Application filed by 华南理工大学 filed Critical 华南理工大学
Priority to AU2019368520A priority Critical patent/AU2019368520B2/en
Publication of WO2020083407A1 publication Critical patent/WO2020083407A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification

Definitions

  • the present invention relates to the technical field of vein recognition, and more specifically, to a three-dimensional finger vein feature extraction method and matching method.
  • Biometric recognition technology is a technology that uses one or more human physiological characteristics (such as fingerprints, faces, irises, veins, etc.) or behavioral characteristics (such as gait, signature, etc.) for identity authentication.
  • Finger vein recognition technology has begun to occupy an important position in the field of identity authentication owing to its unique advantages. It is a biometric recognition technology that uses the pattern of the blood vessels beneath the skin of the finger for individual identity verification.
  • Among biometric recognition techniques, finger vein recognition has unique application advantages and prospects owing to its high security and strong stability.
  • Current finger vein recognition systems perform recognition on two-dimensional vein images; when the finger is placed improperly, and especially when it is rotated about its axis, recognition performance drops sharply.
  • Research on finger vein recognition under varying finger postures is relatively scarce. The few existing studies include: expanding the captured two-dimensional finger vein images with an ellipse model to standardize them and then cropping an effective region for matching; unrolling the two-dimensional finger vein images with a circular model; or adopting a three-dimensional model, whose key step is still to standardize finger vein images captured in six different poses with an ellipse model before matching.
  • Whichever physical model is used, the large differences between vein images of the same finger captured in different poses are alleviated to some extent, but problems remain: on the one hand, the overlapping texture region becomes smaller, which is unfavorable for matching; on the other hand, the vein image quality in the edge regions is generally degraded by imaging factors, which also affects the recognition result.
  • Another approach is three-dimensional imaging based on multi-view geometry, but during 3D reconstruction it is difficult, and sometimes impossible, to find matching feature points, so the depth of the complete vein texture is hard to compute; moreover, the vein texture collected by this method covers only one side of the finger, so the feature information remains limited.
  • The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a three-dimensional finger vein feature extraction method and a matching method. The method can obtain more vein texture features and achieve a better matching and recognition effect, and it can also effectively solve the problem of poor matching performance caused by changes in finger posture, thereby improving the accuracy and effectiveness of vein matching and recognition.
  • a three-dimensional finger vein feature extraction method characterized in that it includes the following steps:
  • In the first step, three cameras arranged at equal angular intervals capture the finger vein from three viewing angles to obtain two-dimensional finger vein images;
  • In the second step, by calculating the parameters of the three cameras, the two-dimensional finger vein images are mapped onto a three-dimensional model to construct the three-dimensional finger model;
  • In the third step, the three-dimensional finger model is normalized to eliminate the effect of horizontal and vertical finger offsets;
  • In the fourth step, features are extracted from the normalized three-dimensional finger model: (1) the normalized model is processed to generate a three-dimensional texture expansion map and a geometric distance feature map; (2) a convolutional neural network extracts features from the two maps to obtain the vein texture feature and the central-axis geometric distance feature, and the network is trained at the same time.
  • In the second step, mapping the two-dimensional finger vein images onto the three-dimensional model to construct the three-dimensional finger model means: the finger cross-section is approximated as an ellipse, the three-dimensional finger is divided at equal intervals into several cross-sections, the contour of each cross-section is calculated, and the finger is approximately modeled by multiple ellipses of different radii and positions; all contours are then concatenated along the direction of the finger's central axis to obtain an approximate three-dimensional finger model.
  • In the third step, normalizing the three-dimensional finger model to eliminate horizontal and vertical finger offsets means: the least-squares method is used to regress the centers of the approximate ellipses on the cross-sections obtained in the three-dimensional reconstruction onto a central axis, and the coordinates are then normalized using equation (1);
  • (x_m, y_m, z_m) represents the center of the ellipse and (S, W, G) represents the direction of the central axis.
  • Through this normalization, the finger's axial direction is aligned with the central axis of the three-dimensional model and the model's center point coincides with the origin, thereby eliminating offsets caused by horizontal and vertical movements.
  • In the fourth step, processing the normalized three-dimensional finger model to generate the three-dimensional texture expansion map and the geometric distance feature map means:
  • First, the sector-shaped cylinder region is defined as SC-Block(i), where the subscript i runs from 1 to N; the three-dimensional cylinder is rotationally cut along its axis to obtain 360 sector-shaped cylinder regions;
  • The central angle of the base of each sector-shaped cylinder region is set to the range ((i-1)·Δα, i·Δα], and the cylinder height Z is restricted to [z_min, z_max], where z_min and z_max denote the minimum and maximum heights; N is the width of the feature maps, N = 360/Δα, and Δα is the angular sampling interval;
  • Then the three-dimensional point set of each sector-shaped cylinder region is mapped onto the three-dimensional texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM by the following functions:
  • I_F3DTM.col(i) = Γ_t(SC-Block(i))    (1)
  • I_F3DGM.col(i) = Γ_g(SC-Block(i))    (2)
  • where F3DTM and F3DGM denote the three-dimensional texture expansion map and the geometric distance feature map respectively,
  • .col(i) denotes the i-th column of the feature map,
  • and the functions Γ_t and Γ_g divide each sector-shaped cylinder region SC-Block(i) into M blocks along the Z axis at fixed intervals; each pixel of I_F3DTM is obtained as the average pixel value within the corresponding block, and each pixel of I_F3DGM is obtained as the average straight-line distance from the points in the corresponding block to the central axis.
  • In the fourth step, using a convolutional neural network to extract features from the three-dimensional texture expansion map and the geometric distance feature map, obtaining the vein texture feature and the central-axis geometric distance feature, and training the network at the same time means: the network is built by stacking four convolutional blocks, each containing 3×3 and 1×1 convolutional layers, a design that effectively reduces the number of parameters while preserving recognition performance; the three-dimensional texture expansion map and the geometric distance feature map are each passed through this network and a fully connected layer with a 256-dimensional output, yielding a 256-dimensional vein texture feature and a 256-dimensional central-axis geometric distance feature; finally, a SoftMax layer computes the loss and the network is trained.
  • By calculating the vein texture feature and central-axis geometric distance feature scores of the template sample and the sample to be matched, performing weighted fusion, and judging the fused matching score against a threshold, the matching and recognition of the three-dimensional finger vein is completed.
  • Specifically, in the feature matching stage, steps one to four are first applied in turn to the finger vein of the template sample and to the finger vein of the sample to be matched, yielding the vein texture feature and the three-dimensional finger shape feature (central-axis geometric distance feature) of the template sample and those of the sample to be matched; the cosine distance D_1 between the vein texture features of the template sample and the sample to be matched, and the cosine distance D_2 between their three-dimensional finger shape features, are then calculated.
  • Their cosine distance formulas are as follows:
  • where F_v1 and F_v2 are the vein texture feature vectors of the template sample and the sample to be matched, and F_d1 and F_d2 are their finger shape feature vectors, respectively.
  • Afterwards, the cosine distances (matching scores) of the vein texture feature and the finger shape feature are fused by score-level weighting to obtain the total cosine distance D.
  • The fusion weight is determined by randomly selecting 10% of the data as a validation set, traversing candidate weight values on the validation set, and taking the weight that yields the lowest equal error rate after score fusion as the optimal weight; this optimal weight is then used to fuse the matching scores and obtain the final matching result;
  • S = w·S_t + (1−w)·S_g, where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and w is the fusion weight.
  • Finally, a threshold is determined experimentally: when the total cosine distance D is less than the threshold, the samples are judged to match; otherwise they do not match.
  • the present invention has the following advantages and beneficial effects:
  • The three-dimensional finger vein feature extraction method and matching method of the present invention can obtain more vein texture features and achieve a better matching and recognition effect, and can also effectively solve the problem of poor matching performance caused by changes in finger posture, thereby improving the accuracy and effectiveness of vein matching and recognition.
  • FIG. 1 is a schematic flowchart of a three-dimensional finger vein feature extraction method and matching method of the present invention
  • FIG. 2 is a schematic diagram of the construction of the three-dimensional finger model of the ellipse model of the present invention
  • FIG. 3 is a schematic view of 360 sector-shaped cylinder regions obtained by rotating and cutting along the axis of a three-dimensional cylinder of the present invention
  • FIG. 4 is a three-dimensional texture expansion map of the present invention;
  • FIG. 5 is a geometric distance feature map of the present invention;
  • FIG. 6 is a schematic diagram of the three-dimensional texture expansion map and the geometric distance feature map passing through the convolutional neural network structure and a fully connected layer with a 256-dimensional output.
  • a three-dimensional finger vein feature extraction method of the present invention includes the following steps:
  • In the first step, three cameras arranged at equal angular intervals capture the finger vein from three viewing angles to obtain two-dimensional finger vein images;
  • In the second step, by calculating the parameters of the three cameras, the two-dimensional finger vein images are mapped onto a three-dimensional model to construct the three-dimensional finger model;
  • In the third step, the three-dimensional finger model is normalized to eliminate the effect of horizontal and vertical finger offsets;
  • In the fourth step, features are extracted from the normalized three-dimensional finger model: (1) the normalized model is processed to generate a three-dimensional texture expansion map and a geometric distance feature map; (2) a convolutional neural network extracts features from the two maps to obtain the vein texture feature and the central-axis geometric distance feature, and the network is trained at the same time.
  • In the second step, mapping the two-dimensional finger vein images onto the three-dimensional model to construct the three-dimensional finger model means: the finger cross-section is approximated as an ellipse, the three-dimensional finger is divided at equal intervals into several cross-sections, the contour of each cross-section is calculated, and the finger is approximately modeled by multiple ellipses of different radii and positions; all contours are then concatenated along the direction of the finger's central axis to obtain an approximate three-dimensional finger model.
  • the method for calculating the profile of each section is:
  • k_ui and b_ui respectively denote the slope and intercept of the line L_ui, and k_bi and b_bi respectively denote the slope and intercept of the line L_bi, with i = 1, 2, 3;
  • B_bi = (2·k_bi·b + 2c)·(b_bi + ξ_bi) + e·k_bi + d;
  • θ_i denotes the angle to the positive direction of the x-axis, i = 1, 2, 3, and y_m is the y value of the camera's optical center, which depends on the camera's intrinsic parameters.
  • the objective optimization function is solved by the gradient descent method under the constraints shown in 3).
  • the main problem is how to set the initial iteration point, because a proper initial iteration point plays an extremely important role in speeding up the speed of optimization and finding the global optimal solution.
  • the method of setting the initial iteration point is determined as follows:
  • According to Brianchon's theorem, a hexagon ABCDEF has an inscribed ellipse if and only if its main diagonals AD, BE and CF intersect at the same point:
  • the area with the abscissa in the range of 91 to 490 is set as the effective area.
  • the effective area is equally divided into N sub-areas.
  • the ellipse in the two-dimensional plane is converted to the three-dimensional space.
  • K is the corrected camera parameter.
  • In the third step, normalizing the three-dimensional finger model to eliminate horizontal and vertical finger offsets means: the least-squares method is used to regress the centers of the approximate ellipses on the cross-sections obtained in the three-dimensional reconstruction onto a central axis, and the coordinates are then normalized using equation (1);
  • (x_m, y_m, z_m) represents the center of the ellipse and (S, W, G) represents the direction of the central axis.
  • In the fourth step, processing the normalized three-dimensional finger model to generate the three-dimensional texture expansion map and the geometric distance feature map means:
  • The three-dimensional point set of each sector-shaped cylinder region is mapped onto the three-dimensional texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM by the following functions:
  • I_F3DTM.col(i) = Γ_t(SC-Block(i))    (1)
  • I_F3DGM.col(i) = Γ_g(SC-Block(i))    (2)
  • where F3DTM and F3DGM denote the three-dimensional texture expansion map and the geometric distance feature map respectively, .col(i) denotes the i-th column of the feature map, and the functions Γ_t and Γ_g divide each sector-shaped cylinder region SC-Block(i) into M blocks along the Z axis at fixed intervals; each pixel of I_F3DTM is obtained as the average pixel value within the corresponding block, and each pixel of I_F3DGM is obtained as the average straight-line distance from the points in the corresponding block to the central axis.
  • Figures 4 and 5 are examples of calculated feature maps.
  • In the fourth step, using a convolutional neural network to extract features from the three-dimensional texture expansion map and the geometric distance feature map, obtaining the vein texture feature and the central-axis geometric distance feature, and training the network at the same time means: as shown in Figure 6, the network is built by stacking four convolutional blocks, each containing 3×3 and 1×1 convolutional layers, a design that effectively reduces the number of parameters while preserving recognition performance; the three-dimensional texture expansion map and the geometric distance feature map are each passed through this network and a fully connected layer with a 256-dimensional output to obtain the vein texture feature and the central-axis geometric distance feature; finally, a SoftMax layer computes the loss and the network is trained.
  • The three-dimensional finger vein feature matching method of the present invention works as follows: the vein texture feature and central-axis geometric distance feature scores of the template sample and the sample to be matched are calculated and fused by weighting, and the fused matching score is judged against a threshold to complete the matching and recognition of the three-dimensional finger vein.
  • Specifically, in the feature matching stage, steps one to four are first applied in turn to the finger vein of the template sample and to the finger vein of the sample to be matched, yielding the vein texture feature and the three-dimensional finger shape feature of the template sample and those of the sample to be matched; the cosine distance D_1 between the vein texture features of the template sample and the sample to be matched, and the cosine distance D_2 between their three-dimensional finger shape features, are then calculated.
  • Their cosine distance formulas are as follows:
  • where F_v1 and F_v2 are the vein texture feature vectors of the template sample and the sample to be matched, and F_d1 and F_d2 are the finger shape feature vectors of the template sample and the sample to be matched, respectively.
  • Afterwards, the cosine distances (matching scores) of the vein texture feature and the finger shape feature are fused by score-level weighting to obtain the total cosine distance D.
  • The fusion weight is determined by randomly selecting 10% of the data as a validation set, traversing candidate weight values on the validation set, and taking the weight that yields the lowest equal error rate after score fusion as the optimal weight; this optimal weight is then used to fuse the matching scores and obtain the final matching result;
  • S = w·S_t + (1−w)·S_g, where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and w is the fusion weight.
  • Finally, a threshold is determined experimentally: when the total cosine distance D is less than the threshold, the samples are judged to match; otherwise they do not match.

Abstract

A three-dimensional finger vein feature extraction method, comprising: step one: a two-dimensional finger vein image is acquired; step two: the two-dimensional finger vein image is mapped onto a three-dimensional model to construct a three-dimensional finger model; step three: the three-dimensional finger model is normalized; and step four: feature extraction is performed on the normalized three-dimensional finger model. A three-dimensional finger vein feature matching method. In said matching method, vein pattern feature and central axis geometric distance feature scores are calculated for a template sample and a sample to be matched, weighted fusion is performed, and determination is performed on fused matching scores by means of a threshold, thereby completing three-dimensional finger vein matching and identification. By means of the present three-dimensional finger vein feature extraction method and the matching method therefor, more vein pattern features can be extracted, resulting in better matching and identification results, and the problem of poor matching and identification performance caused by changes in finger position can be effectively solved, thereby improving the accuracy and effectiveness of vein matching and recognition.

Description

三维指静脉特征提取方法及其匹配方法Three-dimensional finger vein feature extraction method and matching method 技术领域Technical field
本发明涉及静脉识别技术领域,更具体地说,涉及一种三维指静脉特征提取方法及其匹配方法。The present invention relates to the technical field of vein recognition, and more specifically, to a three-dimensional finger vein feature extraction method and matching method.
背景技术Background technique
生物特征识别技术是利用一种或多种人类的生理特征(如指纹、人脸、虹膜、静脉等)或行为特征(如步态、签名等)进行身份认证的一种技术。其中,手指静脉识别技术以其独特的优势开始在身份认证领域占得重要的一席地位,它是一种利用手指表皮下的血管的纹路信息作为个体身份验证的生物特征识别技术。在生物特征识别中,指静脉识别技术因具有安全性高,稳定性强的特点而有着独特的应用优势和应用前景。目前指静脉识别系统都是基于二维静脉图像进行识别,在面对手指摆放姿态不当,特别是手指轴向旋转问题时,其识别性能会大大降低。Biometric recognition technology is a technology that uses one or more human physiological characteristics (such as fingerprints, faces, irises, veins, etc.) or behavioral characteristics (such as gait, signature, etc.) for identity authentication. Among them, the finger vein recognition technology begins to occupy an important position in the field of identity authentication with its unique advantages. It is a biometric recognition technology that uses the vein information of the blood vessels under the skin of the finger as individual identity verification. In biometrics recognition, finger vein recognition technology has unique application advantages and application prospects due to its high security and strong stability. At present, finger vein recognition systems are based on two-dimensional vein images for recognition. When the finger is placed improperly, especially when the finger is rotated axially, its recognition performance will be greatly reduced.
然而,针对不同姿态下手指静脉识别的研究相对较少,现有的少量研究包括:采用椭圆模型对采集的二维手指静脉图像进行拓展,从而实现指静脉图像的标准化,然后再截取有效区域进行匹配;采用圆模型将二维手指静脉图像扩展;或者,采用三维模型的方法,其关键仍然是使用椭圆模型,将六种不同姿态下的手指静脉图像标准化,再做匹配。不管是用哪种物理模型,都在一定程度上改善了同一手指在不同姿态下拍摄的静脉图像之间存在较大差异的情况,但仍然存在的问题是:一方面,对应的纹理区域变少了,不利于匹配;另一方面,边缘区域静脉图像质量受成像因素的影响一般会比较差,同样影响到识别结果。还有一种方法是基于多视图几何的三维成像方法,但这种方案在三维重建时难以找到甚至找不到匹配的特征点,因而难以计算全部静脉纹理的深度信息,此外,这种方法采集的静脉纹理也是只有单侧的,因此仍然存在特征信息 有限的问题。However, there are relatively few studies on finger vein recognition in different postures. A few existing studies include: using ellipse model to expand the collected two-dimensional finger vein images, so as to standardize the finger vein images, and then intercept the effective area for Matching; using the circular model to expand the two-dimensional finger vein image; or, using the three-dimensional model method, the key is still to use the ellipse model to standardize the finger vein images in six different poses before matching. Regardless of which physical model is used, the situation where there are large differences between vein images taken by the same finger in different poses is improved to a certain extent, but the problem still remains: on the one hand, the corresponding texture area becomes less However, it is not conducive to matching; on the other hand, the quality of the vein image in the edge area is generally affected by the imaging factors, which also affects the recognition result. There is another method based on the multi-view geometry 3D imaging method, but this method is difficult to find or even find matching feature points during 3D reconstruction, so it is difficult to calculate the depth information of all vein textures. In addition, this method collects The vein texture is also only one-sided, so there is still a problem of limited feature information.
发明内容Summary of the invention
本发明的目的在于克服现有技术中的缺点与不足,提供一种三维指静脉特征提取方法及其匹配方法,该方法能够获得更多的静脉纹理特征,得到更好的匹配识别效果,同时还可以有效解决手指姿态变化带来匹配识别性能差的问题,从而提高静脉匹配识别的准确性和有效性。The purpose of the present invention is to overcome the shortcomings and deficiencies in the prior art, to provide a three-dimensional finger vein feature extraction method and matching method, the method can obtain more vein texture features, get a better matching recognition effect, and at the same time It can effectively solve the problem of poor matching recognition performance caused by the change of finger posture, thereby improving the accuracy and effectiveness of vein matching recognition.
为了达到上述目的,本发明通过下述技术方案予以实现:一种三维指静脉特征提取方法,其特征在于:包括以下步骤:In order to achieve the above object, the present invention is implemented by the following technical solution: A three-dimensional finger vein feature extraction method, characterized in that it includes the following steps:
第一步，通过三个摄像头在等分角度下从三个角度拍摄手指静脉，获取二维手指静脉图像；In the first step, three cameras arranged at equal angular intervals capture the finger vein from three viewing angles to obtain two-dimensional finger vein images;
第二步,通过计算三个摄像头的参数,将二维手指静脉图像映射到三维模型,实现三维手指模型的构建;In the second step, by calculating the parameters of the three cameras, the two-dimensional finger vein image is mapped to the three-dimensional model to realize the construction of the three-dimensional finger model;
第三步,对三维手指模型进行归一化,实现消除手指水平和垂直偏移带来的影响;The third step is to normalize the three-dimensional finger model to achieve the effect of eliminating the horizontal and vertical offset of the finger;
第四步,对归一化后的三维手指模型进行特征提取:In the fourth step, feature extraction is performed on the normalized 3D finger model:
(1)对归一化后的三维手指模型进行处理生成三维纹理展开图和几何距离特征图;(1) Process the normalized three-dimensional finger model to generate three-dimensional texture expansion map and geometric distance feature map;
(2)采用卷积神经网络分别对三维纹理展开图和几何距离特征图进行特征提取,得到静脉纹理特征和中轴几何距离特征;同时对神经网络进行训练。(2) The convolutional neural network is used to extract features from the three-dimensional texture expansion map and the geometric distance feature map to obtain vein texture features and the central axis geometric distance features; at the same time, the neural network is trained.
第二步中,通过计算三个摄像头的参数,将二维手指静脉图像映射到三维模型,实现三维手指模型的构建是指:将手指剖面图近似视为一个椭圆,将三维手指等距离分割成若干个剖面,计算每个剖面的轮廓,用多个不同半径不同位置的椭圆来对手指近似建模;再将所有轮廓按手指中轴方向串接起来,即可获得近似的三维手指模型。In the second step, by calculating the parameters of the three cameras, the two-dimensional finger vein image is mapped to the three-dimensional model, and the construction of the three-dimensional finger model means that the finger profile is approximately regarded as an ellipse, and the three-dimensional finger is divided into equal distances into For several profiles, calculate the profile of each profile, and use multiple ellipses with different radii and different positions to approximate the model of the finger; then connect all the profiles in series according to the direction of the central axis of the finger to obtain an approximate three-dimensional finger model.
第三步,对三维手指模型进行归一化,实现消除手指水平和垂直偏移带来的影响是指:采用最小二乘法将三维重建中得到的每个横截面上近似椭圆的中心回归到一条中轴线上,然后利用下列等式(1),对坐标进行归一化;The third step is to normalize the three-dimensional finger model to achieve the effect of eliminating the horizontal and vertical offset of the finger. It means that the least square method is used to return the center of the approximate ellipse on each cross-section obtained in the three-dimensional reconstruction to a line. On the central axis, then use the following equation (1) to normalize the coordinates;
[Equation (1) — image PCTCN2019113883-appb-000001 in the original: normalization of the coordinates using the ellipse center (x m, y m, z m) and the central-axis direction (S, W, G)]
其中,(x m,y m,z m)代表椭圆的中点,(S,W,G)代表中轴线的方向。 Among them, (x m , y m , z m ) represents the midpoint of the ellipse, and (S, W, G) represents the direction of the central axis.
通过上述归一化,可以使得手指轴向与三维模型的中轴线一致,并使得三维模型的中心点与原点一致,进而消除了水平和垂直运动带来的偏移。Through the above normalization, the axial direction of the finger can be consistent with the central axis of the three-dimensional model, and the center point of the three-dimensional model can be consistent with the origin, thereby eliminating the offset caused by the horizontal and vertical movements.
第四步中,对归一化后的三维手指模型进行处理生成三维纹理展开图和几何距离特征图是指:In the fourth step, processing the normalized three-dimensional finger model to generate a three-dimensional texture expansion map and geometric distance feature map means:
首先,定义扇形柱体区域为SC-Block(i),其中i作为下标取值从1到N;沿着三维柱体的轴心进行旋转切割,得到360个扇形柱体区域;将扇形柱体区域的底面圆心角范围设置为((i-1)·Δα,i·Δα];同时,设置柱体高度Z的范围为[z min,z max],其中,z min和z max分别表示高度的最小值和最大值;N表示特征图的宽度,N=360/Δα,Δα是角度采样间隔; First, define the sector-shaped cylinder area as SC-Block (i), where i is a subscript from 1 to N; rotate and cut along the axis of the three-dimensional cylinder to obtain 360 sector-shaped cylinder areas; The range of the center angle of the bottom surface of the volume area is set to ((i-1) · Δα, i · Δα]; at the same time, the range of the cylinder height Z is set to [z min , z max ], where z min and z max represent respectively The minimum and maximum heights; N represents the width of the feature map, N = 360 / Δα, and Δα is the angle sampling interval;
然后,通过以下函数将扇形柱体区域集合的三维点集映射到三维纹理展开图I F3DTM和几何距离特征图I F3DGM上: Then, the three-dimensional point set of the fan-shaped cylinder region set is mapped to the three-dimensional texture expansion map I F3DTM and the geometric distance feature map I F3DGM by the following function:
I F3DTM.col(i)=Γ t(SC-Block(i))    (1) I F3DTM .col (i) = Γ t (SC-Block (i)) (1)
I F3DGM.col(i)=Γ g(SC-Block(i))    (2) I F3DGM .col (i) = Γ g (SC-Block (i)) (2)
其中,F3DTM和F3DGM分别代表三维纹理展开图和几何距离特征图,.col(i)表示特征图的第i列,而函数Γ g、Γ t则分别将扇形柱体区域集合SC-Block(i)以固定的间隔从Z轴进行划分切割为M块;其中,I F3DTM每一像素通过计算区域内的平均像素值获得,而I F3DGM的每一像素则通过计算对应区域内点集到中轴线的直线距离的平均值来获得。 Among them, F3DTM and F3DGM respectively represent the three-dimensional texture expansion map and geometric distance feature map, .col (i) represents the i-th column of the feature map, and the functions Γ g and Γ t respectively collect the fan-shaped cylinder area set SC-Block (i ) Divide and cut into M blocks from the Z axis at fixed intervals; wherein, each pixel of IF3DTM is obtained by calculating the average pixel value in the area, and each pixel of IF3DGM is calculated by calculating the point set in the corresponding area to the central axis The average value of the linear distance is obtained.
第四步中,采用卷积神经网络分别对三维纹理展开图和几何距离特征图进行特征提取,得到静脉纹理特征和中轴几何距离特征;同时对神经网络进行训练是指:该神经网络结构是由四个包含3×3和1×1卷积层的卷积块连续堆叠而成,这样设计能在有效减少参数量的同时保证其识别性能;三维纹理展开图和几何距离特征图分别依次通过神经网络结构和256维输出的全连接层,得到256维的静脉纹理特征和256维的中轴几何距离特征;最后通过SoftMax层计算损失并对网络进行训练。In the fourth step, the convolutional neural network is used to extract features of the three-dimensional texture expansion map and the geometric distance feature map to obtain the vein texture feature and the central axis geometric distance feature; at the same time, training the neural network means that the structure of the neural network is It is composed of four convolutional blocks containing 3 × 3 and 1 × 1 convolutional layers continuously stacked, so that the design can effectively reduce the number of parameters while ensuring its recognition performance; the three-dimensional texture expansion map and geometric distance feature map are passed in turn The neural network structure and the 256-dimensional output of the fully connected layer obtain the 256-dimensional vein texture feature and the 256-dimensional central axis geometric distance feature. Finally, the SoftMax layer calculates the loss and trains the network.
以下对提出的三维指静脉匹配方法进行说明:The following describes the proposed three-dimensional finger vein matching method:
通过计算模板样本和待匹配样本的静脉纹理特征和中轴几何距离特征分数,并进行加权融合,通过阈值来对融合后的匹配分数进行判定,完成三维指静脉的匹配识别。By calculating the vein texture feature and median geometric distance feature score of the template sample and the sample to be matched, and performing weighted fusion, the fusion matching score is judged by the threshold, and the matching recognition of the three-dimensional finger vein is completed.
具体为:在特征匹配阶段,先对需要匹配的模板样本手指静脉和待匹配样本手指静脉分别依次进行第一步至第四步,分别得到模板样本的静脉纹理和三维手指形状特征(中轴几何距离特征),以及待匹配样本的静脉纹理和三维手指形状特征;分别计算模板样本和待匹配样本静脉纹理特征的余弦距离D 1,以及模板样本和待匹配样本的三维手指形状特征的余弦距离D 2。他们的余弦距离的公式分别,如下: Specifically, in the feature matching stage, firstly, the template sample finger vein to be matched and the sample finger vein to be matched are sequentially performed from the first step to the fourth step, respectively, to obtain the vein texture and three-dimensional finger shape features (middle-axis geometry) Distance feature), and the vein texture and three-dimensional finger shape feature of the sample to be matched; the cosine distance D 1 of the template sample and the vein texture feature of the sample to be matched, and the cosine distance D of the three-dimensional finger shape feature of the template sample and the sample to be matched, respectively 2 . Their cosine distance formulas are as follows:
[Equations — image PCTCN2019113883-appb-000002 in the original: D 1 and D 2 are the cosine distances between the vein texture feature vectors and between the finger shape feature vectors, respectively]
其中F v1,F v2分别为模板样本和带匹配样本手指的静脉特征向量,F d1,F d2或手指形状特征向量。 F v1 and F v2 are the template sample and the vein feature vector of the finger with the matching sample, F d1 , F d2 or finger shape feature vector, respectively.
之后,对静脉纹理特征和手指形状特征的余弦距离(匹配分数)进行分数层加权融合,得到总余弦距离D。其中,融合的权重通过在数据中随机抽取10%作为验证集,在验证集上遍历权重值,取使得融合匹配分数之后等误率最低的权重值作为最佳权重,使用这个最佳权重对匹配结果进行加权融合,得到最终的匹配结果;Then, the cosine distance (matching score) of the vein texture feature and the finger shape feature is scored and weighted to obtain the total cosine distance D. Among them, the fusion weight is randomly selected 10% in the data as the verification set, and the weight value is traversed on the verification set, and the weight value with the lowest error rate after the fusion matching score is taken as the best weight, and the best weight is used to match The results are weighted and fused to get the final matching result;
S=w·S t+(1-w)·S g S = w · S t + (1-w) · S g
其中,S为最终匹配分数,S t为纹理匹配分数,S g为形状匹配分数,w为融合权重。 Where S is the final matching score, S t is the texture matching score, S g is the shape matching score, and w is the fusion weight.
最后通过实验确定一个阈值,当总余弦距离D小于阈值时,判断为匹配,否则不匹配。Finally, a threshold is determined through experiments. When the total cosine distance D is less than the threshold, it is determined as a match, otherwise it does not match.
与现有技术相比,本发明具有如下优点与有益效果:本发明三维指静脉特征提取方法及其匹配方法能够获得更多的静脉纹理特征,得到更好的匹配识别效果,同时还可以有效解决手指姿态变化带来匹配识别性能差的问题,从而提高静脉匹配识别的准确性和有效性。Compared with the prior art, the present invention has the following advantages and beneficial effects: The three-dimensional finger vein feature extraction method and matching method of the present invention can obtain more vein texture features, get a better matching recognition effect, and can also be effectively solved The change of finger pose brings the problem of poor matching recognition performance, thereby improving the accuracy and effectiveness of vein matching recognition.
附图说明BRIEF DESCRIPTION
图1是本发明三维指静脉特征提取方法和匹配方法的流程示意图;1 is a schematic flowchart of a three-dimensional finger vein feature extraction method and matching method of the present invention;
图2是本发明椭圆模型的三维手指模型构建示意图;2 is a schematic diagram of the construction of the three-dimensional finger model of the ellipse model of the present invention;
图3是本发明沿三维柱体轴心进行旋转切割得到360个扇形柱体区域示意图;3 is a schematic view of 360 sector-shaped cylinder regions obtained by rotating and cutting along the axis of a three-dimensional cylinder of the present invention;
图4是本发明三维纹理展开图;4 is a three-dimensional texture development diagram of the present invention;
图5是本发明几何距离特征图;Figure 5 is a geometric distance characteristic diagram of the present invention;
图6是本发明三维纹理展开图和几何距离特征图通过卷积神经网络结构和256维输出的全连接层的示意图。6 is a schematic diagram of a fully connected layer through a convolutional neural network structure and a 256-dimensional output of the three-dimensional texture expansion map and geometric distance feature map of the present invention.
具体实施方式detailed description
下面结合附图与具体实施方式对本发明作进一步详细的描述。The present invention will be described in further detail below with reference to the drawings and specific embodiments.
实施例Examples
如图1至图6所示,本发明一种三维指静脉特征提取方法包括以下步骤:As shown in FIGS. 1 to 6, a three-dimensional finger vein feature extraction method of the present invention includes the following steps:
第一步，通过三个摄像头在等分角度下从三个角度拍摄手指静脉，获取二维手指静脉图像；In the first step, three cameras arranged at equal angular intervals capture the finger vein from three viewing angles to obtain two-dimensional finger vein images;
第二步,通过计算三个摄像头的参数,将二维手指静脉图像映射到三维模型,实现三维手指模型的构建;In the second step, by calculating the parameters of the three cameras, the two-dimensional finger vein image is mapped to the three-dimensional model to realize the construction of the three-dimensional finger model;
第三步,对三维手指模型进行归一化,实现消除手指水平和垂直偏移带来的影响;The third step is to normalize the three-dimensional finger model to achieve the effect of eliminating the horizontal and vertical offset of the finger;
第四步,对归一化后的三维手指模型进行特征提取:In the fourth step, feature extraction is performed on the normalized 3D finger model:
(1)对归一化后的三维手指模型进行处理生成三维纹理展开图和几何距离特征图;(1) Process the normalized three-dimensional finger model to generate three-dimensional texture expansion map and geometric distance feature map;
(2)采用卷积神经网络分别对三维纹理展开图和几何距离特征图进行特征提取,得到静脉纹理特征和中轴几何距离特征;同时对神经网络进行训练。(2) The convolutional neural network is used to extract features from the three-dimensional texture expansion map and the geometric distance feature map to obtain vein texture features and the central axis geometric distance features; at the same time, the neural network is trained.
其中,第二步中,通过计算三个摄像头的参数,将二维手指静脉图像映射到三维模型,实现三维手指模型的构建是指:将手指剖面图近似视为一个椭圆,将三维手指等距离分割成若干个剖面,计算每个剖面的轮廓,用多个不同半径 不同位置的椭圆来对手指近似建模;再将所有轮廓按手指中轴方向串接起来,即可获得近似的三维手指模型。Among them, in the second step, by calculating the parameters of the three cameras, the two-dimensional finger vein image is mapped to the three-dimensional model, and the construction of the three-dimensional finger model refers to: the finger profile is approximately regarded as an ellipse, and the three-dimensional finger is equidistant Split into several sections, calculate the contour of each section, and use multiple ellipses with different radii and different positions to approximate the model of the finger; then connect all the contours in series according to the direction of the central axis of the finger to obtain an approximate three-dimensional finger model .
计算每个剖面的轮廓的方法是:The method for calculating the profile of each section is:
1)根据三个摄像头的投影中心C 1,C 2,C 3建立xOy坐标系(2D-CS),如图2所示. 1) Establish the xOy coordinate system (2D-CS) according to the projection centers C 1 , C 2 , and C 3 of the three cameras, as shown in Figure 2.
2)确定椭圆和直线方程的。2) Determine the ellipse and straight line equations.
设椭圆的方程如下:The equation for the ellipse is as follows:
a·x² + b·y² + 2c·x·y + d·x + e·y + f = 0  (image PCTCN2019113883-appb-000003 in the original)
每个摄像机的投影中心记为C i(x,y),这样,直线C iU i(L ui),C iB i(L ni)的方程可以求得,在这里我们只讨论直线斜率存在的情况: The projection center of each camera is recorded as C i (x, y). In this way, the equations of the straight lines C i U i (L ui ) and C i B i (L ni ) can be obtained. Here we only discuss the existence of straight line slope Case:
L ui:y=k uix+b ui L ui : y = k ui x + b ui
L bi:y=k bix+b bi L bi : y = k bi x + b bi
其中i=1,2,3,k ui和b ui分别代表直线L ui的斜率和截距,k bi和b bi分别代表直线L bi的斜率和截距。 Where i = 1, 2, 3, k ui and b ui respectively represent the slope and intercept of the straight line L ui , and k bi and b bi respectively represent the slope and intercept of the straight line L bi .
3)确定约束条件3) Determine the constraints
作这些约束直线的平行线,就像图2所示,使它们与椭圆相切,假设这些平行线的方程为:To make these parallel lines constraining the straight lines, as shown in Figure 2, make them tangent to the ellipse, assuming the equations for these parallel lines are:
L ui : y = k ui·x + b ui + ξ ui
L bi : y = k bi·x + b bi + ξ bi
由,L ui,L bi与椭圆相切的条件可以得到下面的约束方程: From the condition that Lui , Lbi and the ellipse are tangent, the following constraint equation can be obtained:
B ui² − 4·A ui·C ui = 0,  B bi² − 4·A bi·C bi = 0,  i = 1, 2, 3  (image PCTCN2019113883-appb-000004 in the original)
其中:among them:
A ui = a + b·k ui² + 2c·k ui
A bi = a + b·k bi² + 2c·k bi  (image PCTCN2019113883-appb-000005 in the original)
B ui = (2·k ui·b + 2c)·(b ui + ξ ui) + e·k ui + d
B bi = (2·k bi·b + 2c)·(b bi + ξ bi) + e·k bi + d
C ui = b·(b ui + ξ ui)² + e·(b ui + ξ ui) + f
C bi = b·(b bi + ξ bi)² + e·(b bi + ξ bi) + f
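The tangency constraints above can be checked numerically. The following Python sketch (an illustration, not part of the original patent text) evaluates the discriminant B² − 4AC of the quadratic obtained by substituting a line y = k·x + m into the general conic a·x² + b·y² + 2c·x·y + d·x + e·y + f = 0, using the coefficient expressions listed above with m = b_ui + ξ_ui; a zero residual indicates tangency.

```python
import numpy as np

def tangency_residual(conic, k, m):
    """Residual of the tangency condition between the line y = k*x + m and the
    conic a*x^2 + b*y^2 + 2*c*x*y + d*x + e*y + f = 0 (patent notation).
    Substituting the line into the conic gives A*x^2 + B*x + C = 0; the line
    is tangent exactly when the discriminant B^2 - 4*A*C vanishes."""
    a, b, c, d, e, f = conic
    A = a + b * k**2 + 2 * c * k
    B = (2 * k * b + 2 * c) * m + e * k + d
    C = b * m**2 + e * m + f
    return B**2 - 4 * A * C

# Example: the unit circle x^2 + y^2 - 1 = 0 and the horizontal line y = 1.
circle = (1.0, 1.0, 0.0, 0.0, 0.0, -1.0)
print(np.isclose(tangency_residual(circle, k=0.0, m=1.0), 0.0))  # True: tangent
print(tangency_residual(circle, k=0.0, m=0.5) > 0)               # True: secant
```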
4)目标优化函数4) Objective optimization function
正如前面所说,椭圆必须非常接近约束方程,所以我们的目标是使椭圆到所有直线的距离的和最小化:As mentioned earlier, the ellipse must be very close to the constraint equation, so our goal is to minimize the sum of the distances from the ellipse to all straight lines:
[Objective function — image PCTCN2019113883-appb-000006 in the original: minimize the sum of the distances from the ellipse to all of the constraint lines]
5)求解算法5) Solving algorithm
①单个剖面椭圆求解①Single section ellipse solution
我们需要将图像上的边缘点坐标转换到2D-CS上,转换关系为下列公式:We need to convert the coordinates of the edge points on the image to 2D-CS, the conversion relationship is the following formula:
[Equation — image PCTCN2019113883-appb-000007 in the original: transformation of the image edge-point coordinates into the 2D-CS coordinate system]
式中，θ i表示（原文中以图片PCTCN2019113883-appb-000008表示的量）与x轴正方向的夹角，i=1,2,3，y m表示摄像机光学中心的y值，它与摄像机内参数有关。Where θ i (i = 1, 2, 3) represents the angle between the quantity shown as image PCTCN2019113883-appb-000008 in the original and the positive direction of the x-axis, and y m represents the y value of the camera's optical center, which is related to the camera's intrinsic parameters.
对于每一个椭圆,在3)中所示的约束条件下用梯度下降法求解目标优化函数。主要的问题是如何设置初始迭代点,因为一个恰当的初始迭代点对加快寻优的速度和找到全局最优解两方面具有极其重要的作用。通过大量的实验,确定了设置初始迭代点的方法如下:For each ellipse, the objective optimization function is solved by the gradient descent method under the constraints shown in 3). The main problem is how to set the initial iteration point, because a proper initial iteration point plays an extremely important role in speeding up the speed of optimization and finding the global optimal solution. Through a large number of experiments, the method of setting the initial iteration point is determined as follows:
一个椭圆共有五个独立变量,包括椭圆中心的横、纵坐标值,椭圆长半轴长度,离心率和旋转角度,我们的问题能够转化为计算六边形的近似内切椭圆,根据Brianchon’s理论,一个六边形ABCDEF有一个内切椭圆当且仅当它的主对角线AD,BE和CF交于同一点:There are five independent variables in an ellipse, including the horizontal and vertical coordinates of the center of the ellipse, the length of the major axis of the ellipse, the eccentricity and the angle of rotation. Our problem can be transformed into calculating the approximate inscribed ellipse of the hexagon. According to Brianchon's theory, A hexagon ABCDEF has an inscribed ellipse if and only if its main diagonal AD, BE and CF intersect at the same point:
[(A×D),(B×E),(C×F)]=0[(A × D), (B × E), (C × F)] = 0
在我们的模型的计算过程中,这些对角线一般是两两相交的,我们设置椭圆的初始中心点(C 0)为这些交点组成的三角形的重点,计算如公式如下所示,设置初始长轴长为初始中心点到六边形六个顶点距离最小值。 In the calculation process of our model, these diagonal lines generally intersect each other. We set the initial center point (C 0 ) of the ellipse as the key point of the triangle formed by these intersection points. The calculation is as shown in the formula below. Set the initial length The axis length is the minimum distance from the initial center point to the six vertices of the hexagon.
[Equations — images PCTCN2019113883-appb-000009 and PCTCN2019113883-appb-000010 in the original: the initial center point C 0 is the centroid of the triangle formed by the pairwise intersections of the diagonals AD, BE and CF, and the initial major-axis length is the minimum distance from C 0 to the six hexagon vertices]
此外,我们设置初始的离心率和旋转角为固定值,其中离心率设为e 0=1.4,旋转角设为α=0,这两个常数值是通过实验确定的。 In addition, we set the initial eccentricity and rotation angle to fixed values, where the eccentricity is set to e 0 = 1.4, and the rotation angle is set to α = 0. These two constant values are determined by experiment.
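As a concrete reading of the initialization described above (the exact formulas appear only as images PCTCN2019113883-appb-000009 and -000010 in the original), the sketch below computes the initial ellipse center as the centroid of the pairwise intersections of the main diagonals AD, BE and CF, and the initial major-axis length as the minimum distance from that center to the six hexagon vertices; the helper names are hypothetical, and the fixed constants e0 = 1.4 and α = 0 are taken from the text above.

```python
import numpy as np

def _cross2(a, b):
    """2D cross product (z-component) of two plane vectors."""
    return a[0] * b[1] - a[1] * b[0]

def line_intersection(p1, p2, p3, p4):
    """Intersection point of the line through p1, p2 with the line through p3, p4."""
    d1, d2 = p2 - p1, p4 - p3
    t = _cross2(p3 - p1, d2) / _cross2(d1, d2)
    return p1 + t * d1

def initial_ellipse_guess(hexagon):
    """hexagon: (6, 2) array holding the vertices A..F in order.
    Returns the initial center C0, the initial major-axis length, and the fixed
    initial eccentricity parameter and rotation angle stated in the patent."""
    hexagon = np.asarray(hexagon, dtype=float)
    A, B, C, D, E, F = hexagon
    p1 = line_intersection(A, D, B, E)
    p2 = line_intersection(B, E, C, F)
    p3 = line_intersection(C, F, A, D)
    c0 = (p1 + p2 + p3) / 3.0                      # centroid of the intersections
    axis0 = np.linalg.norm(hexagon - c0, axis=1).min()
    return c0, axis0, 1.4, 0.0
```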
②整个三维手指模型重建②Reconstruction of the entire 3D finger model
考虑到计算越多的剖面就可以得到越精确的三维手指模型,但是如果使用椭圆近似法计算越多的椭圆,就会消耗越多的时间,而在实际应用时,并不希望在三维手指重建上耗费太多时间,通过观察,发现到这样一个细节,手指表面在轴向上的变化是比较平缓的,我们可以先在轴向上选择一些稀疏的边缘点集合重建部分椭圆,然后用插值的方法扩充近似椭圆的数量。然而,这就出现了另外一个问题,如果选取的边缘点集合中,由于图像质量较差或部分边缘地方噪声较大,从而导致所检测边缘的位置是存在比较大误差,那么重建的手指模型就会在一些地方存在比较大的缺陷,为了减小这方面的影响,以及权衡重建精度与时间损耗,我们提出了更加鲁棒的算法构建手指三维模型:Considering that the more sections are calculated, the more accurate three-dimensional finger model can be obtained, but if the ellipse approximation method is used to calculate more ellipses, it will consume more time, and in practical applications, it is not desirable to reconstruct the three-dimensional finger It took too much time to observe. Through observation, we found such a detail that the change in the axial direction of the finger surface is relatively smooth. We can first select some sparse sets of edge points in the axial direction to reconstruct part of the ellipse, and then use the interpolated The method expands the number of approximate ellipses. However, there is another problem. If the selected edge point set has poor image quality or noise at some edges, resulting in a relatively large error in the position of the detected edge, then the reconstructed finger model There will be relatively large defects in some places. In order to reduce the impact of this aspect and weigh the reconstruction accuracy and time loss, we propose a more robust algorithm to build a three-dimensional model of the finger:
假设在每个矫正之后的图像中,将横坐标在91~490范围内的区域设置为有效的区域。按横轴方向,将有效区域等分成N个子区域,在每个子区域,我们选择一组边缘点重建出椭圆,获得所有子区域的椭圆之后,我们采用插值算法扩展椭圆数据。最后将二维平面下的椭圆转换到三维空间下。Suppose that in each corrected image, the area with the abscissa in the range of 91 to 490 is set as the effective area. According to the direction of the horizontal axis, the effective area is equally divided into N sub-areas. In each sub-area, we select a set of edge points to reconstruct the ellipse. After obtaining the ellipses of all sub-areas, we use interpolation algorithm to expand the ellipse data. Finally, the ellipse in the two-dimensional plane is converted to the three-dimensional space.
[Equation — image PCTCN2019113883-appb-000011 in the original: conversion of the two-dimensional ellipses into three-dimensional space using the corrected camera intrinsic parameters K]
我们将一个椭圆的z坐标值设置为相同值,无需改变二维椭圆的方程,K是矫正后的摄像机内参数。We set the z-coordinate value of an ellipse to the same value without changing the equation of the two-dimensional ellipse. K is the corrected camera parameter.
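The sub-region fitting and interpolation strategy described above can be sketched as follows; the five-parameter ellipse representation and the helper names are assumptions made only for illustration.

```python
import numpy as np

def densify_ellipses(sub_region_x, ellipse_params, x_dense):
    """Expand a sparse set of fitted ellipses by interpolation along the finger axis.
    sub_region_x   : (N,) abscissas of the N sub-regions inside the effective area
                     (columns 91 to 490 of each corrected image).
    ellipse_params : (N, 5) array, one row per fitted ellipse, e.g. (center x,
                     center y, major-axis length, eccentricity, rotation angle).
    x_dense        : abscissas at which interpolated ellipses are wanted."""
    ellipse_params = np.asarray(ellipse_params, dtype=float)
    return np.stack(
        [np.interp(x_dense, sub_region_x, ellipse_params[:, j])
         for j in range(ellipse_params.shape[1])],
        axis=1,
    )

# Usage sketch: fit one ellipse per sub-region, then densify to one per column.
# centers = np.linspace(91, 490, 20)                  # 20 sub-regions (assumed)
# dense = densify_ellipses(centers, fitted_params, np.arange(91, 491))
```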
第三步,对三维手指模型进行归一化,实现消除手指水平和垂直偏移带来的影响是指:采用最小二乘法将三维重建中得到的每个横截面上近似椭圆的中心回归到一条中轴线上,然后利用下列等式(1),对坐标进行归一化;The third step is to normalize the three-dimensional finger model to achieve the effect of eliminating the horizontal and vertical offset of the finger. It means that the least square method is used to return the center of the approximate ellipse on each cross-section obtained in the three-dimensional reconstruction to a line. On the central axis, then use the following equation (1) to normalize the coordinates;
[Equation (1) — image PCTCN2019113883-appb-000012 in the original: normalization of the coordinates using the ellipse center (x m, y m, z m) and the central-axis direction (S, W, G)]
其中,(x m,y m,z m)代表椭圆的中点,(S,W,G)代表中轴线的方向。通过上述归一化,可以使得手指轴向与三维模型的中轴线一致,并使得三维模型的中心点与原点一致,进而消除了水平和垂直运动带来的偏移。 Among them, (x m , y m , z m ) represents the midpoint of the ellipse, and (S, W, G) represents the direction of the central axis. Through the above normalization, the axial direction of the finger can be consistent with the central axis of the three-dimensional model, and the center point of the three-dimensional model can be consistent with the origin, thereby eliminating the offset caused by the horizontal and vertical movements.
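Since equation (1) is reproduced only as an image, the following sketch shows one plausible implementation of the normalization described above: the central axis is fitted to the ellipse centers by least squares (here via a principal-direction fit), the model is translated so that its center coincides with the origin, and the points are rotated so that the axis direction (S, W, G) coincides with the z-axis. It is an interpretation, not the patent's exact formula.

```python
import numpy as np

def normalize_finger_model(points, ellipse_centers):
    """points: (P, 3) reconstructed finger-surface points;
    ellipse_centers: (C, 3) centers (x_m, y_m, z_m) of the cross-section ellipses."""
    centers = np.asarray(ellipse_centers, dtype=float)
    mean = centers.mean(axis=0)
    # Least-squares central axis: principal direction of the ellipse centers.
    _, _, vt = np.linalg.svd(centers - mean)
    axis = vt[0] / np.linalg.norm(vt[0])                 # direction (S, W, G)
    # Rotation taking `axis` onto the z-axis (Rodrigues' formula).
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(axis, z)
    s, c = np.linalg.norm(v), float(np.dot(axis, z))
    if s < 1e-12:
        rot = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        rot = np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)
    return (np.asarray(points, dtype=float) - mean) @ rot.T
```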
第四步中,对归一化后的三维手指模型进行处理生成三维纹理展开图和几何距离特征图是指:In the fourth step, processing the normalized three-dimensional finger model to generate a three-dimensional texture expansion map and geometric distance feature map means:
首先,定义扇形柱体区域为SC-Block(i),其中i作为下标取值从1到N;沿着三维柱体的轴心进行旋转切割,得到360个扇形柱体区域,如图3所示。将扇形柱体区域的底面圆心角范围设置为((i-1)·Δα,i·Δα];同时,设置柱体高度Z的范围为[z min,z max],其中,z min和z max分别表示高度的最小值和最大值;N表示特征图的宽度,N=360/Δα,Δα是角度采样间隔; First, define the sector-shaped cylinder area as SC-Block (i), where i is a subscript from 1 to N; rotate and cut along the axis of the three-dimensional cylinder to obtain 360 sector-shaped cylinder areas, as shown in Figure 3 As shown. Set the range of the center angle of the bottom of the fan-shaped cylinder area to ((i-1) · Δα, i · Δα]; meanwhile, set the range of the cylinder height Z to [z min , z max ], where z min and z max represents the minimum and maximum values of height respectively; N represents the width of the feature map, N = 360 / Δα, and Δα is the angle sampling interval;
然后,通过以下函数将扇形柱体区域集合的三维点集映射到三维纹理展开图I F3DTM和几何距离特征图I F3DGM上: Then, the three-dimensional point set of the fan-shaped cylinder region set is mapped to the three-dimensional texture expansion map I F3DTM and the geometric distance feature map I F3DGM by the following function:
I F3DTM.col(i)=Γ t(SC-Block(i))    (1) I F3DTM .col (i) = Γ t (SC-Block (i)) (1)
I F3DGM.col(i)=Γ g(SC-Block(i))    (2) I F3DGM .col (i) = Γ g (SC-Block (i)) (2)
其中,F3DTM和F3DGM分别代表三维纹理展开图和几何距离特征图,.col(i)表示特征图的第i列,而函数Γ g、Γ t则分别将扇形柱体区域集合SC-Block(i)以固定的间隔从Z轴进行划分切割为M块;其中,I F3DTM每一像素通过计算区域内的平均像素值获得,而I F3DGM的每一像素则通过计算对应区域内点集到中轴线的直线距离的平均值来获得。本实施例设置Δα=1,M=360。图4和图5为计算所得的特征图示例。 Among them, F3DTM and F3DGM respectively represent the three-dimensional texture expansion map and geometric distance feature map, .col (i) represents the i-th column of the feature map, and the functions Γ g and Γ t respectively collect the fan-shaped cylinder area set SC-Block (i ) Divide and cut into M blocks from the Z axis at fixed intervals; wherein, each pixel of IF3DTM is obtained by calculating the average pixel value in the area, and each pixel of IF3DGM is calculated by calculating the point set in the corresponding area to the central axis The average value of the linear distance is obtained. In this embodiment, Δα = 1 and M = 360. Figures 4 and 5 are examples of calculated feature maps.
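The construction of I_F3DTM and I_F3DGM described above can be sketched as follows, assuming the model has already been normalized so that the central axis is the z-axis; array shapes and helper names are illustrative assumptions.

```python
import numpy as np

def feature_maps(points, gray, delta_alpha=1.0, m_blocks=360, z_range=None):
    """points: (P, 3) normalized surface points; gray: (P,) gray value per point.
    Each point falls into sector SC-Block(i) by its angle around the z-axis and
    into one of M blocks by its height; I_F3DTM averages the gray values per cell
    and I_F3DGM averages the straight-line distance to the central axis per cell."""
    n_cols = int(round(360.0 / delta_alpha))             # N = 360 / delta_alpha
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    z_min, z_max = z_range if z_range is not None else (z.min(), z.max())
    ang = np.degrees(np.arctan2(y, x)) % 360.0
    col = np.minimum((ang / delta_alpha).astype(int), n_cols - 1)
    row = np.minimum(((z - z_min) / (z_max - z_min) * m_blocks).astype(int),
                     m_blocks - 1)
    radial = np.hypot(x, y)                              # distance to the axis

    f3dtm = np.zeros((m_blocks, n_cols))
    f3dgm = np.zeros((m_blocks, n_cols))
    count = np.zeros((m_blocks, n_cols))
    np.add.at(count, (row, col), 1)
    np.add.at(f3dtm, (row, col), gray)
    np.add.at(f3dgm, (row, col), radial)
    filled = count > 0
    f3dtm[filled] /= count[filled]
    f3dgm[filled] /= count[filled]
    return f3dtm, f3dgm
```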
第四步中,采用卷积神经网络分别对三维纹理展开图和几何距离特征图进行特征提取,得到静脉纹理特征和中轴几何距离特征;同时对神经网络进行训练是指:如图6所示,该神经网络结构是由四个包含3×3和1×1卷积层的卷积块连续堆叠而成,这样设计能在有效减少参数量的同时保证其识别性能;三 维纹理展开图和几何距离特征图分别依次通过神经网络结构和256维输出的全连接层,得到静脉纹理特征和中轴几何距离特征;最后通过SoftMax层计算损失并对网络进行训练。In the fourth step, the convolutional neural network is used to extract features of the three-dimensional texture expansion map and the geometric distance feature map to obtain vein texture features and central axis geometric distance features; meanwhile, training the neural network means: as shown in Figure 6 , The neural network structure is composed of four convolutional blocks containing 3 × 3 and 1 × 1 convolutional layers stacked in succession, so that the design can effectively reduce the number of parameters while ensuring its recognition performance; 3D texture expansion map and geometry The distance feature map respectively passes through the neural network structure and the 256-dimensional output fully connected layer to obtain the vein texture feature and the central axis geometric distance feature; finally, the SoftMax layer calculates the loss and trains the network.
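A minimal PyTorch sketch of the network described above is given below. The patent specifies four stacked convolutional blocks with 3×3 and 1×1 convolutions, a fully connected layer with a 256-dimensional output and a SoftMax training loss; the channel widths, pooling layers and activation functions are assumptions.

```python
import torch
import torch.nn as nn

class VeinFeatureNet(nn.Module):
    """Four convolutional blocks (3x3 + 1x1 convolutions each), a fully connected
    layer producing the 256-dimensional feature, and a classification head used
    only for the SoftMax (cross-entropy) training loss."""
    def __init__(self, num_classes, channels=(32, 64, 128, 256)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in channels:
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            in_ch = out_ch
        self.backbone = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels[-1], 256)
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):
        feat = self.fc(self.pool(self.backbone(x)).flatten(1))
        return feat, self.classifier(feat)        # 256-d feature + logits

# One such network is trained per map (texture and geometric distance), with
# nn.CrossEntropyLoss applied to the logits.
```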
本发明三维指静脉特征匹配方法是这样的:通过计算模板样本和待匹配样本的静脉纹理特征和中轴几何距离特征分数,并进行加权融合,通过阈值来对融合后的匹配分数进行判定,完成三维指静脉的匹配识别。The three-dimensional finger vein feature matching method of the present invention is as follows: by calculating the vein texture feature and mid-axis geometric distance feature score of the template sample and the sample to be matched, and performing weighted fusion, the fusion score is determined by a threshold to complete Three-dimensional finger vein matching recognition.
具体为:在特征匹配阶段,先对需要匹配的模板样本手指静脉和待匹配样本手指静脉分别依次进行第一步至第四步,分别得到模板样本的静脉纹理和三维手指形状特征,以及待匹配样本的静脉纹理和三维手指形状特征;分别计算模板样本和待匹配样本静脉纹理特征的余弦距离D 1,以及模板样本和待匹配样本的三维手指形状特征的余弦距离D 2。他们的余弦距离的公式分别,如下: Specifically, in the feature matching stage, the first step and the fourth step are performed on the finger veins of the template sample to be matched and the finger veins of the sample to be matched, respectively, to obtain the vein texture and three-dimensional finger shape features of the template sample, and to be matched The vein texture and three-dimensional finger shape features of the sample; the cosine distance D 1 of the vein texture feature of the template sample and the sample to be matched, and the cosine distance D 2 of the three-dimensional finger shape feature of the template sample and the sample to be matched are calculated respectively. Their cosine distance formulas are as follows:
[Equations — image PCTCN2019113883-appb-000013 in the original: D 1 and D 2 are the cosine distances between the vein texture feature vectors and between the finger shape feature vectors, respectively]
其中F v1,F v2分别为模板样本和带匹配样本手指的静脉特征向量,F d1,F d2分别为模板样本和带匹配样本手指的形状特征向量。 Among them, F v1 and F v2 are the template sample and the vein feature vector of the finger with the matching sample, and F d1 and F d2 are the shape feature vector of the template sample and the finger with the matching sample, respectively.
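The cosine distances D 1 and D 2 can be computed as sketched below. Because the exact formula is reproduced only as an image, the 1-minus-cosine-similarity form is an assumption, chosen so that a smaller distance means a better match, consistent with the thresholding rule described below.

```python
import numpy as np

def cosine_distance(f1, f2):
    """Cosine distance between two feature vectors (assumed 1 - cosine similarity)."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return 1.0 - float(f1 @ f2) / (np.linalg.norm(f1) * np.linalg.norm(f2))

# D1 = cosine_distance(F_v1, F_v2)   # vein texture features
# D2 = cosine_distance(F_d1, F_d2)   # finger shape features
```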
之后,对静脉纹理特征和手指形状特征的余弦距离(匹配分数)进行分数层加权融合得到总余弦距离D。其中,融合的权重通过在数据中随机抽取10%作为验证集,在验证集上遍历权重值,取使得融合匹配分数之后等误率最低的权重值作为最佳权重,使用这个最佳权重对匹配结果进行加权融合,得到最终的匹配结果;After that, the cosine distance (matching score) of the vein texture feature and the finger shape feature is scored and weighted to obtain the total cosine distance D. Among them, the fusion weight is randomly selected 10% in the data as the verification set, and the weight value is traversed on the verification set, and the weight value with the lowest error rate after the fusion matching score is taken as the best weight, and the best weight is used to match The results are weighted and fused to get the final matching result;
S=w·S t+(1-w)·S g S = w · S t + (1-w) · S g
其中,S为最终匹配分数,S t为纹理匹配分数,S g为形状匹配分数,w为融合权重。 Where S is the final matching score, S t is the texture matching score, S g is the shape matching score, and w is the fusion weight.
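The weight search on the validation set can be sketched as follows. The equal-error-rate computation is a simple threshold sweep written only to make the procedure concrete; it is not specified in the patent.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Approximate EER for distance scores (smaller = more similar);
    labels: 1 for genuine pairs, 0 for impostor pairs."""
    scores, labels = np.asarray(scores, dtype=float), np.asarray(labels)
    best = 1.0
    for thr in np.unique(scores):
        far = np.mean(scores[labels == 0] < thr)    # impostors accepted
        frr = np.mean(scores[labels == 1] >= thr)   # genuine pairs rejected
        best = min(best, max(far, frr))
    return best

def search_fusion_weight(s_t, s_g, labels, steps=101):
    """Traverse w on the validation set (10% of the data) and keep the weight
    whose fused score S = w*S_t + (1-w)*S_g gives the lowest equal error rate."""
    weights = np.linspace(0.0, 1.0, steps)
    eers = [equal_error_rate(w * s_t + (1 - w) * s_g, labels) for w in weights]
    return weights[int(np.argmin(eers))]
```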
最后通过实验确定一个阈值,当总余弦距离D小于阈值时,判断为匹配,否则不匹配。Finally, a threshold is determined through experiments. When the total cosine distance D is less than the threshold, it is determined as a match, otherwise it does not match.
上述实施例为本发明较佳的实施方式,但本发明的实施方式并不受上述实施例的限制,其他的任何未背离本发明的精神实质与原理下所作的改变、修饰、替代、组合、简化,均应为等效的置换方式,都包含在本发明的保护范围之内。The above embodiments are preferred embodiments of the present invention, but the embodiments of the present invention are not limited by the above embodiments. Any other changes, modifications, substitutions, combinations, changes, modifications, substitutions, combinations, etc. made without departing from the spirit and principle of the present invention The simplifications should all be equivalent replacement methods, which are all included in the protection scope of the present invention.

Claims (8)

  1. 一种三维指静脉特征提取方法,其特征在于:包括以下步骤:A three-dimensional finger vein feature extraction method, characterized in that it includes the following steps:
    第一步，通过三个摄像头在等分角度下从三个角度拍摄手指静脉，获取二维手指静脉图像；In the first step, three cameras arranged at equal angular intervals capture the finger vein from three viewing angles to obtain two-dimensional finger vein images;
    第二步,通过计算三个摄像头的参数,将二维手指静脉图像映射到三维模型,实现三维手指模型的构建;In the second step, by calculating the parameters of the three cameras, the two-dimensional finger vein image is mapped to the three-dimensional model to realize the construction of the three-dimensional finger model;
    第三步,对三维手指模型进行归一化,实现消除手指水平和垂直偏移带来的影响;The third step is to normalize the three-dimensional finger model to achieve the effect of eliminating the horizontal and vertical offset of the finger;
    第四步,对归一化后的三维手指模型进行特征提取:In the fourth step, feature extraction is performed on the normalized 3D finger model:
    (1)对归一化后的三维手指模型进行处理生成三维纹理展开图和几何距离特征图;(1) Process the normalized three-dimensional finger model to generate three-dimensional texture expansion map and geometric distance feature map;
    (2)采用卷积神经网络分别对三维纹理展开图和几何距离特征图进行特征提取,得到静脉纹理特征和中轴几何距离特征;同时对神经网络进行训练。(2) The convolutional neural network is used to extract features from the three-dimensional texture expansion map and the geometric distance feature map to obtain vein texture features and the central axis geometric distance features; at the same time, the neural network is trained.
  2. 根据权利要求1所述的三维指静脉特征提取方法,其特征在于:第二步中,通过计算三个摄像头的参数,将二维手指静脉图像映射到三维模型,实现三维手指模型的构建是指:将手指剖面图近似视为一个椭圆,将三维手指等距离分割成若干个剖面,计算每个剖面的轮廓,用多个不同半径不同位置的椭圆来对手指近似建模;再将所有轮廓按手指中轴方向串接起来,即可获得近似的三维手指模型。The three-dimensional finger vein feature extraction method according to claim 1, characterized in that in the second step, by calculating the parameters of the three cameras, the two-dimensional finger vein image is mapped to the three-dimensional model to realize the construction of the three-dimensional finger model means : The finger profile is approximately regarded as an ellipse, the three-dimensional finger is divided into several sections at equal distances, the contour of each section is calculated, and multiple ellipses with different radii and different positions are used to approximate the model of the finger; The three-dimensional finger model can be obtained by connecting the fingers in the axial direction in series.
  3. The three-dimensional finger vein feature extraction method according to claim 1, characterized in that in the third step, normalizing the three-dimensional finger model to eliminate the influence of horizontal and vertical finger offsets means: the least squares method is used to regress the centres of the approximate ellipses of all cross-sections obtained in the three-dimensional reconstruction onto a single central axis, and the coordinates are then normalized using the following equation (1);
    [Equation (1) is reproduced only as image PCTCN2019113883-appb-100001 in the original filing.]
    where (x_m, y_m, z_m) denotes the centre of the ellipse and (S, W, G) denotes the direction of the central axis.
  4. The three-dimensional finger vein feature extraction method according to claim 1, characterized in that in the fourth step, processing the normalized three-dimensional finger model to generate the three-dimensional texture expansion map and the geometric distance feature map means:
    first, a sector-cylinder region is defined as SC-Block(i), where the subscript i takes values from 1 to N; rotational cutting is performed along the axis of the three-dimensional cylinder to obtain 360 sector-cylinder regions; the central-angle range of the base of the sector-cylinder region is set to ((i−1)·Δα, i·Δα]; meanwhile, the range of the cylinder height Z is set to [z_min, z_max], where z_min and z_max denote the minimum and maximum heights, respectively; N denotes the width of the feature map, N = 360/Δα, where Δα is the angular sampling interval;
    then, the three-dimensional point sets of the sector-cylinder region set are mapped onto the three-dimensional texture expansion map I_F3DTM and the geometric distance feature map I_F3DGM by the following functions:
    I_F3DTM.col(i) = Γ_t(SC-Block(i))   (1)
    I_F3DGM.col(i) = Γ_g(SC-Block(i))   (2)
    where F3DTM and F3DGM denote the three-dimensional texture expansion map and the geometric distance feature map, respectively; .col(i) denotes the i-th column of the feature map; and the functions Γ_g and Γ_t each divide the sector-cylinder region set SC-Block(i) into M blocks at fixed intervals along the Z axis; each pixel of I_F3DTM is obtained by computing the average pixel value within the corresponding block, while each pixel of I_F3DGM is obtained by computing the average straight-line distance from the points of the corresponding block to the central axis.
  5. The three-dimensional finger vein feature extraction method according to claim 1, characterized in that in the fourth step, using a convolutional neural network to extract features from the three-dimensional texture expansion map and the geometric distance feature map, respectively, to obtain the vein texture feature and the central-axis geometric distance feature while training the neural network means: the neural network is formed by successively stacking four convolution blocks, each containing 3×3 and 1×1 convolutional layers; the three-dimensional texture expansion map and the geometric distance feature map are each passed in turn through this network and a fully connected layer with a 256-dimensional output, yielding a 256-dimensional vein texture feature and a 256-dimensional central-axis geometric distance feature; finally, the loss is computed by a SoftMax layer and the network is trained.
  6. The three-dimensional finger vein feature matching method according to claim 1, characterized in that: the vein texture feature scores and central-axis geometric distance feature scores of a template sample and of a sample to be matched are computed and fused by weighting, and the fused matching score is judged against a threshold, thereby completing three-dimensional finger vein matching and recognition.
  7. The three-dimensional finger vein feature matching method according to claim 6, characterized in that computing the vein texture feature and central-axis geometric distance feature scores of the template sample and the sample to be matched, performing weighted fusion, and judging the fused matching score against a threshold to complete three-dimensional finger vein matching and recognition means:
    in the feature matching stage, the first to fourth steps are first carried out in turn on the finger vein of the template sample and on the finger vein of the sample to be matched, yielding the vein texture and three-dimensional finger shape features of the template sample and of the sample to be matched, respectively; the cosine distance D_1 between the vein texture features of the template sample and the sample to be matched, and the cosine distance D_2 between their three-dimensional finger shape features, are then computed; the cosine distance formulas are as follows:
    [The cosine distance formulas are reproduced only as image PCTCN2019113883-appb-100002 in the original filing.]
    where F_v1 and F_v2 are the vein feature vectors of the template sample finger and of the finger of the sample to be matched, respectively, and F_d1 and F_d2 are the shape feature vectors of the template sample finger and of the finger of the sample to be matched, respectively.
  8. The three-dimensional finger vein feature matching method according to claim 7, characterized in that: score-level weighted fusion is applied to the cosine distances of the vein texture feature and the finger shape feature to obtain the total cosine distance D; the fusion weight is determined by randomly drawing 10% of the data as a validation set, traversing candidate weight values on this validation set, and taking the weight that yields the lowest equal error rate after score fusion as the optimal weight; this optimal weight is used to weight and fuse the matching results, giving the final matching result;
    S = w·S_t + (1 − w)·S_g
    where S is the final matching score, S_t is the texture matching score, S_g is the shape matching score, and w is the fusion weight;
    finally, a threshold is determined experimentally: when the total cosine distance D is less than the threshold, the result is judged as a match; otherwise it is not a match.
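For illustration only, and not as part of the claims, the following Python sketch gives one possible reading of the sector-cylinder unwrapping of claim 4 and of a cosine-distance comparison as in claim 7. The point-cloud layout, the grayscale array and all helper names (unwrap_finger_model, cosine_distance) are assumptions, and the distance is taken as 1 minus the cosine similarity, a common convention, since the exact formula appears only as an image in the filing.

```python
# Illustrative sketch only, not part of the claims. The (x, y, z) point layout,
# the gray-value array and all helper names are assumptions made for this example.
import numpy as np

def unwrap_finger_model(points, gray, z_min, z_max, delta_alpha=1.0, m_rows=64):
    """One possible reading of claim 4: split the normalised finger model into
    sector-cylinder blocks and map them to a texture map (I_F3DTM) and a
    geometric-distance map (I_F3DGM).

    points : (P, 3) array of x, y, z coordinates with the central axis along z
    gray   : (P,)  grayscale value attached to each surface point
    """
    n_cols = int(360 / delta_alpha)                      # N = 360 / delta_alpha
    f3dtm = np.zeros((m_rows, n_cols))
    f3dgm = np.zeros((m_rows, n_cols))

    # Column index: angle of each point around the axis, binned by delta_alpha.
    ang = (np.degrees(np.arctan2(points[:, 1], points[:, 0])) + 360.0) % 360.0
    col = np.minimum((ang / delta_alpha).astype(int), n_cols - 1)

    # Row index: height bin along the axis (M blocks between z_min and z_max).
    row = ((points[:, 2] - z_min) / (z_max - z_min) * m_rows).astype(int)
    row = np.clip(row, 0, m_rows - 1)

    radius = np.linalg.norm(points[:, :2], axis=1)       # distance to central axis

    for r in range(m_rows):
        for c in range(n_cols):
            mask = (row == r) & (col == c)
            if np.any(mask):
                f3dtm[r, c] = gray[mask].mean()          # mean gray value in block
                f3dgm[r, c] = radius[mask].mean()        # mean distance to axis
    return f3dtm, f3dgm

def cosine_distance(f1, f2):
    """Distance between two feature vectors, taken here as 1 - cosine similarity;
    the exact formula in the filing is given only as an image."""
    return 1.0 - float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))
```

Likewise, a plausible (assumed) PyTorch layout of the backbone described in claim 5, with four stacked convolution blocks that each contain a 3×3 and a 1×1 convolutional layer and a 256-dimensional fully connected output trained with a SoftMax loss, might look like this; channel widths, pooling and the classifier size are not specified in the claim and are chosen here only for concreteness:

```python
# Illustrative sketch only, not part of the claims; architecture details beyond
# the claim wording are assumptions.
import torch
import torch.nn as nn

class VeinBranch(nn.Module):
    def __init__(self, num_classes, channels=(32, 64, 128, 256)):
        super().__init__()
        blocks, in_ch = [], 1                      # single-channel feature map input
        for out_ch in channels:                    # four stacked convolution blocks
            blocks += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.embedding = nn.Linear(channels[-1], 256)   # 256-dimensional feature
        self.classifier = nn.Linear(256, num_classes)   # trained with SoftMax/CE loss

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        feat = self.embedding(x)
        return feat, self.classifier(feat)

# Example shapes (assumed): a batch of 4 maps of size 64 x 360.
# feat, logits = VeinBranch(num_classes=100)(torch.randn(4, 1, 64, 360))
```

The 256-dimensional embeddings produced by such a branch for the texture map and the distance map would then be compared with cosine_distance and fused as described in claim 8.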
PCT/CN2019/113883 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor WO2020083407A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2019368520A AU2019368520B2 (en) 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811235227.9A CN109543535B (en) 2018-10-23 2018-10-23 Three-dimensional finger vein feature extraction method and matching method thereof
CN201811235227.9 2018-10-23

Publications (1)

Publication Number Publication Date
WO2020083407A1 true WO2020083407A1 (en) 2020-04-30

Family

ID=65844535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113883 WO2020083407A1 (en) 2018-10-23 2019-10-29 Three-dimensional finger vein feature extraction method and matching method therefor

Country Status (3)

Country Link
CN (1) CN109543535B (en)
AU (1) AU2019368520B2 (en)
WO (1) WO2020083407A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543535B (en) * 2018-10-23 2021-12-21 华南理工大学 Three-dimensional finger vein feature extraction method and matching method thereof
CN110378425B (en) * 2019-07-23 2021-10-22 武汉珞思雅设科技有限公司 Intelligent image comparison method and system
CN110363250A (en) * 2019-07-23 2019-10-22 北京隆普智能科技有限公司 A kind of method and its system of 3-D image intelligent Matching
CN110827342B (en) * 2019-10-21 2023-06-02 中国科学院自动化研究所 Three-dimensional human body model reconstruction method, storage device and control device
CN110909778B (en) * 2019-11-12 2023-07-21 北京航空航天大学 Image semantic feature matching method based on geometric consistency
CN111009007B (en) * 2019-11-20 2023-07-14 广州光达创新科技有限公司 Finger multi-feature comprehensive three-dimensional reconstruction method
CN111931758B (en) * 2020-10-19 2021-01-05 北京圣点云信息技术有限公司 Face recognition method and device combining facial veins
CN112101332B (en) * 2020-11-23 2021-02-19 北京圣点云信息技术有限公司 Feature extraction and comparison method and device based on 3D finger veins
CN113012271B (en) * 2021-03-23 2022-05-24 华南理工大学 Finger three-dimensional model texture mapping method based on UV (ultraviolet) mapping
CN112990160B (en) * 2021-05-17 2021-11-09 北京圣点云信息技术有限公司 Facial vein identification method and identification device based on photoacoustic imaging technology
CN113673477A (en) * 2021-09-02 2021-11-19 青岛奥美克生物信息科技有限公司 Palm vein non-contact three-dimensional modeling method and device and authentication method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851126A (en) * 2015-04-30 2015-08-19 中国科学院深圳先进技术研究院 Three-dimensional model decomposition method and three-dimensional model decomposition device based on generalized cylinder
US20180068100A1 (en) * 2016-09-02 2018-03-08 International Business Machines Corporation Three-dimensional fingerprint scanner
CN106919941A (en) * 2017-04-26 2017-07-04 华南理工大学 A kind of three-dimensional finger vein identification method and system
CN108009520A (en) * 2017-12-21 2018-05-08 东南大学 A kind of finger vein identification method and system based on convolution variation self-encoding encoder neutral net
CN109543535A (en) * 2018-10-23 2019-03-29 华南理工大学 Three-dimensional refers to vena characteristic extracting method and its matching process

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612831A (en) * 2020-05-22 2020-09-01 创新奇智(北京)科技有限公司 Depth estimation method and device, electronic equipment and storage medium
CN112560710A (en) * 2020-12-18 2021-03-26 北京曙光易通技术有限公司 Method for constructing finger vein recognition system and finger vein recognition system
CN112560710B (en) * 2020-12-18 2024-03-01 北京曙光易通技术有限公司 Method for constructing finger vein recognition system and finger vein recognition system
CN113689344A (en) * 2021-06-30 2021-11-23 中国矿业大学 Low-exposure image enhancement method based on feature decoupling learning
CN113780095A (en) * 2021-08-17 2021-12-10 中移(杭州)信息技术有限公司 Training data expansion method of face recognition model, terminal device and medium
CN113780095B (en) * 2021-08-17 2023-12-26 中移(杭州)信息技术有限公司 Training data expansion method, terminal equipment and medium of face recognition model
CN114821682A (en) * 2022-06-30 2022-07-29 广州脉泽科技有限公司 Multi-sample mixed palm vein identification method based on deep learning algorithm

Also Published As

Publication number Publication date
CN109543535A (en) 2019-03-29
AU2019368520A1 (en) 2021-05-06
CN109543535B (en) 2021-12-21
AU2019368520B2 (en) 2022-10-06

Similar Documents

Publication Publication Date Title
WO2020083407A1 (en) Three-dimensional finger vein feature extraction method and matching method therefor
KR101314131B1 (en) Three dimensional human face recognition method based on intermediate frequency information in geometry image
Kang et al. Study of a full-view 3D finger vein verification technique
CN105956582B (en) A kind of face identification system based on three-dimensional data
WO2018196371A1 (en) Three-dimensional finger vein recognition method and system
CN102880866B (en) Method for extracting face features
CN100559398C (en) Automatic deepness image registration method
CN103310196B (en) The finger vein identification method of area-of-interest and direction element
CN101558996B (en) Gait recognition method based on orthogonal projection three-dimensional reconstruction of human motion structure
CN101398886A (en) Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN107123161A (en) A kind of the whole network three-dimensional rebuilding method of contact net zero based on NARF and FPFH
CN102043961B (en) Vein feature extraction method and method for carrying out identity authentication by utilizing double finger veins and finger-shape features
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
CN101650777B (en) Corresponding three-dimensional face recognition method based on dense point
CN107093205A (en) A kind of three dimensions building window detection method for reconstructing based on unmanned plane image
CN110807781B (en) Point cloud simplifying method for retaining details and boundary characteristics
CN106485721B (en) The method and its system of retinal structure are obtained from optical coherence tomography image
CN106446773B (en) Full-automatic robust three-dimensional face detection method
CN106204557A (en) A kind of extracting method of the non-complete data symmetrical feature estimated with M based on extension Gaussian sphere
WO2015131468A1 (en) Method and system for estimating fingerprint pose
CN105631899A (en) Ultrasonic image motion object tracking method based on gray-scale texture feature
CN110910387A (en) Point cloud building facade window extraction method based on significance analysis
CN110751680A (en) Image processing method with fast alignment algorithm
CN106778491A (en) The acquisition methods and equipment of face 3D characteristic informations
CN104268502B (en) Means of identification after human vein image characteristics extraction

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19876247; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2019368520; Country of ref document: AU; Date of ref document: 20191029; Kind code of ref document: A)
32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 24.09.2021))
122 Ep: PCT application non-entry in European phase (Ref document number: 19876247; Country of ref document: EP; Kind code of ref document: A1)