CN112241680A - Multi-mode identity authentication method based on vein similar image knowledge migration network - Google Patents
- Publication number
- CN112241680A (application CN202010962646.3A)
- Authority
- CN
- China
- Prior art keywords
- model
- vein
- dimensional feature
- fine
- feature vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06V40/161: Human faces; detection, localisation, normalisation
- G06F18/213: Feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
- G06F18/22: Matching criteria, e.g. proximity measures
- G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06F21/31: User authentication
- G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G06N3/045: Neural network architectures; combinations of networks
- G06N3/08: Neural network learning methods
- G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The invention discloses a multimodal identity authentication method based on a knowledge transfer network for similar vein images, built on a similar-image knowledge transfer learning network model and a supervised bag-of-words model. The invention relates to the field of computer vision. A knowledge transfer network based on vein image similarity trains and fine-tunes, in turn, a face recognition model, a vein identity authentication model, and a vein gender determination model; the fine-tuned network extracts features from vein images, and a supervised bag-of-words model re-encodes the high-dimensional gender feature vectors output by the vein gender determination model for identity authentication and gender determination. By exploiting the similar attributes shared by neighboring models, the disclosed knowledge transfer network and supervised bag-of-words model make the feature representation parameter spaces overlap before model fine-tuning, which improves identity recognition accuracy and ensures the discriminative and generalization performance of the model.
Description
Technical Field
The invention relates to the field of hand vein recognition, and in particular to a multimodal identity authentication method based on a knowledge transfer network for similar vein images.
Background Art
Veins are among the most important structures by which the human body carries nutrients and metabolites. Compared with other biometric modalities (such as fingerprints, iris, gestures, and faces), veins are hard to counterfeit and widely accepted, and vein recognition has become one of the most popular personal identification methods. In addition, the convenience of image acquisition and the robustness of the feature representation enable broader and more accurate vein-based personal identification systems.
Although designing a robust identity authentication system based on vein recognition has clear potential advantages, traditional feature extraction methods suffer from small source vein image training sets and weak feature learning ability. On the basis of vein images, a knowledge transfer network model based on similar images is therefore proposed for the first time; it guarantees the validity of the feature representation parameters and effectively prevents overfitting.
However, in traditional pattern feature encoding models the feature information lacks semantic validity, so various pattern recognition problems (feature representation, image segmentation, image denoising, saliency detection, etc.) cannot be solved effectively from the feature distribution. A supervised bag-of-words model based on vein images with gender attributes is therefore proposed for the first time: the high-dimensional feature vectors output by the gender determination model are re-encoded, removing redundant information and improving the representation ability of the feature vectors. Such feature encoding schemes, however, still adapt poorly to problem samples such as rotated images.
Summary of the Invention
The purpose of the present invention is to provide a multimodal identity authentication method based on a knowledge transfer network for similar vein images. The invention effectively ensures the discriminative and generalization performance of the model, improves classification performance, and yields a more robust and efficient method for gender and identity recognition from hand vein images.
The technical solution for realizing the purpose of the present invention is a multimodal identity authentication method based on a knowledge transfer network for similar vein images, comprising the following steps:
Step 1: Under near-infrared conditions, build a vein image library and a face image library.
Collect a number of dorsal hand vein sample images to establish a laboratory vein image library, and process the images in it with an ROI extraction method to obtain valid vein sample images of size M*N, yielding the vein database, where M ∈ [100, 224] and N ∈ [100, 224].
Collect a number of face images to establish a near-infrared face image library, and use the VGG16 convolutional neural network to perform face detection and localization on all images in it, obtaining valid-region face data images of size A*B, with A = M and B = N, yielding the face image library.
Step 2: Adopt the similar-image-based "coarse-to-fine" transfer learning strategy and obtain high-dimensional feature vectors with identity attributes through a linear regression classifier.
Step 2-1: Pre-train a deep convolutional network on the face image library and take the resulting VGG face deep convolutional neural network as the initial model. Fine-tune the initial model on the near-infrared face image library, which shares face attributes with the face database, to obtain the FRM of the knowledge transfer network, where fine-tuning the FRM output layer with a linear regression classifier yields high-dimensional feature vectors with near-infrared attributes.
Step 2-2: Fine-tune the FRM on the laboratory vein image library, which shares near-infrared imaging attributes with the near-infrared face image library, to obtain the VIM, where fine-tuning the VIM output layer with a linear regression classifier yields high-dimensional feature vectors with gender attributes.
Step 2-3: Fine-tune the VIM on the vein database with gender attributes to obtain the VGM, where fine-tuning the VGM output layer with a linear regression classifier yields high-dimensional feature vectors with identity attributes.
Step 3: Use a supervised bag-of-words model to re-encode the high-dimensional feature vectors output by the VGM output layer, discarding redundant features to obtain m-dimensional feature vectors carrying the effective information, where m is chosen according to the final recognition performance and the time cost of the system.
Step 4: Input the m-dimensional feature vectors into the improved SVM classifier, the LDM, to classify identity and gender information, completing the non-end-to-end vein recognition task and obtaining the classification results.
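The four steps above can be sketched end to end as follows. This is a minimal illustrative pipeline, not the patent's implementation: the deep FRM/VIM/VGM network, the supervised bag-of-words encoder, and the LDM are replaced by trivial stand-ins, and all function names are assumptions.

```python
import numpy as np

def extract_roi(image, size=(224, 224)):
    """Step 1 (sketch): crop a centered square ROI and resize by subsampling."""
    h, w = image.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    roi = image[top:top + side, left:left + side]
    ys = np.linspace(0, side - 1, size[0]).astype(int)
    xs = np.linspace(0, side - 1, size[1]).astype(int)
    return roi[np.ix_(ys, xs)]

def vgm_features(roi):
    """Step 2 (stub): stands in for the fine-tuned FRM -> VIM -> VGM network."""
    return roi.astype(float).ravel()

def supervised_bow(features, m=512):
    """Step 3 (stub): keep the m highest-magnitude dimensions."""
    idx = np.argsort(-np.abs(features))[:m]
    return features[idx]

def ldm_classify(vector):
    """Step 4 (stub): trivial linear decision in place of the trained LDM."""
    return int(vector.mean() > 0)

# One synthetic 460*680 "vein image" pushed through the whole chain.
image = np.random.default_rng(0).normal(size=(460, 680))
label = ldm_classify(supervised_bow(vgm_features(extract_roi(image))))
```

The real method replaces each stub with the trained component described in the detailed steps below; only the data flow is intended to match.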
Compared with the prior art, the significant advantages of the present invention are:
(1) A similar-image-based "coarse-to-fine" transfer learning strategy is proposed that exploits the inherent correlation between adjacent models to generate powerful task-specific deep neural network models.
(2) To improve the effectiveness of the model for a specific task while ensuring stable knowledge transfer, the classification function in the end-to-end model of the network is improved during fine-tuning, so as to obtain feature representation parameters for the specific classification task.
(3) A bag-of-words supervised feature selection method is proposed and implemented for generating better feature representations, in which the dimensions important to the predefined task are highlighted and redundant features are suppressed for better performance.
Brief Description of the Drawings
FIG. 1 is a flowchart of the multimodal identity authentication method based on the similar vein image knowledge transfer network according to the present invention.
FIG. 2 shows sample images from the vein dataset collected in the laboratory, where (a) and (b) are female vein samples and (c) and (d) are male vein samples.
FIG. 3 shows the ROI extraction results, where (a) is the original vein image, (b) is the ROI localization image, and (c) is the extracted ROI result image.
FIG. 4 compares the recognition results of different network fine-tuning strategies.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.
With reference to FIG. 1, the multimodal identity authentication method based on the similar vein image knowledge transfer network according to the present invention comprises the following steps:
Step 1: Under near-infrared conditions, build the vein image library and the face database.
First, build the vein image library, the near-infrared face image library, and the face image library under near-infrared conditions, and collect a number of dorsal hand vein sample images, with the size of the collected sample images set to M*N.
Collect a number of face images to establish the near-infrared face image library, and use the VGG16 convolutional neural network to perform face detection and localization on all images in it, obtaining valid-region face data images of size A*B (A = M, B = N), yielding the face image library.
Step 2: Adopt the similar-image-based "coarse-to-fine" transfer learning strategy and obtain high-dimensional feature vectors with identity attributes through a linear regression classifier.
Step 2-1: Construct the initial face recognition model using the similar-image-based "coarse-to-fine" transfer learning strategy. Pre-train a deep convolutional neural network on the face image library, choosing the VGG model of the Caffe library as the pre-trained model, and take the resulting VGG face deep convolutional network as the initial model. Fine-tune the initial model on the near-infrared face image library, which shares face attributes with the face image library, to obtain the transitional face recognition model (FRM) of the knowledge transfer network, where fine-tuning the FRM output layer with a linear regression classifier yields high-dimensional feature vectors with near-infrared attributes.
Step 2-2: Fine-tune the FRM on the laboratory vein image library, which shares near-infrared imaging attributes with the near-infrared face image library, to obtain the vein identity authentication model (VIM); during fine-tuning, the VIM output layer is fine-tuned with a linear regression classifier to obtain high-dimensional feature vectors with gender attributes.
Step 2-3: Based on the VIM, fine-tune the vein image library with gender attributes on the VIM, improving its network output layer and loss function, to obtain the vein gender determination model (VGM), where fine-tuning the VGM output layer with a linear regression classifier yields high-dimensional feature vectors with identity attributes.
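The sequential FRM -> VIM -> VGM fine-tuning chain can be illustrated with a toy stand-in. This sketch assumes three synthetic, correlated binary tasks and a logistic-regression "model" trained by gradient descent, where each stage starts from the previous stage's weights; the patent itself fine-tunes VGG-scale convolutional networks on real image libraries.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_tune(w, X, y, lr=0.1, steps=200):
    """Continue gradient training from the given weights (the transfer step)."""
    for _ in range(steps):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def toy_task(w_true, n=200):
    """Synthetic binary task whose labels follow the direction w_true."""
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(float)
    return X, y

# Three correlated task directions stand in for the face, vein-identity,
# and vein-gender datasets of the FRM -> VIM -> VGM chain.
w_face = rng.normal(size=d)
w_vein = w_face + 0.3 * rng.normal(size=d)
w_gender = w_vein + 0.3 * rng.normal(size=d)

w = fine_tune(np.zeros(d), *toy_task(w_face))    # pre-train: "FRM"
w = fine_tune(w, *toy_task(w_vein))              # fine-tune: "VIM"
w = fine_tune(w, *toy_task(w_gender))            # fine-tune: "VGM"

X_test, y_test = toy_task(w_gender, n=500)
acc = ((sigmoid(X_test @ w) > 0.5).astype(float) == y_test).mean()
```

Because the tasks share structure, each fine-tuning stage starts close to a useful solution, which is the intuition the "coarse-to-fine" strategy relies on.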
The linear regression classifier solves for the high-dimensional feature vectors during the fine-tuning of the FRM, VIM, and VGM as follows:
Suppose a deep convolutional neural network (DCNN) has K+1 layers, where the k-th layer has d_k units, k ∈ [1, K]. For a value x from the grayscale matrix of an input training sample image, the output of the k-th layer of the DCNN is given by formula (1):
H^(k) = W^(k) ⊗ H^(k-1) + b^(k), with H^(0) = x    (1)
where W^(k) denotes the convolution weights of the current layer, b^(k) the bias parameters of the current layer, H^(k) the feature representation result of the k-th hidden layer, and ⊗ the data-transmission operation applied between connected layers;
the main convolution weights and bias parameters of the FRM, VIM, and VGM are denoted by the corresponding W^(k) and b^(k) of each model.
During fine-tuning based on the linear regression classifier, for given input training samples (x_i, y_i), where i indexes the current sample image, the classification error L(W^(k), b^(k), C) is expressed as formula (2):
L(W^(k), b^(k), C) = ||Y - C·H^(K)(X)||_F^2    (2)
where ||·||_F denotes the Frobenius norm of a matrix, X = {x_1, ..., x_m} denotes the grayscale matrices of the given input training sample images, Y = {y_1, ..., y_m} denotes the corresponding ground-truth values, and C is the model parameter of the linear regression classifier.
The network model improved by logistic regression is trained by optimizing objective function (2) with a stochastic sub-gradient descent strategy; the sub-gradients with respect to the three model parameters W^(k), b^(k), and C are computed as follows:
First, the intermediate variables used for the specific gradient computations are defined as in formula (3).
Based on the intermediate variables defined in (3), the gradient computations and model-solving method for the three model parameters are given by formula (4).
After the gradients have been solved for the given input and model definition, L-BFGS substitutes the gradient solutions into formula (4) to solve the unconstrained model, yielding the high-dimensional feature vectors corresponding to the FRM, VIM, and VGM, respectively.
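As a rough illustration of the linear-regression read-out described by formula (2), the sketch below regresses one-hot labels onto feature vectors with a Frobenius-norm least-squares objective. Random features stand in for the DCNN outputs, and a closed-form least-squares solve replaces the sub-gradient descent and L-BFGS steps of the text; these substitutions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for the deep features H (n samples x d dims) and the
# one-hot ground-truth matrix Y of formula (2).
n, d, classes = 300, 64, 4
H = rng.normal(size=(n, d))
labels = rng.integers(0, classes, size=n)
Y = np.eye(classes)[labels]

# Minimize ||Y - H C||_F^2 over the classifier parameter C: a closed-form
# least-squares solve in place of stochastic sub-gradient descent.
C, *_ = np.linalg.lstsq(H, Y, rcond=None)

pred = (H @ C).argmax(axis=1)   # class scores from the linear read-out
train_acc = float((pred == labels).mean())
```

In the patent's setting, H would be the output of the fine-tuned network's last hidden layer rather than random noise.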
Step 3: Then use the supervised bag-of-words model to re-encode the high-dimensional feature vectors output by the VGM output layer, discarding redundant features to obtain m-dimensional feature vectors carrying the effective information, where m is chosen according to the final recognition performance and the time cost of the system, as follows:
Suppose {(x_1, y_1), ..., (x_n, y_n)} denotes the feature-vector distribution of n dorsal hand vein training samples, and compute the corresponding normalized vector. Its physical meaning is the classification hyperplane between the two sample types (male and female vein images); the support vectors s_i and the product terms (α_i y_i) in the hyperplane computation can be obtained by minimizing the objective function shown in formula (8):
Here α_i corresponds to the non-zero product terms. The expression above can be regarded as a constrained quadratic programming problem, so all of its parameters can be solved by the Lagrangian method. Each element of the solved classification hyperplane represents the weight of the corresponding dimension of the m-dimensional feature vector; the larger the value, the more that dimension contributes to the final gender classification. In the actual experiments of this method, m is set to 512 in view of the final recognition performance and the time cost of the system. Redundant information is then removed, giving m-dimensional feature vectors that carry the effective information. This effectively remedies the defect that the high-dimensional feature distribution output directly by the VGM layer contains a large amount of redundant information and lowers the recognition rate of the system.
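The dimension-ranking role of the hyperplane can be sketched as follows. This is an assumption-laden stand-in: a plain hinge-loss subgradient solver replaces the Lagrangian QP of formula (8), synthetic vectors replace the VGM outputs, and the magnitudes of the hyperplane weights are used to keep the m = 512 most informative dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for VGM output vectors: n samples, labels +/-1 for
# male/female. Only the first 50 dimensions carry class information.
n, d, m = 400, 2048, 512
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)
X[:, :50] += 0.8 * y[:, None]

def linear_svm(X, y, lam=0.01, lr=0.1, epochs=50):
    """Hinge-loss subgradient descent; a stand-in for the QP of formula (8)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margin = y * (X @ w)
        mask = margin < 1.0                       # samples inside the margin
        grad = lam * w - (X[mask] * y[mask, None]).sum(axis=0) / len(y)
        w -= lr * grad
    return w

w = linear_svm(X, y)
keep = np.argsort(-np.abs(w))[:m]                 # hyperplane weights rank dims
X_reduced = X[:, keep]                            # re-encoded m-dim vectors
```

Dimensions with large hyperplane weight survive the re-encoding, which mirrors how the supervised bag-of-words step suppresses redundant features.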
Step 4: Input the m-dimensional feature vectors into the improved SVM classifier, the LDM, to classify identity and gender information, completing the non-end-to-end vein recognition task and obtaining the classification results. Specifically, the m-dimensional feature vectors are fed into the LDM, whose training parameters are kept exactly the same as the parameters used when fine-tuning the network.
Input the m-dimensional effective feature information into the LDM model, and compute the classification-plane solution-set function γ_i together with its mean and variance,
where x = {x_1, ..., x_m} is the m-dimensional feature vector; y = (y_1, ..., y_m)^T, with y an m×m diagonal matrix whose diagonal elements are y_1, ..., y_m; φ(x) is the feature mapping of the input x induced by the kernel k, and φ(x_i) denotes the mapping of the i-th column; X^T is the transpose of X; and w is the weight vector.
While optimizing for the classification plane with the largest inter-class distribution, the mean of the classification-plane solution set is maximized and its variance is minimized,
where α_1 and α_2 are the weights of the marginal variance and the marginal mean, respectively, in the overall LDM model; formula (12) is optimized by the dual coordinate descent method; and ξ = [ξ_1, ..., ξ_m]^T represents the classification error of the classifier model on the input samples. This yields the LDM classifier model solution with sample generalization performance and an optimal margin distribution, and the classification result is finally output.
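The margin-distribution idea behind the LDM (maximize the margin mean, minimize the margin variance) can be sketched with a linear kernel and crude numerical gradient descent. The objective weights a1 and a2, the regularizer, and the synthetic data are all illustrative assumptions; the patent's solver is dual coordinate descent on formula (12).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class data in place of the m-dimensional vein feature vectors.
n, d = 300, 40
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)
X += 0.6 * y[:, None] * (np.ones(d) / np.sqrt(d))

def ldm_objective(w, a1=0.5, a2=0.5, lam=0.01):
    """0.5*lam*||w||^2 + a1*Var(margins) - a2*Mean(margins) + hinge loss."""
    margins = y * (X @ w)                  # the gamma_i of the text
    hinge = np.maximum(0.0, 1.0 - margins)
    return (0.5 * lam * w @ w + a1 * margins.var()
            - a2 * margins.mean() + hinge.mean())

# Crude numerical gradient descent, just to exercise the objective.
w = np.zeros(d)
lr, eps = 0.05, 1e-5
for _ in range(200):
    grad = np.empty(d)
    for j in range(d):
        e = np.zeros(d)
        e[j] = eps
        grad[j] = (ldm_objective(w + e) - ldm_objective(w - e)) / (2 * eps)
    w -= lr * grad

acc = float(((y * (X @ w)) > 0).mean())    # training accuracy of the plane
```

The variance term keeps the margins concentrated while the mean term pushes them positive, which is what distinguishes the LDM objective from a plain SVM hinge loss.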
Example 1
With reference to FIG. 1, the multimodal identity authentication method based on the similar vein image knowledge transfer network according to the present invention comprises the following steps:
Step 1: Under near-infrared conditions, build the vein image library and the face database.
First, build the vein image library, the near-infrared face image library, and the face image library under near-infrared conditions, and collect a number of dorsal hand vein sample images, with the size of the collected sample images set to 460*680. FIG. 2 shows samples from the vein dataset collected in the laboratory (the two on the left are female vein samples, and the two on the right are male vein samples).
Then apply the ROI extraction method to obtain valid vein sample images of size 460*680, yielding the vein database. The results are shown in FIG. 3, where (a) is the original vein image, (b) is the ROI localization image, and (c) is the ROI extraction result image; the extracted valid vein region can be clearly seen.
Collect a number of face images to establish the near-infrared face image library, and use the VGG16 convolutional neural network to perform face detection and localization on all images in it, obtaining valid-region face data images of size A*B (A = M, B = N), yielding the face image library.
Step 2: Adopt the similar-image-based "coarse-to-fine" transfer learning strategy and obtain high-dimensional feature vectors with identity attributes through a linear regression classifier.
步骤2-1、选择深度卷积网络对人脸图像库进行预训练,得到的VGG人脸深度卷积神经网络结构作为初始模型,将和人脸数据库共享人脸属性的近红外人脸图像库在初始模型上进行微调,得到知识迁移网络的过渡人脸识别模型(FRM),通过线性回归分类器对FRM输出层进行微调得到具有近红外属性的高维特征向量;Step 2-1. Select the deep convolutional network to pre-train the face image library, and the obtained VGG face deep convolutional neural network structure is used as the initial model, and the near-infrared face image library that shares the face attributes with the face database will be used. Fine-tune on the initial model to obtain the transitional face recognition model (FRM) of the knowledge transfer network, and fine-tune the FRM output layer through a linear regression classifier to obtain a high-dimensional feature vector with near-infrared attributes;
To construct the initial face recognition model, the coarse-to-fine transfer learning strategy based on similar images is adopted. A deep convolutional neural network is pre-trained on the face image library, with the VGG model from the Caffe library chosen as the pre-trained model; the resulting VGG face deep convolutional network structure serves as the initial model. The near-infrared face image library, which shares face attributes with the face image library, is fine-tuned on the initial model to obtain the transitional face recognition model (FRM) of the knowledge transfer network, where fine-tuning the FRM output layer with a linear regression classifier yields high-dimensional feature vectors with near-infrared attributes;
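The fine-tuning stages form a chain in which each model initializes the next. As a minimal sketch (the model names FRM, VIM, VGM are the patent's; the dictionary layout and dataset shorthand are ours), the chain can be written out and its hand-offs verified:

```python
# The three-stage coarse-to-fine transfer chain, written out as data so the
# hand-off between models can be checked mechanically.
PIPELINE = [
    {"init": "VGG-Face (Caffe, pretrained)", "data": "near-infrared face library", "yields": "FRM"},
    {"init": "FRM", "data": "laboratory vein library", "yields": "VIM"},
    {"init": "VIM", "data": "gender-labelled vein library", "yields": "VGM"},
]

def check_chain(pipeline):
    """Confirm each fine-tuning stage starts from the model produced by the previous stage."""
    for prev, cur in zip(pipeline, pipeline[1:]):
        assert cur["init"] == prev["yields"], (prev, cur)
    return [stage["yields"] for stage in pipeline]

print(check_chain(PIPELINE))   # ['FRM', 'VIM', 'VGM']
```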
Step 2-2. The laboratory vein image library, which shares near-infrared imaging attributes with the near-infrared face image library, is fine-tuned on the FRM to obtain the VIM; during fine-tuning, the VIM output layer is fine-tuned with a linear regression classifier to yield high-dimensional feature vectors with gender attributes;
The laboratory vein image library sharing near-infrared imaging attributes with the near-infrared face image library is selected and fine-tuned on the FRM, giving the vein identity authentication model (VIM), where fine-tuning the VIM output layer with a linear regression classifier yields high-dimensional feature vectors with gender attributes.
Step 2-3. The vein database with gender attributes is fine-tuned on the VIM, and the network output layer and loss function are improved to obtain the VGM; fine-tuning the VGM output layer with a linear regression classifier yields high-dimensional feature vectors with identity attributes.
Based on the VIM, the vein image library with gender attributes is fine-tuned on the VIM to obtain the vein gender determination model (VGM), where fine-tuning the VGM output layer with a linear regression classifier yields high-dimensional feature vectors with identity attributes.
The method by which the linear regression classifier solves the high-dimensional feature vectors during fine-tuning of the FRM, VIM and VGM is as follows:
Assume a deep convolutional neural network model (DCNN) has K+1 layers, with the k-th layer containing dk units, where k∈[1,K]. For an input value x taken from the grayscale matrix of a training sample image, the output of the k-th DCNN layer is given by formula (1):

H(k) = σ(W(k)∘H(k−1) + b(k)), with H(0) = x    (1)
where W(k) denotes the convolution weights of the current layer, b(k) the bias parameters of the current layer, H(k) the feature representation of the k-th hidden layer, and ∘ the data-transfer operation rule applied between connected layers;
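Formula (1) can be illustrated with a minimal forward step. Since the patent leaves the layer operator and activation unspecified, a plain matrix product stands in for the convolution and a logistic sigmoid for the activation (both are assumptions):

```python
import numpy as np

def layer_forward(h_prev, W, b):
    """One hidden-layer step of formula (1): H(k) = sigma(W(k) o H(k-1) + b(k))."""
    z = W @ h_prev + b                   # matrix product stands in for the convolution
    return 1.0 / (1.0 + np.exp(-z))      # sigma: logistic activation (assumed)

rng = np.random.default_rng(0)
h0 = rng.standard_normal(8)              # H(0) = x, a flattened input patch
W1 = rng.standard_normal((4, 8))         # d_1 = 4 units in the first hidden layer
b1 = np.zeros(4)
h1 = layer_forward(h0, W1, b1)
print(h1.shape)                          # (4,)
```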
The principal convolution weights and bias parameters of the FRM, VIM and VGM are expressed in this notation.
During fine-tuning based on the linear regression classifier, for given input training samples (xi, yi), where i indexes the current sample image, the adopted classification error L(W(k), b(k), C) is expressed as formula (2):

L(W(k), b(k), C) = ‖Y − C·H(K)(X)‖F²    (2)
where ‖·‖F denotes the Frobenius norm of a matrix, X={x1,...xm} denotes the grayscale matrices of the given input training sample images, Y={y1,...ym} denotes the corresponding ground-truth values, and C is the model parameter of the linear regression classifier;
The network model improved by the regression classifier is trained by invoking a stochastic sub-gradient descent strategy to optimize objective function (2); the sub-gradients with respect to the three model parameters W(k), b(k) and C are computed as follows:
First, the intermediate variables used in the gradient computation are defined as in formula (3):
Based on the intermediate variables defined in (3), the gradient computation for the three model parameters and the model solution method are as follows:
Once the above gradients are solved for the given input and model definition, L-BFGS is used to substitute the gradient solutions into formula (4) and solve the unconstrained model, yielding the high-dimensional feature vectors corresponding to the FRM, VIM and VGM respectively.
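The patent's formulas (3) and (4) are not reproduced in the text. Assuming the Frobenius-norm regression error of formula (2), the sub-gradient with respect to the classifier parameter C can be derived as −2(Y − CH)Hᵀ and checked numerically; the shapes and random data below are purely illustrative:

```python
import numpy as np

def loss(C, H, Y):
    """Regression error of formula (2) with the deep features H held fixed
    (assumed form: ||Y - C H||_F^2)."""
    R = Y - C @ H
    return np.sum(R * R)

def grad_C(C, H, Y):
    """Analytic (sub)gradient of the assumed loss w.r.t. the classifier
    parameter C: -2 (Y - C H) H^T."""
    return -2.0 * (Y - C @ H) @ H.T

rng = np.random.default_rng(1)
H = rng.standard_normal((5, 7))   # K-th layer features for 7 samples
Y = rng.standard_normal((3, 7))   # targets for 3 classes
C = rng.standard_normal((3, 5))

# finite-difference check on one entry of C
eps = 1e-6
E = np.zeros_like(C); E[0, 0] = eps
num = (loss(C + E, H, Y) - loss(C - E, H, Y)) / (2 * eps)
print(abs(num - grad_C(C, H, Y)[0, 0]) < 1e-4)   # True
```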
The first fully connected layer (FC7) of the fine-tuned knowledge transfer network is used as a robust feature extractor for vein image features. The model training parameters for the above network fine-tuning are: momentum 0.9, weight decay 0.0005, and 30,000 gradient-descent iterations. The learning rate is set to 0.01 for FRM fine-tuning and 0.001 for VIM training, and it decays over the iterations according to a polynomial criterion with gamma 0.1; the training batch size is set to 120. Finally, the simple linear classifier parameters set at the VGM output layer are kept consistent with the knowledge transfer network.
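The polynomial learning-rate decay can be sketched in the style of Caffe's "poly" policy, lr = base_lr * (1 − iter/max_iter)^power; mapping the reported gamma = 0.1 onto the poly power term is an assumption on our part:

```python
def poly_lr(base_lr, it, max_iter=30000, power=0.1):
    """Caffe-style 'poly' schedule. max_iter matches the patent's 30,000
    iterations; using gamma = 0.1 as the power exponent is an assumption."""
    return base_lr * (1.0 - it / max_iter) ** power

# FRM fine-tuning starts at 0.01, VIM training at 0.001 (patent settings)
print(poly_lr(0.01, 0))        # 0.01 at the first iteration
print(poly_lr(0.01, 30000))    # 0.0 at the final iteration
```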
Figure 4 compares the results obtained with this fine-tuning strategy against the recognition results of other network fine-tuning strategies.
This method addresses the problem that a distribution mismatch with the source training sample library weakens the model's ability to represent target samples, while preserving the efficiency of the transfer learning process. The effectiveness of the introduced linear regression model is therefore analyzed through gender determination experiments in different modes, with the results shown in Table 1:
Table 1. Comparison of recognition results under different training strategies
The results in Table 1 show that the distributions remain consistent across the different training modes, demonstrating that the designed improvement of the training strategy based on the linear regression model not only improves the recognition results but also greatly shortens the training iteration time of the fine-tuning process, meeting the efficiency requirements that transfer learning places on the model.
Step 3. A supervised bag-of-words model is then used to re-encode the high-dimensional feature vectors output by the VGM output layer, discarding redundant features to obtain an m-dimensional feature vector carrying the effective information; the size of m is chosen according to the final recognition performance and the system's time cost, as follows:
Assume {(x1, y1),...,(xn, yn)} denotes the feature-vector distribution of n dorsal hand vein training samples; the corresponding normalized vector is computed as:
where the resulting quantity has the physical meaning of the classification hyperplane between the two sample types (male and female vein images); the support vectors si and the product terms in the hyperplane formula can be obtained by minimizing the objective function shown in formula (8):
where αi corresponds to the non-zero product terms. The expression above can be viewed as a constrained quadratic programming problem, so each of its parameters can be solved by the Lagrangian method. Each element of the resulting classification hyperplane represents the weight of the corresponding component of the m-dimensional feature vector; the larger the value, the more that component contributes to the final gender classification. In the actual experiments, m is set to 512 in view of the final recognition performance and system time cost. Redundant information is then removed, yielding an m-dimensional feature vector containing the effective information. This effectively remedies the defect that the high-dimensional features output directly by the VGM layer contain a large amount of redundant information, which lowers the system's recognition rate.
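The selection rule above, keeping the m feature dimensions whose hyperplane coefficients have the largest magnitude, can be sketched as follows (the 4096-dimensional input size is illustrative, not taken from the patent):

```python
import numpy as np

def select_features(w, m=512):
    """Rank feature dimensions by the magnitude of the corresponding
    hyperplane coefficient and keep the m largest, mirroring the rule that
    larger weights matter more for the final gender classification."""
    order = np.argsort(np.abs(w))[::-1]   # dimensions sorted by importance
    return np.sort(order[:m])             # indices of the retained dimensions

rng = np.random.default_rng(2)
w = rng.standard_normal(4096)             # hyperplane over high-dimensional VGM features
keep = select_features(w, m=512)
print(keep.shape)                         # (512,)
```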
Step 4. The m-dimensional feature vector is input into the improved SVM classifier, the LDM, to classify identity and gender information, completing the non-end-to-end vein recognition task and producing the classification result, as follows:
Finally, the m-dimensional feature vector is fed into the improved SVM classifier (LDM) to classify identity and gender information; the training parameters of the LDM classifier are kept identical to those used when fine-tuning the network.
The m-dimensional effective feature information is input into the LDM model, and the classification-plane solution-set function γi together with its mean and variance are computed:
where x={x1,...xm} is the m-dimensional feature vector; y=diag(y1,...ym) is an m×m diagonal matrix whose diagonal elements are y1,...ym; φ(x) is the feature map of the input x induced by the kernel k, with φ(xi) the i-th column of the mapping matrix; XT is the transpose matrix of X; and w is the weight vector.
While the optimization is solved to obtain the classification plane with the largest between-class distribution, the mean of the classification-plane solution set is maximized and its variance minimized:
where α1 and α2 are the weights of the margin variance and the margin mean, respectively, in the overall LDM model; formula (12) is optimized by the dual coordinate descent method. ξ=[ξ1,...,ξm]T represents the classification error of the classifier model on the input samples. This yields an LDM classifier model solution with good sample generalization and an optimal margin distribution, and the classification result is finally output. In the classifier comparison experiments, besides the LDM (parameters set as discussed above), the three comparison classifiers are classification models commonly used in biometric recognition, namely SVM, LDA and D-LDA. The ratio of training to test samples is randomized, each reported result is the average of 100 classification runs, and the evaluation criterion is the correct classification rate; the comparison results for the selected classifiers are shown in Table 2:
Table 2. Distribution of vein recognition comparison results
Examining the classification accuracies in Table 2 and comparing the recognition results of the different classifiers, both LDM modes exceed the other three classifiers, demonstrating the effectiveness of the chosen LDM model and guaranteeing the feasibility of applying it to a practical identity authentication system, whose sample size is far larger than the experimental setup.
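The margin mean and variance that the LDM trades off can be sketched directly from their definitions, γi = yi(w·xi); the dual coordinate-descent solver itself is not shown, and the toy data below are illustrative:

```python
import numpy as np

def margin_stats(w, X, y):
    """Margin distribution used by LDM: per-sample margin gamma_i = y_i * (w . x_i),
    together with the mean and variance that the LDM objective maximizes and
    minimizes respectively."""
    margins = y * (X @ w)
    return margins, margins.mean(), margins.var()

X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])   # three toy samples
y = np.array([1.0, 1.0, -1.0])                        # binary labels
w = np.array([2.0, 1.0])                              # candidate weight vector
g, mean, var = margin_stats(w, X, y)
print(g)      # margins [2. 1. 2.], mean 5/3
```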
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010962646.3A CN112241680A (en) | 2020-09-14 | 2020-09-14 | Multi-mode identity authentication method based on vein similar image knowledge migration network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112241680A true CN112241680A (en) | 2021-01-19 |
Family
ID=74170882
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010962646.3A Withdrawn CN112241680A (en) | 2020-09-14 | 2020-09-14 | Multi-mode identity authentication method based on vein similar image knowledge migration network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112241680A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780241A (en) * | 2016-11-22 | 2017-05-31 | 安徽客乐宝智能科技有限公司 | A kind of anti-minor based on minor's biological identification technology loses scheme |
CN107977609A (en) * | 2017-11-20 | 2018-05-01 | 华南理工大学 | A kind of finger vein identity verification method based on CNN |
WO2019034589A1 (en) * | 2017-08-15 | 2019-02-21 | Norwegian University Of Science And Technology | A biometric cryptosystem |
CN111062345A (en) * | 2019-12-20 | 2020-04-24 | 上海欧计斯软件有限公司 | Training method and device of vein recognition model and vein image recognition device |
CN111462379A (en) * | 2020-03-17 | 2020-07-28 | 广东网深锐识科技有限公司 | Access control management method, system and medium containing palm vein and face recognition |
Non-Patent Citations (1)
Title |
---|
JUN WANG ET AL: "Bimodal Vein Data Mining via Cross-Selected-Domain Knowledge Transfer", 《IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113076927A (en) * | 2021-04-25 | 2021-07-06 | 华南理工大学 | Finger vein identification method and system based on multi-source domain migration |
CN113076927B (en) * | 2021-04-25 | 2023-02-14 | 华南理工大学 | Method and system for finger vein recognition based on multi-source domain migration |
CN117994815A (en) * | 2024-01-29 | 2024-05-07 | 重庆工商大学 | Deep learning-based graph neural network vein recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20210119 |