CN112241680A - Multi-mode identity authentication method based on vein similar image knowledge migration network - Google Patents

Multi-mode identity authentication method based on vein similar image knowledge migration network Download PDF

Info

Publication number
CN112241680A
Authority
CN
China
Prior art keywords
vein
model
dimensional feature
image
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010962646.3A
Other languages
Chinese (zh)
Inventor
王军 (Wang Jun)
鹿姝 (Lu Shu)
杨凯 (Yang Kai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN202010962646.3A priority Critical patent/CN112241680A/en
Publication of CN112241680A publication Critical patent/CN112241680A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Computer Hardware Design (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-modal identity authentication method based on a vein similar-image knowledge transfer network, built on a knowledge transfer learning network model and a supervised bag-of-words model for similar images. The invention relates to the field of computer vision. A knowledge transfer network based on vein image similarity is used to train and fine-tune, in sequence, a face recognition model into a vein identity authentication model and then into a vein gender determination model; the fine-tuned network extracts features from the vein image, a supervised bag-of-words model re-encodes the high-dimensional gender feature vector output by the vein gender determination model, and identity authentication and gender determination are then performed. By exploiting the similar attributes shared between neighbouring models, the similar-image knowledge transfer network and the supervised bag-of-words model make the feature representation parameter spaces before and after model fine-tuning overlap, improving identity recognition accuracy while ensuring the discrimination and generalization performance of the models.

Description

Multi-mode identity authentication method based on vein similar image knowledge migration network
Technical Field
The invention relates to the field of hand vein recognition, in particular to a multi-mode identity authentication method based on a vein similarity image knowledge migration network.
Background
Venous blood vessels are among the most important structures by which the human body transports nutrients and metabolites. Compared with other biometric modalities (such as fingerprints, irises, gestures and faces), vein patterns are difficult to counterfeit and easy for users to accept, which makes them one of the most popular means of personal identification. In addition, the convenience of image acquisition and the robustness of the resulting feature representation enable more comprehensive and accurate vein-based personal identification systems.
Although an identity authentication system designed around vein recognition technology has these potential advantages, traditional feature extraction methods suffer from the small training libraries of source vein images and the resulting weak feature learning ability. A knowledge transfer network model based on similar images is therefore proposed, for the first time, on the basis of vein images; it ensures the effectiveness of the feature characterization parameters and effectively prevents over-fitting.
Moreover, the feature information produced by traditional feature coding models lacks semantic validity and cannot effectively solve the various pattern recognition problems that depend on feature distribution (feature characterization, image segmentation, image denoising, saliency detection, and so on). The method therefore also proposes, for the first time, a supervised bag-of-words model built on vein images with gender attributes, which re-encodes the high-dimensional feature vector output by the gender determination model, removing redundant information and improving the representational power of the feature vector. This feature encoding scheme nevertheless remains limited in its adaptability to problematic samples such as rotated images.
Disclosure of Invention
The invention aims to provide a multi-modal identity authentication method based on a vein similar-image knowledge migration network, which effectively ensures the discrimination and generalization ability of the model, improves the classification performance, and yields a more robust and efficient method for determining the gender and identity from a hand vein image.
The technical solution for realizing the purpose of the invention is as follows: a multi-mode identity authentication method based on a vein similarity image knowledge migration network comprises the following steps:
step 1, constructing a vein image library and a face image library under a near infrared condition:
collecting a plurality of hand dorsal vein sample images, establishing a laboratory vein image library, and processing the images in the laboratory vein image library with an ROI (region of interest) extraction method to obtain effective vein sample images of size M × N, thereby obtaining a vein database, wherein M ∈ [100, 224] and N ∈ [100, 224];
collecting a plurality of face images, establishing a near-infrared face image library, and performing face detection and localization on all images in the near-infrared face image library with a VGG16 convolutional neural network to obtain effective-region face data images of size A × B, thereby obtaining a face image library, wherein A = M and B = N;
step 2, obtaining a high-dimensional feature vector with identity attributes through a linear regression classifier, using a similar-image-based "coarse precision to fine precision" transfer learning strategy:
step 2-1, selecting a deep convolutional network to pre-train on a face image library, taking the obtained VGG-Face deep convolutional neural network as the initial model, and fine-tuning the initial model on the near-infrared face image library, which shares face attributes with the face database, to obtain the face recognition model (FRM) of the knowledge transfer network, wherein a linear regression classifier fine-tunes the FRM output layer to obtain a high-dimensional feature vector with near-infrared attributes;
step 2-2, selecting the laboratory vein image library, which shares near-infrared imaging attributes with the near-infrared face image library, and fine-tuning the FRM on it to obtain the VIM, wherein a linear regression classifier fine-tunes the VIM output layer to obtain a high-dimensional feature vector with gender attributes;
step 2-3, fine-tuning the VIM on the vein database with gender attributes to obtain the VGM, wherein a linear regression classifier fine-tunes the VGM output layer to obtain a high-dimensional feature vector with identity attributes;
step 3, performing secondary coding on the high-dimensional feature vector output by the VGM output layer with a supervised bag-of-words model, discarding redundant features and obtaining an m-dimensional feature vector with effective information, wherein the size of m is determined according to the final recognition performance and the time consumption of the system;
and step 4, inputting the m-dimensional feature vector into an improved SVM classifier (LDM) to classify the identity information and the gender information, completing the non-end-to-end vein recognition task and obtaining the classification result.
Compared with the prior art, the invention has the remarkable advantages that:
(1) A similar-image-based "coarse precision to fine precision" transfer learning strategy is provided, which exploits the inherent correlation between neighbouring models to generate powerful task-specific deep neural network models.
(2) To ensure stable knowledge transfer and improve the effectiveness of the model for a specific task, the classification function of the end-to-end network model is improved during the knowledge-transfer fine-tuning, so that feature characterization parameters tailored to the specific classification task are obtained.
(3) A supervised bag-of-words feature selection method is proposed and implemented to generate better feature representations, in which the dimensions important for the predefined tasks are highlighted and redundant features are suppressed for better performance.
Drawings
Fig. 1 is a flowchart of a multi-modal identity authentication method based on a vein-like image knowledge migration network according to the present invention.
Fig. 2 is a sample plot of a vein dataset collected in a laboratory, where plots (a) and (b) are female vein samples and plots (c) and (d) are male vein samples.
Fig. 3 is a diagram showing the effect of ROI extraction image, in which (a) is an original vein image, (b) is an ROI localization image, and (c) is an ROI extraction result image.
Fig. 4 is a comparison graph of identification results of different network fine-tuning policies.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
With reference to fig. 1, the multi-modal identity authentication method based on the vein-like image knowledge migration network according to the present invention includes the following steps:
step 1, constructing a vein image database and a human face database under the near infrared condition:
First, a vein image library, a near-infrared face image library and a face image library are constructed under near-infrared conditions: a plurality of hand dorsal vein sample images are collected, the size of the collected sample images being set to M × N.
A plurality of face images are collected to establish a near-infrared face image library, and face detection and localization are performed on all images in the near-infrared face image library with a VGG16 convolutional neural network, obtaining effective-region face data images of size A × B, where A = M and B = N, and thereby the face image library.
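Purely as an illustration of this data-preparation step, the sketch below crops a rough vein ROI by Otsu thresholding and crops the largest detected face region; the thresholding-based ROI localisation, the Haar-cascade face detector (standing in for the VGG16-based detector described here), the file paths and the 224 × 224 target size are all assumptions of the sketch, not part of the patented method.

```python
# Hypothetical preprocessing sketch (not the patented ROI algorithm):
# crop the largest bright region of a NIR dorsal-hand image as a rough vein ROI,
# and crop a face region from a near-infrared face image.
import cv2
import numpy as np

def extract_vein_roi(path, size=(224, 224)):
    """Crop the largest bright region of a NIR dorsal-hand image as a rough ROI."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    roi = img[y:y + h, x:x + w]
    return cv2.resize(roi, size)          # effective M x N vein sample

def extract_face_region(path, size=(224, 224)):
    """Crop the detected face region from a NIR face image (Haar cascade stand-in
    for the VGG16-based detector described in the method)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = det.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # largest detection
    return cv2.resize(img[y:y + h, x:x + w], size)        # A x B face sample
```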
Step 2, obtaining a high-dimensional feature vector with identity attributes through a linear regression classifier, using a similar-image-based "coarse precision to fine precision" transfer learning strategy:
Step 2-1, an initial face recognition model is constructed. Following the similar-image-based "coarse precision to fine precision" transfer learning strategy, a deep convolutional neural network is pre-trained on a face image library; the pre-trained model chosen is the VGG model of the Caffe library, and the resulting VGG-Face deep convolutional network is taken as the initial model. The near-infrared face image library, which shares face attributes with the face image library, is then used to fine-tune this initial model, yielding the transitional face recognition model (FRM) of the knowledge transfer network; a linear regression classifier fine-tunes the FRM output layer to obtain a high-dimensional feature vector with near-infrared attributes.
Step 2-2, the laboratory vein image library, which shares near-infrared imaging attributes with the near-infrared face image library, is selected and used to fine-tune the FRM, yielding the vein identity authentication model (VIM); during fine-tuning, a linear regression classifier fine-tunes the VIM output layer to obtain a high-dimensional feature vector with gender attributes.
Step 2-3, on the basis of the VIM, the vein database with gender attributes is used to fine-tune the VIM, improving its network output layer and loss function to obtain the vein gender determination model (VGM); a linear regression classifier fine-tunes the VGM output layer to obtain a high-dimensional feature vector with identity attributes.
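A minimal PyTorch sketch of this "coarse precision to fine precision" chain is given below, assuming torchvision's ImageNet-pretrained VGG16 as a stand-in for the Caffe VGG-Face initial model and a standard cross-entropy head in place of the linear-regression output layer; the data loaders and class counts (nir_face_loader, lab_vein_loader, gender_vein_loader, n_face_ids, n_vein_ids) are placeholders for the respective image libraries.

```python
# Minimal PyTorch sketch of the FRM -> VIM -> VGM fine-tuning chain.
# torchvision's ImageNet-pretrained VGG16 stands in for the Caffe VGG-Face model,
# and a cross-entropy head stands in for the linear-regression output layer.
import copy
import torch
import torch.nn as nn
from torchvision import models

def fine_tune(model, loader, num_classes, lr, epochs=5):
    """Replace the task head and fine-tune the whole network on one image library."""
    model = copy.deepcopy(model)
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

vgg = models.vgg16(weights="IMAGENET1K_V1")                                   # initial model
frm = fine_tune(vgg, nir_face_loader,    num_classes=n_face_ids, lr=0.01)     # face recognition model
vim = fine_tune(frm, lab_vein_loader,    num_classes=n_vein_ids, lr=0.001)    # vein identity model
vgm = fine_tune(vim, gender_vein_loader, num_classes=2,          lr=0.001)    # vein gender model
```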
The linear regression classifier solves the high-dimensional feature vectors during the fine-tuning of the FRM, VIM and VGM as follows:
Suppose a deep convolutional neural network model (DCNN) has K+1 layers, where the k-th layer has $d_k$ units and $k \in [1, K]$. The output of a value x from the gray-level matrix of an input training sample image at the k-th layer of the DCNN is given by formula (1):

$$H^{(k)} = W^{(k)} \circledast H^{(k-1)} + b^{(k)}, \qquad H^{(0)} = x \tag{1}$$

where $W^{(k)}$ denotes the convolution weights of the current layer, $b^{(k)}$ the bias parameters of the current layer, $H^{(k)}$ the feature characterization result of the k-th hidden layer, and $\circledast$ the data transmission operation applied when connecting adjacent layers.
The main convolution weights and bias parameters of the FRM, VIM and VGM are denoted analogously, one set $\{W^{(k)}, b^{(k)}\}$ per model.
In the fine-tuning process based on the linear regression classifier, for given input training samples $(x_i, y_i)$, where $i$ indexes the current sample image, the classification error $L(W^{(k)}, b^{(k)}, C)$ is given by formula (2):

$$L(W^{(k)}, b^{(k)}, C) = \big\lVert Y - C\, H^{(K)}(X) \big\rVert_F^2 \tag{2}$$

where $\lVert \cdot \rVert_F$ denotes the Frobenius norm of a matrix, $X = \{x_1, \dots, x_m\}$ is the gray-level matrix of the given input training sample images, $Y = \{y_1, \dots, y_m\}$ is the corresponding ground-truth matrix, and $C$ is the model parameter of the linear regression classifier.
The training of the network model improved by the linear regression classifier optimizes objective function (2) with a stochastic sub-gradient descent strategy; the sub-gradients with respect to the three model parameters $W^{(k)}$, $b^{(k)}$ and $C$ are computed as follows. An intermediate variable $D_k$ is first defined for the specific gradient calculation, as in formula (3), and based on it the gradients and the model solution for the three model parameters are obtained layer by layer as in formulas (4)–(6). After the gradients have been solved for the given input and the model definition, the gradient solution of formula (4) is replaced by the L-BFGS (limited-memory BFGS) method to carry out an unconstrained model solution, obtaining the high-dimensional feature vectors corresponding to the FRM, VIM and VGM respectively.
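As an illustration of the linear-regression head and the L-BFGS solve, the following sketch fits the classifier matrix C by minimizing an assumed Frobenius-norm loss ||Y − C·H||_F² over deep features H with scipy's L-BFGS-B routine; the loss form and all variable names are assumptions of the sketch.

```python
# Sketch of fitting the linear-regression output layer with L-BFGS, assuming
# the Frobenius-norm loss ||Y - C H||_F^2 over deep features H (d x n) and
# one-hot targets Y (c x n); this illustrates the unconstrained solve only.
import numpy as np
from scipy.optimize import minimize

def fit_linear_head(H, Y):
    d, n = H.shape          # feature dimension x number of samples
    c = Y.shape[0]          # number of classes

    def loss_and_grad(c_flat):
        C = c_flat.reshape(c, d)
        R = C @ H - Y                     # residual
        loss = 0.5 * np.sum(R * R)
        grad = (R @ H.T).ravel()          # dL/dC
        return loss, grad

    res = minimize(loss_and_grad, np.zeros(c * d), jac=True, method="L-BFGS-B")
    return res.x.reshape(c, d)

# usage: C = fit_linear_head(features, one_hot_labels); scores = C @ features
```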
Step 3, a supervised bag-of-words model is adopted to re-encode the high-dimensional feature vector output by the VGM output layer, redundant features are discarded, and an m-dimensional feature vector carrying the effective information is obtained, the size of m being determined according to the final recognition performance and the time consumption of the system; specifically:
Let $\{(x_1, y_1), \dots, (x_n, y_n)\}$ denote the feature vector distribution of the n hand dorsal vein training samples. The corresponding normalized hyperplane vector is computed as in formula (7):

$$\hat{w} = \sum_{i} \alpha_i\, y_i\, s_i \tag{7}$$

where $\hat{w}$ is the classification hyperplane between the different sample classes (male and female vein images), expressed through the support vectors $s_i$ and the product terms $\alpha_i y_i$; these are obtained by minimizing the objective function of formula (8):

$$\min_{w, b}\ \frac{1}{2}\lVert w \rVert^2 \qquad \text{s.t.}\ \ y_i\big(w^{\top} x_i + b\big) \ge 1,\ \ i = 1, \dots, n \tag{8}$$

with $\alpha_i$ the non-zero product (Lagrange multiplier) term of each support vector. The above is a quadratic programming problem with constraint terms, so every parameter can be solved by the Lagrangian method. Each element of the solved classification hyperplane $\hat{w}$ corresponds to one dimension of the m-dimensional feature vector: the larger its value, the greater the significance of that feature dimension for the final gender classification. Taking the final recognition performance and the system time consumption into account, m is set to 512 in the experiments. The redundant information is then removed, giving an m-dimensional feature vector that retains the effective information. This effectively remedies the drawback that the high-dimensional feature distribution output directly by the VGM layer contains a large amount of redundant information and lowers the recognition rate of the system.
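The feature-selection idea can be illustrated with the following sketch, which trains a linear SVM on the gender labels and keeps the m = 512 dimensions with the largest hyperplane weights; sklearn's LinearSVC here stands in for the QP/Lagrangian solution described above, and the function and variable names are illustrative only.

```python
# Sketch of the supervised feature-selection idea: train a linear SVM on the
# gender labels, rank feature dimensions by the magnitude of the hyperplane
# weights, and keep the top m = 512 dimensions.
import numpy as np
from sklearn.svm import LinearSVC

def select_top_dims(features, gender_labels, m=512):
    svm = LinearSVC(C=1.0, max_iter=10000).fit(features, gender_labels)
    w = np.abs(svm.coef_).ravel()          # importance of each feature dimension
    keep = np.argsort(w)[::-1][:m]         # indices of the m largest weights
    return keep

# usage: idx = select_top_dims(vgm_features, genders); compact = vgm_features[:, idx]
```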
Step 4, the m-dimensional feature vector is input into the improved SVM classifier (LDM) to classify identity information and gender information, completing the non-end-to-end vein recognition task and obtaining the classification result. The training parameters of the LDM classifier are kept fully consistent with the parameters used during network fine-tuning. Specifically:
The m-dimensional effective feature information is input into the LDM model, and the classification plane solution set function $\gamma_i$ is computed together with its mean $\bar{\gamma}$ and variance $\hat{\gamma}$, as in formulas (9)–(11):

$$\gamma_i = y_i\, w^{\top} \phi(x_i) \tag{9}$$

$$\bar{\gamma} = \frac{1}{m} \sum_{i=1}^{m} \gamma_i \tag{10}$$

$$\hat{\gamma} = \frac{1}{m} \sum_{i=1}^{m} \big(\gamma_i - \bar{\gamma}\big)^2 \tag{11}$$

where $x = \{x_1, \dots, x_m\}$ is the m-dimensional feature vector, $y = (y_1, \dots, y_m)^{\top}$, $Y$ is the m × m diagonal matrix with $y_1, \dots, y_m$ as its diagonal elements, $\phi(x)$ is the feature map of the input x introduced by the kernel k, $X_i$ denotes the mapping matrix of the i-th column, $X^{\top}$ is the transpose of X, and $w$ is the weight vector.

While the classification plane with the maximum inter-class distribution is obtained by the optimization, the mean of the classification plane solution set is maximized and its variance is minimized, as in formula (12):

$$\min_{w,\,\xi}\ \frac{1}{2} w^{\top} w + \alpha_1 \hat{\gamma} - \alpha_2 \bar{\gamma} + \sum_{i=1}^{m} \xi_i \qquad \text{s.t.}\ \ \gamma_i \ge 1 - \xi_i,\ \ \xi_i \ge 0 \tag{12}$$

where $\alpha_1$ and $\alpha_2$ are the weights of the margin variance and margin mean, respectively, in the overall LDM model; formula (12) is optimized by a dual coordinate descent method; and $\xi = [\xi_1, \dots, \xi_m]^{\top}$ represents the classification error of the classifier model for the input samples. This yields an LDM classifier model with sample generalization ability and an optimal margin distribution, and the classification result is finally output.
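For illustration only, the sketch below trains a deliberately simplified LDM-style linear classifier: a hinge loss plus penalties that reward a large margin mean and a small margin variance (weights a1 and a2), optimised by plain sub-gradient descent instead of the dual coordinate descent used here, and without the kernel map φ; it is a sketch of the idea, not the LDM solver of the method.

```python
# Simplified, illustrative LDM-style linear classifier: hinge loss plus
# margin-mean and margin-variance terms, trained by sub-gradient descent.
# Binary labels y in {-1, +1}; X is the (n, d) matrix of m-dimensional features.
import numpy as np

def train_ldm_like(X, y, a1=0.5, a2=0.5, c_hinge=1.0, lr=0.01, epochs=200):
    n, d = X.shape
    w = np.zeros(d)
    yx = y[:, None] * X                       # y_i * x_i, shape (n, d)
    for _ in range(epochs):
        margins = (X @ w) * y                 # gamma_i = y_i w^T x_i
        mean = margins.mean()
        grad_mean = yx.mean(axis=0)           # d(mean)/dw
        grad_var = 2.0 / n * ((margins - mean) @ (yx - grad_mean))   # d(variance)/dw
        viol = margins < 1                    # samples violating the margin
        grad_hinge = -yx[viol].sum(axis=0) / n
        w -= lr * (w + a1 * grad_var - a2 * grad_mean + c_hinge * grad_hinge)
    return w

def predict(w, X):
    return np.sign(X @ w)
```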
Example 1
With reference to fig. 1, the multi-modal identity authentication method based on the vein-like image knowledge migration network according to the present invention includes the following steps:
step 1, constructing a vein image database and a human face database under the near infrared condition:
firstly, a vein image library, a near-infrared face image library and a face image library are constructed under the near-infrared condition, a plurality of hand back vein sample images are collected, the size of the collected sample images is set to be 460 × 680, and a vein data set sample collected in a laboratory is shown in fig. 2 (the left two are female vein samples, and the right two are male vein samples).
Then, an ROI extraction method is selected to obtain an effective vein sample image with the size of 460 x 680, and a vein database is obtained. The result is shown in fig. 3, in which (a) is the original vein image, (b) is the ROI localization image, and (c) is the ROI extraction result image, and the extracted effective vein region can be clearly seen.
Collecting a plurality of face images, establishing a near-infrared face image library, and performing face detection and localization on all images in the near-infrared face image library with a VGG16 convolutional neural network to obtain effective-region face data images of size A × B, where A = M and B = N, thereby obtaining the face image library;
step 2, obtaining a high-dimensional feature vector with identity attributes through a linear regression classifier, using a similar-image-based "coarse precision to fine precision" transfer learning strategy:
Step 2-1, an initial face recognition model is constructed. Following the similar-image-based "coarse precision to fine precision" transfer learning strategy, a deep convolutional neural network is pre-trained on a face image library; the pre-trained model chosen is the VGG model of the Caffe library, and the resulting VGG-Face deep convolutional network is taken as the initial model. The near-infrared face image library, which shares face attributes with the face image library, is then used to fine-tune this initial model, yielding the transitional face recognition model (FRM) of the knowledge transfer network; a linear regression classifier fine-tunes the FRM output layer to obtain a high-dimensional feature vector with near-infrared attributes.
Step 2-2, the laboratory vein image library, which shares near-infrared imaging attributes with the near-infrared face image library, is selected and used to fine-tune the FRM, yielding the vein identity authentication model (VIM); during fine-tuning, a linear regression classifier fine-tunes the VIM output layer to obtain a high-dimensional feature vector with gender attributes.
Step 2-3, on the basis of the VIM, the vein database with gender attributes is used to fine-tune the VIM, improving its network output layer and loss function to obtain the vein gender determination model (VGM); a linear regression classifier fine-tunes the VGM output layer to obtain a high-dimensional feature vector with identity attributes.
The linear regression classifier solves the high-dimensional feature vectors during the fine-tuning of the FRM, VIM and VGM as follows:
Suppose a deep convolutional neural network model (DCNN) has K+1 layers, where the k-th layer has $d_k$ units and $k \in [1, K]$. The output of a value x from the gray-level matrix of an input training sample image at the k-th layer of the DCNN is given by formula (1):

$$H^{(k)} = W^{(k)} \circledast H^{(k-1)} + b^{(k)}, \qquad H^{(0)} = x \tag{1}$$

where $W^{(k)}$ denotes the convolution weights of the current layer, $b^{(k)}$ the bias parameters of the current layer, $H^{(k)}$ the feature characterization result of the k-th hidden layer, and $\circledast$ the data transmission operation applied when connecting adjacent layers.
The main convolution weights and bias parameters of the FRM, VIM and VGM are denoted analogously, one set $\{W^{(k)}, b^{(k)}\}$ per model.
In the fine-tuning process based on the linear regression classifier, for given input training samples $(x_i, y_i)$, where $i$ indexes the current sample image, the classification error $L(W^{(k)}, b^{(k)}, C)$ is given by formula (2):

$$L(W^{(k)}, b^{(k)}, C) = \big\lVert Y - C\, H^{(K)}(X) \big\rVert_F^2 \tag{2}$$

where $\lVert \cdot \rVert_F$ denotes the Frobenius norm of a matrix, $X = \{x_1, \dots, x_m\}$ is the gray-level matrix of the given input training sample images, $Y = \{y_1, \dots, y_m\}$ is the corresponding ground-truth matrix, and $C$ is the model parameter of the linear regression classifier.
The training of the network model improved by the linear regression classifier optimizes objective function (2) with a stochastic sub-gradient descent strategy; the sub-gradients with respect to the three model parameters $W^{(k)}$, $b^{(k)}$ and $C$ are computed as follows. An intermediate variable $D_k$ is first defined for the specific gradient calculation, as in formula (3), and based on it the gradients and the model solution for the three model parameters are obtained layer by layer as in formulas (4)–(6). After the gradients have been solved for the given input and the model definition, the gradient solution of formula (4) is replaced by the L-BFGS (limited-memory BFGS) method to carry out an unconstrained model solution, obtaining the high-dimensional feature vectors corresponding to the FRM, VIM and VGM respectively.
The first fully-connected layer (the FC7 layer) of the fine-tuned knowledge transfer network is used to extract robust vein image features. The model training parameters during network fine-tuning are set as follows: momentum 0.9, weight decay 0.0005, and 30000 gradient-descent iterations. For the learning rate, 0.01 is used for the FRM fine-tuning process and 0.001 for VIM training, and the learning rate is decayed over the iterations according to a polynomial criterion with gamma 0.1; the training batch size is set to 120. Finally, the simple linear classifier parameters set at the VGM output layer are kept consistent with the knowledge transfer network.
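For reference, the quoted hyper-parameters can be expressed as a PyTorch optimizer and scheduler as in the sketch below; the polynomial-decay exponent and the way "gamma = 0.1" maps onto the schedule are assumptions of this sketch.

```python
# The quoted fine-tuning hyper-parameters expressed as a PyTorch optimizer and
# scheduler, for illustration only.
import torch

def make_optimizer(model, stage, max_iter=30000):
    base_lr = 0.01 if stage == "FRM" else 0.001               # FRM vs VIM/VGM fine-tuning
    opt = torch.optim.SGD(model.parameters(), lr=base_lr,
                          momentum=0.9, weight_decay=0.0005)
    poly = torch.optim.lr_scheduler.LambdaLR(
        opt, lambda it: max(0.0, 1.0 - it / max_iter) ** 0.1)  # polynomial decay
    return opt, poly

# batch size 120 as stated in the text:
# loader = torch.utils.data.DataLoader(dataset, batch_size=120, shuffle=True)
```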
The recognition results obtained with this fine-tuning strategy are compared with those of other network fine-tuning strategies in fig. 4.
The method addresses the weak representational capacity of a model for target samples whose distribution differs from that of the source training library, while guaranteeing the efficiency of the transfer learning process. The effectiveness of the introduced linear regression model is therefore analysed through gender determination experiments in different modes; the specific results are shown in table 1:
TABLE 1 comparison of recognition results for different training strategies
Analysis of the results in table 1 shows that the distributions under the different training modes are consistent, and that the designed model training strategy based on the linear regression model improves the recognition results while greatly reducing the training iteration time during model fine-tuning, satisfying the efficiency requirements that transfer learning places on the model.
Step 3, a supervised bag-of-words model is adopted to re-encode the high-dimensional feature vector output by the VGM output layer, redundant features are discarded, and an m-dimensional feature vector carrying the effective information is obtained, the size of m being determined according to the final recognition performance and the time consumption of the system; specifically:
Let $\{(x_1, y_1), \dots, (x_n, y_n)\}$ denote the feature vector distribution of the n hand dorsal vein training samples. The corresponding normalized hyperplane vector is computed as in formula (7):

$$\hat{w} = \sum_{i} \alpha_i\, y_i\, s_i \tag{7}$$

where $\hat{w}$ is the classification hyperplane between the different sample classes (male and female vein images), expressed through the support vectors $s_i$ and the product terms $\alpha_i y_i$; these are obtained by minimizing the objective function of formula (8):

$$\min_{w, b}\ \frac{1}{2}\lVert w \rVert^2 \qquad \text{s.t.}\ \ y_i\big(w^{\top} x_i + b\big) \ge 1,\ \ i = 1, \dots, n \tag{8}$$

with $\alpha_i$ the non-zero product (Lagrange multiplier) term of each support vector. The above is a quadratic programming problem with constraint terms, so every parameter can be solved by the Lagrangian method. Each element of the solved classification hyperplane $\hat{w}$ corresponds to one dimension of the m-dimensional feature vector: the larger its value, the greater the significance of that feature dimension for the final gender classification. Taking the final recognition performance and the system time consumption into account, m is set to 512 in the experiments. The redundant information is then removed, giving an m-dimensional feature vector that retains the effective information. This effectively remedies the drawback that the high-dimensional feature distribution output directly by the VGM layer contains a large amount of redundant information and lowers the recognition rate of the system.
Step 4, the m-dimensional feature vector is input into the improved SVM classifier (LDM) to classify identity information and gender information, completing the non-end-to-end vein recognition task and obtaining the classification result. The training parameters of the LDM classifier are kept fully consistent with the parameters used during network fine-tuning. Specifically:
The m-dimensional effective feature information is input into the LDM model, and the classification plane solution set function $\gamma_i$ is computed together with its mean $\bar{\gamma}$ and variance $\hat{\gamma}$, as in formulas (9)–(11):

$$\gamma_i = y_i\, w^{\top} \phi(x_i) \tag{9}$$

$$\bar{\gamma} = \frac{1}{m} \sum_{i=1}^{m} \gamma_i \tag{10}$$

$$\hat{\gamma} = \frac{1}{m} \sum_{i=1}^{m} \big(\gamma_i - \bar{\gamma}\big)^2 \tag{11}$$

where $x = \{x_1, \dots, x_m\}$ is the m-dimensional feature vector, $y = (y_1, \dots, y_m)^{\top}$, $Y$ is the m × m diagonal matrix with $y_1, \dots, y_m$ as its diagonal elements, $\phi(x)$ is the feature map of the input x introduced by the kernel k, $X_i$ denotes the mapping matrix of the i-th column, $X^{\top}$ is the transpose of X, and $w$ is the weight vector.

While the classification plane with the maximum inter-class distribution is obtained by the optimization, the mean of the classification plane solution set is maximized and its variance is minimized, as in formula (12):

$$\min_{w,\,\xi}\ \frac{1}{2} w^{\top} w + \alpha_1 \hat{\gamma} - \alpha_2 \bar{\gamma} + \sum_{i=1}^{m} \xi_i \qquad \text{s.t.}\ \ \gamma_i \ge 1 - \xi_i,\ \ \xi_i \ge 0 \tag{12}$$

where $\alpha_1$ and $\alpha_2$ are the weights of the margin variance and margin mean, respectively, in the overall LDM model; formula (12) is optimized by a dual coordinate descent method; and $\xi = [\xi_1, \dots, \xi_m]^{\top}$ represents the classification error of the classifier model for the input samples. This yields an LDM classifier model with sample generalization ability and an optimal margin distribution, and the classification result is finally output. In the classifier comparison experiment, in addition to the LDM (with the parameter settings discussed above), three further classifiers commonly used in biometric recognition models were selected for comparison, namely SVM, LDA and D-LDA. The classification experiments use a random split of training and test samples, the reported result is the average over 100 classification runs, and the evaluation criterion is the correct classification rate; the comparison results for the selected classifiers are shown in table 2:
TABLE 2 vein identification comparison result distribution
Examining the classification accuracies in table 2 and comparing the recognition results of the different classifiers, the LDM outperforms the other three classifiers in both modes, demonstrating the effectiveness of the selected LDM model and supporting the feasibility of applying the model in a practical identity authentication system (whose sample size would be far larger than the experimental setting).

Claims (4)

1. A multi-mode identity authentication method based on a vein similarity image knowledge migration network is characterized by comprising the following steps:
step 1, constructing a vein image library and a face image library under a near infrared condition:
collecting a plurality of hand dorsal vein sample images, establishing a laboratory vein image library, and processing the images in the laboratory vein image library with an ROI (region of interest) extraction method to obtain effective vein sample images of size M × N, thereby obtaining a vein database, wherein M ∈ [100, 224] and N ∈ [100, 224];
collecting a plurality of face images, establishing a near-infrared face image library, and performing face detection and localization on all images in the near-infrared face image library with a VGG16 convolutional neural network to obtain effective-region face data images of size A × B, thereby obtaining a face image library, wherein A = M and B = N;
step 2, obtaining a high-dimensional feature vector with identity attributes through a linear regression classifier, using a similar-image-based "coarse precision to fine precision" transfer learning strategy:
step 2-1, selecting a deep convolutional network to pre-train on a face image library, taking the obtained VGG-Face deep convolutional neural network as the initial model, and fine-tuning the initial model on the near-infrared face image library, which shares face attributes with the face database, to obtain the face recognition model (FRM) of the knowledge transfer network, wherein a linear regression classifier fine-tunes the FRM output layer to obtain a high-dimensional feature vector with near-infrared attributes;
step 2-2, selecting the laboratory vein image library, which shares near-infrared imaging attributes with the near-infrared face image library, and fine-tuning the FRM on it to obtain the VIM, wherein a linear regression classifier fine-tunes the VIM output layer to obtain a high-dimensional feature vector with gender attributes;
step 2-3, fine-tuning the VIM on the vein database with gender attributes to obtain the VGM, wherein a linear regression classifier fine-tunes the VGM output layer to obtain a high-dimensional feature vector with identity attributes;
step 3, performing secondary coding on the high-dimensional feature vector output by the VGM output layer with a supervised bag-of-words model, discarding redundant features and obtaining an m-dimensional feature vector with effective information, wherein the size of m is determined according to the final recognition performance and the time consumption of the system;
and step 4, inputting the m-dimensional feature vector into an improved SVM classifier (LDM) to classify the identity information and the gender information, completing the non-end-to-end vein recognition task and obtaining the classification result.
2. The multi-modal identity authentication method based on the vein similarity image knowledge transfer network of claim 1, wherein in step 2, the linear regression classifier solves the high-dimensional feature vectors in the fine tuning process of FRM, VIM and VGM, specifically as follows:
suppose a deep convolutional neural network model (DCNN) has K+1 layers, where the k-th layer has $d_k$ units and $k \in [1, K]$; the output of a value x from the gray-level matrix of an input training sample image at the k-th layer of the DCNN is then given by formula (1):
$$H^{(k)} = W^{(k)} \circledast H^{(k-1)} + b^{(k)}, \qquad H^{(0)} = x \tag{1}$$
wherein $W^{(k)}$ represents the convolution weights of the current layer, $b^{(k)}$ the bias parameters of the current layer, $H^{(k)}$ the feature characterization result of the k-th hidden layer, and $\circledast$ the data transmission operation criterion applied when connecting between layers;
the main convolution weights and bias parameters of the FRM, VIM and VGM are denoted analogously, one set $\{W^{(k)}, b^{(k)}\}$ per model;
in the fine-tuning process based on the linear regression classifier, for given input training samples $(x_i, y_i)$, where $i$ indexes the current sample image, the classification error $L(W^{(k)}, b^{(k)}, C)$ is given by formula (2):
$$L(W^{(k)}, b^{(k)}, C) = \big\lVert Y - C\, H^{(K)}(X) \big\rVert_F^2 \tag{2}$$
wherein $\lVert \cdot \rVert_F$ denotes the Frobenius norm of a matrix, $X = \{x_1, \dots, x_m\}$ is the gray-level matrix of the given input training sample images, $Y = \{y_1, \dots, y_m\}$ is the corresponding ground-truth matrix, and $C$ is the model parameter of the linear regression classifier;
the training of the network model improved by the linear regression classifier optimizes objective function (2) with a stochastic sub-gradient descent strategy, and the sub-gradients with respect to the three model parameters $W^{(k)}$, $b^{(k)}$ and $C$ are computed as follows: an intermediate variable $D_k$ is first defined for the specific gradient calculation, as in formula (3), and based on the intermediate variable defined in (3) the gradient calculation and the model solution for the three model parameters are obtained as in formulas (4)–(6); after the gradients have been solved for the given input and the model definition, the gradient solution of formula (4) is replaced by the L-BFGS (limited-memory BFGS) method to carry out an unconstrained model solution, obtaining the high-dimensional feature vectors corresponding to the FRM, VIM and VGM respectively.
3. The multi-modal identity authentication method based on the vein-like image knowledge transfer network of claim 1, wherein in step 3 a supervised bag-of-words model is adopted to carry out secondary coding on the high-dimensional feature vector output by the VGM output layer, redundant features are discarded, and an m-dimensional feature vector with effective information is obtained, specifically as follows:
let $\{(x_1, y_1), \dots, (x_n, y_n)\}$ denote the feature vector distribution of the n hand dorsal vein training sample images; the corresponding normalized hyperplane vector is computed as in formula (7):
$$\hat{w} = \sum_{i} \alpha_i\, y_i\, s_i \tag{7}$$
wherein $\hat{w}$ is the classification hyperplane between the male and female vein images, expressed through the support vectors $s_i$ and the product terms $\alpha_i y_i$, which are obtained by minimizing the objective function $L$ of formula (8):
$$\min_{w, b}\ \frac{1}{2}\lVert w \rVert^2 \qquad \text{s.t.}\ \ y_i\big(w^{\top} x_i + b\big) \ge 1,\ \ i = 1, \dots, n \tag{8}$$
with $\alpha_i$ the corresponding non-zero product term; each element of the solved classification hyperplane $\hat{w}$ corresponds to one dimension of the m-dimensional feature vector, and the redundant information is then removed to obtain the m-dimensional feature vector with effective information.
4. The multi-modal identity authentication method based on the vein-like image knowledge transfer network of claim 1, wherein in step 4 the m-dimensional feature vector is input into the improved SVM classifier (LDM) to classify identity information and gender information, completing the non-end-to-end vein recognition task and obtaining the classification result, specifically as follows:
the m-dimensional effective feature information is input into the LDM model, and the classification plane solution set function $\gamma_i$ is computed together with its mean $\bar{\gamma}$ and variance $\hat{\gamma}$, as in formulas (9)–(11):
$$\gamma_i = y_i\, w^{\top} \phi(x_i) \tag{9}$$
$$\bar{\gamma} = \frac{1}{m} \sum_{i=1}^{m} \gamma_i \tag{10}$$
$$\hat{\gamma} = \frac{1}{m} \sum_{i=1}^{m} \big(\gamma_i - \bar{\gamma}\big)^2 \tag{11}$$
wherein $x = \{x_1, \dots, x_m\}$ is the m-dimensional feature vector, $y = (y_1, \dots, y_m)^{\top}$, $Y$ is the m × m diagonal matrix with $y_1, \dots, y_m$ as its diagonal elements, $\phi(x)$ is the feature map of the input x introduced by the kernel k, $X_i$ denotes the mapping matrix of the i-th column, $X^{\top}$ is the transpose of X, and $w$ is a weight vector;
while the classification plane with the maximum inter-class distribution is obtained by the optimization, the mean of the classification plane solution set is maximized and its variance is minimized, as in formula (12):
$$\min_{w,\,\xi}\ \frac{1}{2} w^{\top} w + \alpha_1 \hat{\gamma} - \alpha_2 \bar{\gamma} + \sum_{i=1}^{m} \xi_i \qquad \text{s.t.}\ \ \gamma_i \ge 1 - \xi_i,\ \ \xi_i \ge 0 \tag{12}$$
wherein $\alpha_1$ and $\alpha_2$ are the weights of the margin variance and margin mean, respectively, in the overall LDM model; formula (12) is optimized by a dual coordinate descent method; $\xi = [\xi_1, \dots, \xi_m]^{\top}$ represents the classification error of the classifier model for the input samples; an LDM classifier model solution with sample generalization ability and an optimal margin distribution is thereby obtained, and the classification result is finally output.
CN202010962646.3A 2020-09-14 2020-09-14 Multi-mode identity authentication method based on vein similar image knowledge migration network Withdrawn CN112241680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010962646.3A CN112241680A (en) 2020-09-14 2020-09-14 Multi-mode identity authentication method based on vein similar image knowledge migration network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010962646.3A CN112241680A (en) 2020-09-14 2020-09-14 Multi-mode identity authentication method based on vein similar image knowledge migration network

Publications (1)

Publication Number Publication Date
CN112241680A true CN112241680A (en) 2021-01-19

Family

ID=74170882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010962646.3A Withdrawn CN112241680A (en) 2020-09-14 2020-09-14 Multi-mode identity authentication method based on vein similar image knowledge migration network

Country Status (1)

Country Link
CN (1) CN112241680A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076927A (en) * 2021-04-25 2021-07-06 华南理工大学 Finger vein identification method and system based on multi-source domain migration

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780241A (en) * 2016-11-22 2017-05-31 安徽客乐宝智能科技有限公司 A kind of anti-minor based on minor's biological identification technology loses scheme
CN107977609A (en) * 2017-11-20 2018-05-01 华南理工大学 A kind of finger vein identity verification method based on CNN
WO2019034589A1 (en) * 2017-08-15 2019-02-21 Norwegian University Of Science And Technology A biometric cryptosystem
CN111062345A (en) * 2019-12-20 2020-04-24 上海欧计斯软件有限公司 Training method and device of vein recognition model and vein image recognition device
CN111462379A (en) * 2020-03-17 2020-07-28 广东网深锐识科技有限公司 Access control management method, system and medium containing palm vein and face recognition

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780241A (en) * 2016-11-22 2017-05-31 安徽客乐宝智能科技有限公司 A kind of anti-minor based on minor's biological identification technology loses scheme
WO2019034589A1 (en) * 2017-08-15 2019-02-21 Norwegian University Of Science And Technology A biometric cryptosystem
CN107977609A (en) * 2017-11-20 2018-05-01 华南理工大学 A kind of finger vein identity verification method based on CNN
CN111062345A (en) * 2019-12-20 2020-04-24 上海欧计斯软件有限公司 Training method and device of vein recognition model and vein image recognition device
CN111462379A (en) * 2020-03-17 2020-07-28 广东网深锐识科技有限公司 Access control management method, system and medium containing palm vein and face recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JUN WANG ET AL: "Bimodal Vein Data Mining via Cross-Selected-Domain Knowledge Transfer", IEEE Transactions on Information Forensics and Security *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076927A (en) * 2021-04-25 2021-07-06 华南理工大学 Finger vein identification method and system based on multi-source domain migration
CN113076927B (en) * 2021-04-25 2023-02-14 华南理工大学 Finger vein identification method and system based on multi-source domain migration

Similar Documents

Publication Publication Date Title
Punyani et al. Neural networks for facial age estimation: a survey on recent advances
WO2020114118A1 (en) Facial attribute identification method and device, storage medium and processor
Chacko et al. Handwritten character recognition using wavelet energy and extreme learning machine
Yoo et al. Optimized face recognition algorithm using radial basis function neural networks and its practical applications
CN107403084B (en) Gait data-based identity recognition method
CN111340103B (en) Feature layer fusion method and device based on graph embedding typical correlation analysis
Setiowati et al. A review of optimization method in face recognition: Comparison deep learning and non-deep learning methods
Zhai et al. BeautyNet: Joint multiscale CNN and transfer learning method for unconstrained facial beauty prediction
Madhavan et al. Incremental methods in face recognition: a survey
CN113033398B (en) Gesture recognition method and device, computer equipment and storage medium
Al-Shannaq et al. Comprehensive analysis of the literature for age estimation from facial images
Sawalha et al. Face recognition using harmony search-based selected features
Zuobin et al. Feature regrouping for cca-based feature fusion and extraction through normalized cut
Neggaz et al. An Intelligent handcrafted feature selection using Archimedes optimization algorithm for facial analysis
Huang et al. Locality-regularized linear regression discriminant analysis for feature extraction
Dagher et al. Improving the SVM gender classification accuracy using clustering and incremental learning
Jadhav et al. HDL-PI: hybrid DeepLearning technique for person identification using multimodal finger print, iris and face biometric features
Ergin et al. Face Recognition by Using 2D Orthogonal Subspace Projections.
CN110287973B (en) Image feature extraction method based on low-rank robust linear discriminant analysis
Pathak et al. Multimodal eye biometric system based on contour based E-CNN and multi algorithmic feature extraction using SVBF matching
CN112241680A (en) Multi-mode identity authentication method based on vein similar image knowledge migration network
Wang Examination on face recognition method based on type 2 blurry
Dar Neural networks (CNNs) and VGG on real time face recognition system
Dar et al. Performance Evaluation of Convolutional Neural Networks (CNNs) And VGG on Real Time Face Recognition System
Zuobin et al. Effective feature fusion for pattern classification based on intra-class and extra-class discriminative correlation analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210119

WW01 Invention patent application withdrawn after publication