CN115937910A - Palm print image identification method based on small sample measurement network - Google Patents

Palm print image identification method based on small sample measurement network

Info

Publication number
CN115937910A
CN115937910A (application CN202211528943.2A)
Authority
CN
China
Prior art keywords
network
small sample
palm print
distance
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211528943.2A
Other languages
Chinese (zh)
Inventor
周开军
胡亦良
周鲜成
覃业梅
史长发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University of Technology
Original Assignee
Hunan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University of Technology filed Critical Hunan University of Technology
Priority to CN202211528943.2A priority Critical patent/CN115937910A/en
Publication of CN115937910A publication Critical patent/CN115937910A/en
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a palm print image recognition method based on a small sample measurement network, belonging to the intersection of biometric feature recognition and computer vision. The method comprises: dividing a palm print data set into a training set and a test set in the small sample learning manner, and dividing each into a plurality of tasks; constructing a lightweight small sample measurement network and extracting feature vectors with it; constructing a metric formula for the feature vectors and measuring the distance between feature vectors with it; and designing a loss function based on the negative log-likelihood of the metric distance, in which the distances calculated by the distance formula are accumulated as the corresponding losses. The invention introduces small sample learning and an improved signal-to-noise-ratio distance to train the model, which solves the problem of scarce training samples and obtains a higher recognition rate.

Description

Palm print image identification method based on small sample measurement network
Technical Field
The invention belongs to the intersection of biometric feature recognition and computer vision, and particularly relates to a palm print image recognition method based on a small sample measurement network.
Background
Biometric identification technology identifies an individual by extracting inherent features of the human body, such as fingerprints, irises, facial images, veins, palm prints or ears. The palm print contains rich features, such as principal lines, ridges, wrinkles and other fine details, and with the development of computing performance, big data and deep learning, palm print recognition methods based on deep learning have gradually come to the fore.
A traditional palm print recognition algorithm requires a hand-designed feature extractor, which usually takes a great deal of work, while a traditional deep-learning-based method needs a large number of training samples and corresponding labels to ensure that the trained model does not suffer from problems such as overfitting; most palm print recognition algorithms based on deep learning therefore need a large number of samples to train the model. From a practical perspective, and in view of user privacy, collecting a large number of samples is often infeasible, and labelling the samples requires a large amount of work. Although some algorithms reduce the dependence on labelled samples by combining with traditional palm print recognition methods, they often rely on the prior experience of researchers and carry subjective factors, which tends to complicate the model and increase the computational difficulty.
Disclosure of Invention
The invention aims to provide a palm print image recognition method based on a small sample measurement network, where the small sample measurement network is a lightweight neural network model trained with an improved signal-to-noise-ratio distance so that, under small sample conditions and based on methods such as deep learning and metric learning, the computational complexity is reduced and stable palm print recognition features are extracted for high-precision matching and recognition.
A palm print image recognition method based on a small sample measurement network is provided, which comprises the following steps:
s1: collecting and extracting a palm print image ROI, arranging the ROI into a corresponding data set, dividing a training set and a testing set in the palm print data set in a small sample learning mode, and dividing the training set and the testing set into a plurality of tasks;
s2: constructing a lightweight small sample measurement network, and extracting feature vectors by using the network;
s3: obtaining a feature vector corresponding to each palm print ROI image through S2;
s4: constructing a distance formula of the feature vectors, and measuring the distance of the feature vectors by using the formula;
s5: designing a loss function based on the negative log-likelihood of the metric distance, updating model parameters in a training network through the loss function, and calculating the accuracy of a test set and the recognition rate of a query set;
s6: storing and deploying the network model to the actual operating environment according to the network model parameters trained in S5.
As a further scheme of the invention: the S1 specifically comprises the following steps:
s1.1: collecting a palm image, extracting the palm print image ROI by using a specific preprocessing and image segmentation method, and sorting the ROIs into a corresponding data set, wherein the data set is randomly divided into a training set and a test set in a 1:1 ratio, and the label spaces of the training set and the test set do not intersect;
s1.2: dividing the divided training set and test set into h tasks in the n-shot k-way mode, wherein each task is divided into a support set and a query set;
the support set contains k classes with n samples per class, wherein the value of n ranges from 1 to the maximum number of samples of any class and the value of k ranges from 1 to the maximum number of classes;
the query set is q-shot k-way and is divided in the same way as the support set, only the corresponding shot values may differ; a divided task is selected by randomly choosing k classes from the training set or the test set and randomly choosing n samples or q samples from each of the k classes.
As a further scheme of the invention: the S2 specifically comprises the following steps:
s2.1: the structure of the small sample measurement network consists of 4 convolution modules and 1 dimension-reduction layer;
s2.2: each of the 4 convolution modules has its corresponding number of input and output channels, and the structure of each module comprises a two-dimensional convolution layer whose convolution kernels are of size 3 × 3, with 64 convolution kernels and a convolution padding of 1 pixel; then a batch normalization layer; followed by a layer of ReLU activation functions; the last layer is a max pooling layer with window size 2 × 2 and stride 2;
s2.3: the dimension-reduction layer converts the high-dimensional feature map output by the 4 convolution modules into a 1-dimensional feature vector;
s2.4: randomly initializing the network structure parameters according to the constructed network structure;
s2.5: the entire network model can be expressed as:

v = f(x; p)

wherein the input image is x, the network model is f, the parameters in the network model are p, and the feature vector output by the network model is v.
As a further scheme of the invention: the S3 specifically comprises the following steps:
s3.1: converting each palm print ROI image into an RGB image of size 128 × 128 × 3 and inputting it into the small sample measurement network of S2 to obtain the corresponding feature vector;
s3.2: according to the small sample learning method in S1, for each task the support set yields n × k feature vectors and the query set yields q × k feature vectors, i.e., n or q feature vectors for each of the k classes.
As a further scheme of the invention: the S4 specifically comprises the following steps:
according to the feature vectors obtained in S3 and the small sample learning task mode defined in S1, the metric distance is calculated between the q × k feature vectors in each task's query set and the n × k feature vectors in the support set, with the following distance formula:

D(v_1, v_2) = α · d(v_1, v_2) + β · d(v_2, v_1)

wherein v_1 and v_2 are feature vectors obtained in S3, α and β represent distance coefficients, and the function d is defined as follows:

d(v_1, v_2) = var(v_1 − v_2) / var(v_1), with var(v) = (1/N) · Σ_{i=1}^{N} (v_i − μ_v)²

wherein μ_v represents the mean of feature vector v and N denotes the dimension of feature vector v.
As a further scheme of the invention: the S5 specifically comprises the following steps:
s5.1: according to the metric distance calculated in S4, the small sample learning method in S1 and the divided training set, the model parameters in the network are updated by adopting the following loss function:

L = − Σ_{y ∈ Q_tr} Σ_{x ∈ S_tr, ℓ(x) = ℓ(y)} log ( exp(−D(f(x; p), f(y; p))) / Σ_{x′ ∈ S_tr} exp(−D(f(x′; p), f(y; p))) )

wherein S_tr represents the support set in a training task, Q_tr represents the query set in a training task, ℓ(x) = ℓ(y) indicates that anchor picture x of the support set and query picture y of the query set are of the same class, and ℓ(x) ≠ ℓ(y) indicates that they are not; the logarithmic term is the LogSoftmax of the negative distances between anchor pictures and query pictures, computed for the anchor pictures in the support set against the query pictures in the query set and finally accumulated into a loss value, which is used to update and optimize the model parameters;
s5.2: according to the test set divided in S1, the accuracy on the test set is calculated: using the distance formula in S4, the distance between every picture of the query set and each picture in the support set is computed, whether a match is successful is judged by the shortest-distance principle, and the corresponding recognition rate is calculated from the matching results.
The optimizer adopted for optimizing the model parameters in S5.1 is the Adam optimizer, and the learning rate is 0.01.
As a further scheme of the invention: the S6 specifically comprises the following steps:
s6.1: according to the network model parameters trained in the S5, storing and deploying the network model parameters to an actual operation environment;
firstly, establishing a palm print database, extracting corresponding characteristic vectors by using a trained model, and storing the characteristic vectors in the database or an internal memory for convenient calculation and matching;
s6.2: obtaining the anchor picture matched with x by the matching formula:

â = argmin_{a ∈ S} D(f(a; p), f(x; p))

wherein x is the picture to be matched, S represents the support set in a test task, a is an anchor picture, i.e., a picture in the support set, and p is the network model parameters after training is completed.
Compared with the prior art, the invention has the beneficial effects that:
according to the method, small sample learning and improved signal-to-noise ratio distance are introduced to train the model, under the condition of small samples, on the basis of methods such as deep learning and metric learning, the calculation complexity is reduced, stable palm print recognition features are extracted to perform high-precision matching and recognition, the problem of few training samples is solved, and high recognition rate is obtained.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a flow chart of a palm print image recognition method based on a small sample metric network;
FIG. 2 is a flow chart of palm acquisition provided by the present invention;
FIG. 3 is a flow chart of data set partitioning provided by the present invention;
FIG. 4 is a flow chart of model training and testing provided by the present invention;
fig. 5 is a flowchart of palm print matching according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and embodiments; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention, i.e., the described embodiments are only a subset of, and not all, embodiments of the invention; the components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention; all other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-5, the present invention specifically includes the following steps:
Step one: collecting and dividing a palm print data set. The data set of the invention is divided by means of small sample learning, which means that a small number of samples are used to learn or understand a brand-new concept or category, and learning is performed task by task. The data set in each task can be divided into a support set and a query set: the support set is the data used for matching in the task, the query set is the data to be queried in the task, and the data in the query set need to be matched against the data in the support set. The support set used in a training task is defined as S_tr, the query set used in training as Q_tr, the support set used in a test task as S and the query set as Q. Each task can be of n-shot k-way form, where k-way means the support set contains k classes of samples and n-shot means each class in the support set contains n samples, so the support set contains n × k samples; the number of samples per class in the query set is defined as q, i.e., q-shot, so the query set contains q × k samples.
It should be noted that in small sample learning, the label spaces of the data used for training (S_tr and Q_tr) and of the data used for testing (S and Q) are mutually independent and do not intersect. A palm image is collected, the palm print image ROI is extracted using a specific preprocessing and image segmentation method, and the ROIs are sorted into a corresponding data set, which is randomly divided into a training set and a test set in a 1:1 ratio. The training set and the test set are then divided in the small sample learning manner, i.e., each is divided into h tasks in the n-shot k-way mode, and each task is divided into a support set and a query set. The support set contains k classes with n samples per class, where n ranges from 1 to the maximum number of samples of any class and k ranges from 1 to the maximum number of classes; the query set is q-shot k-way and is divided in the same way as the support set, only the corresponding shot values may differ. A divided task is formed by randomly selecting k classes from the training set or the test set and then randomly selecting n samples (support) or q samples (query) from each of the k classes; a minimal sampling sketch is given below.
Step two: establishing a small sample measurement network. Most small sample learning models use deep stacks of convolution layers to extract image features; here the number of convolution layers is reduced while ensuring the stability of the extracted features. The small sample measurement network consists of 4 convolution modules and 1 dimension-reduction module. Each convolution module has its corresponding number of input and output channels and comprises a two-dimensional convolution layer with 64 convolution kernels of size 3 × 3 and a padding of 1, followed by a batch normalization layer, then a ReLU activation function, with a max pooling layer as the last layer of the module. The dimension-reduction module converts the high-dimensional feature map produced by the 4 convolution modules into a 1-dimensional feature vector; a sketch of this network is given below.
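A minimal PyTorch sketch of this embedding network, following the stated structure (four convolution modules of 64 kernels of size 3 × 3 with padding 1, batch normalization, ReLU and 2 × 2 max pooling, followed by a flattening dimension-reduction module), might look as follows; the class name and the assumption that every module outputs 64 channels are illustrative.

```python
import torch.nn as nn

def conv_module(in_channels, out_channels=64):
    # one convolution module: 64 kernels of size 3x3, padding 1 -> BN -> ReLU -> 2x2 max pool
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

class MetricNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_module(3),    # RGB input of size 128 x 128 x 3
            conv_module(64),
            conv_module(64),
            conv_module(64),
        )
        self.flatten = nn.Flatten()  # dimension-reduction module: feature map -> 1-D vector

    def forward(self, x):
        return self.flatten(self.encoder(x))  # v = f(x; p)

# a 128 x 128 input is halved by four 2x2 poolings to an 8 x 8 x 64 map,
# so each palm print ROI image yields a 4096-dimensional feature vector
```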
Step three: extracting the features of the palm print image. Each palm print ROI image is converted into an RGB image of size 128 × 128 × 3 and input into the network of step two to obtain the corresponding feature vector. According to the small sample learning method of step one, for each task the support set yields n × k feature vectors and the query set yields q × k feature vectors, i.e., n or q feature vectors for each of the k classes. Let the input image be x, the network model be f, the parameters of the network model be p, and the feature vector output by the network model be v; then:

v = f(x; p)
Step four: calculating the feature metric. In statistical theory, the signal-to-noise ratio (SNR) is defined as the ratio of the signal variance to the noise variance; accordingly, the SNR distance between feature vectors can be defined as:

d(v_1, v_2) = var(v_1 − v_2) / var(v_1)

where var(v) = (1/N) · Σ_{i=1}^{N} (v_i − μ_v)² denotes the variance of feature vector v, μ_v denotes the mean of feature vector v, and N denotes the dimension of v.

As with the traditional Euclidean distance, the feature vectors corresponding to palm print images of the same person should lie at a smaller SNR distance, and those of different persons at a larger one. Unlike the traditional Euclidean distance, however, the SNR distance does not satisfy the commutative law, i.e.

d(v_1, v_2) ≠ d(v_2, v_1)

so the SNR distance is sensitive to which feature vector serves as the anchor. In small sample learning, the pictures in the query set need to be matched and compared one by one with the pictures in the support set, so the feature vectors corresponding to the support set pictures can be used as anchors, with distances computed to the feature vector of each picture in the query set. To improve the applicability of the SNR distance, and following its characteristics, the feature vectors of both the query picture and the support picture are used as anchors: for a pair of feature vectors, the SNR distance is calculated once with each of the two vectors as the anchor, and the two results are multiplied by the corresponding distance coefficients and added to give the distance between the two vectors. Thus, according to the feature vectors obtained in step three and the small sample learning task mode defined in step one, the metric distance between the q × k feature vectors in each task's query set and the n × k feature vectors in the support set is calculated by the general formula:

D(v_1, v_2) = α · d(v_1, v_2) + β · d(v_2, v_1)

where α and β represent the distance coefficients; a sketch of this metric is given below.
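Under these definitions, a minimal sketch of the improved SNR distance is given below; since the patent does not fix the values of the distance coefficients, equal weights α = β = 0.5 are assumed here for illustration.

```python
import torch

def snr_distance(anchor, other):
    # d(v1, v2) = var(v1 - v2) / var(v1): SNR distance with v1 as the anchor
    return torch.var(anchor - other, unbiased=False) / torch.var(anchor, unbiased=False)

def improved_snr_distance(v1, v2, alpha=0.5, beta=0.5):
    # D(v1, v2) = alpha * d(v1, v2) + beta * d(v2, v1): each vector serves once
    # as the anchor, removing the asymmetry of the plain SNR distance
    return alpha * snr_distance(v1, v2) + beta * snr_distance(v2, v1)
```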
Step five: network training and testing. According to the metric distance calculated in step four, the small sample learning method in step one and the divided training set, the invention updates the model parameters in the network by adopting the following loss function:

L = − Σ_{y ∈ Q_tr} Σ_{x ∈ S_tr, ℓ(x) = ℓ(y)} log ( exp(−D(f(x; p), f(y; p))) / Σ_{x′ ∈ S_tr} exp(−D(f(x′; p), f(y; p))) )

where L is the loss, ℓ(x) = ℓ(y) indicates that anchor picture x of the support set and query picture y of the query set are of the same class, and ℓ(x) ≠ ℓ(y) indicates that they are not. The logarithmic term is the LogSoftmax of the negative distances between an anchor picture and a query picture; it is computed for the anchor pictures in the support set against the query pictures in the query set and accumulated into the loss value, which is used to update and optimize the model parameters. The optimizer adopted by the invention is the Adam optimizer with a learning rate of 0.01; a sketch of this loss is given below.
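A sketch of this episodic loss is shown below, built on the improved_snr_distance above; it assumes per-picture class indices as integer tensors, and the variable names are illustrative. The LogSoftmax over the negated distances to all support anchors is accumulated for the same-class anchor/query pairs, matching the negative log-likelihood described above.

```python
import torch
import torch.nn.functional as F

def episode_loss(support_vecs, support_labels, query_vecs, query_labels):
    """Negative log-likelihood over metric distances for one training task.

    support_vecs: (n*k, dim) anchor feature vectors; query_vecs: (q*k, dim);
    support_labels / query_labels: integer class-index tensors.
    """
    loss = support_vecs.new_zeros(())
    for qv, ql in zip(query_vecs, query_labels):
        # distances from this query picture to every anchor picture in the support set
        dists = torch.stack([improved_snr_distance(sv, qv) for sv in support_vecs])
        log_p = F.log_softmax(-dists, dim=0)   # LogSoftmax of the negative distances
        loss = loss - log_p[support_labels == ql].sum()  # accumulate same-class terms
    return loss
```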
According to the test set divided in step one, the accuracy on the test set is calculated: using the distance formula of step four, the distance between every picture of the query set and each picture in the support set is computed, whether a match is successful is judged by the shortest-distance principle, and the corresponding recognition rate is calculated from the matching results.
Step six: matching the palm prints. Using the network model parameters trained in step five, the support set picture with the shortest distance to the input picture is selected as the match during matching, according to the matching formula:

â = argmin_{a ∈ S} D(f(a; p), f(x; p))

where x is the input picture, selected from the query set, a is an anchor picture, i.e., a picture in the support set S, and p denotes the model parameters after training is completed; the formula finally yields the anchor picture â matched with x. A matching sketch is given below.
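A matching sketch under this formula might look as follows: the picture to be identified is assigned to the support (database) picture whose feature vector has the smallest improved SNR distance to it. Function and variable names are illustrative.

```python
import torch

def match(model, query_img, support_imgs):
    """Return the index of the support picture matched with the query picture."""
    model.eval()  # inference mode (fixes batch-norm statistics)
    with torch.no_grad():
        qv = model(query_img.unsqueeze(0)).squeeze(0)  # f(x; p)
        dists = torch.stack([
            improved_snr_distance(model(a.unsqueeze(0)).squeeze(0), qv)  # D(f(a; p), f(x; p))
            for a in support_imgs
        ])
    return int(torch.argmin(dists))  # shortest-distance principle
```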
In one embodiment:
s1: collecting palm images, extracting a palm print ROI area through a specific palm print segmentation method, classifying and counting palm print ROI area image samples, wherein the size of the palm print ROI area image is 128 x 128, and the flow is shown in FIG. 2.
S2: dividing the collected palm print images into a training set and a test set in a 1:1 ratio, wherein the divided categories do not intersect.
S3: dividing the training set and the test set in the 1-shot 10-way, 1-shot 15-way, 5-shot 10-way and 5-shot 15-way modes, i.e., 10 or 15 categories are randomly selected from the training set and the test set respectively, 1 or 5 samples of each category are randomly selected as the support set, and the remaining samples serve as the query set; the training set is divided into 100 tasks in each iteration period and the test set into 200 tasks, with 50 iteration periods in total. The flow is shown in FIG. 3.
S4: inputting each task into the small sample measurement network, which passes sequentially through the 4 convolution modules and the 1 dimension-reduction module; each convolution module consists of a convolution layer with 64 convolution kernels of size 3 × 3, a batch normalization layer, a ReLU activation function layer and a 2 × 2 max pooling layer, and the dimension-reduction module converts the high-dimensional feature map into a feature vector.
S5: training and testing the model. For each training task, the feature vectors obtained by inputting the support set and query set pictures into the network are used to compute the corresponding metric distances by the distance formula above, the corresponding loss value is computed from these distances by the loss formula, and the model parameters are optimized using the Adam algorithm with a learning rate of 0.01; after 50 periods the optimized model parameters are obtained. The flow is shown in FIG. 4, and a condensed training-loop sketch is given below.
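A condensed training-loop sketch tying the pieces above together (100 tasks per period, 50 periods, Adam with learning rate 0.01) might look as follows; train_set and the batching helper to_tensors are assumed placeholders, not part of the patent.

```python
import torch

model = MetricNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for period in range(50):                   # 50 iteration periods
    for _ in range(100):                   # 100 training tasks per period
        support, query = sample_task(train_set, k=10, n=1, q=5)  # e.g. 1-shot 10-way
        s_imgs, s_labels = to_tensors(support)   # assumed helper: stacks images,
        q_imgs, q_labels = to_tensors(query)     # returns (image tensor, label tensor)
        loss = episode_loss(model(s_imgs), s_labels, model(q_imgs), q_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```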
For each test task, the feature vectors obtained by inputting the support set and query set pictures into the network are likewise used to compute the corresponding metric distances, and the recognition accuracy is calculated by the shortest-distance principle; the flow is shown in FIG. 4.
S6: deploying the trained model to the corresponding production environment, establishing a corresponding palm print database, taking the images of the palm print database as the support set and the palm print images to be identified as the query set, and computing the matching and the identification result by the matching formula, i.e., the shortest-distance principle; the flow is shown in FIG. 5.
Through experiments, recognition results were obtained on the palm print data set (the results table is reproduced only as an image in the original filing and is omitted here).
the foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (8)

1. A palm print image recognition method based on a small sample measurement network is characterized by comprising the following steps:
s1: collecting and extracting a palm print image ROI, arranging the palm print image ROI into corresponding data sets, dividing a training set and a testing set in the palm print data sets in a small sample learning mode, and dividing the training set and the testing set into a plurality of tasks;
s2: constructing a light-weight small sample measurement network, and extracting a feature vector by using the network;
s3: obtaining a feature vector corresponding to each palm print ROI image through S2;
s4: constructing a distance formula of the feature vectors, and measuring the distance of the feature vectors by using the formula;
s5: designing a loss function based on the negative log-likelihood of the metric distance, updating model parameters in a training network through the loss function, and calculating the accuracy of a test set and the recognition rate of a query set;
s6: storing and deploying the network model to the actual operating environment according to the network model parameters trained in S5.
2. The method for recognizing the palm print image based on the small sample metric network as claimed in claim 1, wherein the S1 specifically comprises the following steps:
s1.1: collecting a palm image, extracting the palm print image ROI by using a specific preprocessing and image segmentation method, and sorting the ROIs into a corresponding data set, wherein the data set is randomly divided into a training set and a test set in a 1:1 ratio, and the label spaces of the training set and the test set do not intersect;
s1.2: dividing the divided training set and test set into h tasks in the n-shot k-way mode, wherein each task is divided into a support set and a query set;
the support set contains k classes with n samples per class, wherein the value of n ranges from 1 to the maximum number of samples of any class and the value of k ranges from 1 to the maximum number of classes;
the query set is q-shot k-way and is divided in the same way as the support set, only the corresponding shot values may differ; a divided task is selected by randomly choosing k classes from the training set or the test set and randomly choosing n samples or q samples from each of the k classes.
3. The method for recognizing the palm print image based on the small sample metric network as claimed in claim 2, wherein the S2 specifically comprises the following steps:
s2.1: the structure of the small sample measurement network consists of 4 convolution modules and 1 dimension-reduction layer;
s2.2: each of the 4 convolution modules has its corresponding number of input and output channels, and the structure of each module comprises a two-dimensional convolution layer whose convolution kernels are of size 3 × 3, with 64 convolution kernels and a convolution padding of 1 pixel; then a batch normalization layer; followed by a layer of ReLU activation functions; the last layer is a max pooling layer with window size 2 × 2 and stride 2;
s2.3: the dimension-reduction layer converts the high-dimensional feature map output by the 4 convolution modules into a 1-dimensional feature vector;
s2.4: randomly initializing the network structure parameters according to the constructed network structure;
s2.5: the entire network model can be expressed as:

v = f(x; p)

wherein the input image is x, the network model is f, the parameters in the network model are p, and the feature vector output by the network model is v.
4. The method for recognizing the palm print image based on the small sample metric network as claimed in claim 3, wherein the step S3 specifically comprises the steps of:
s3.1: converting each palm print ROI image into an RGB image of size 128 × 128 × 3 and inputting it into the small sample measurement network of S2 to obtain the corresponding feature vector;
s3.2: according to the small sample learning method in S1, for each task the support set yields n × k feature vectors and the query set yields q × k feature vectors, i.e., n or q feature vectors for each of the k classes.
5. The method for recognizing the palm print image based on the small sample metric network as claimed in claim 4, wherein the step S4 specifically comprises the steps of:
according to the feature vectors obtained in S3 and the small sample learning task mode defined in S1, the metric distance is calculated between the q × k feature vectors in each task's query set and the n × k feature vectors in the support set, with the following distance formula:

D(v_1, v_2) = α · d(v_1, v_2) + β · d(v_2, v_1)

wherein v_1 and v_2 are feature vectors obtained in S3, α and β represent distance coefficients, and the function d is defined as follows:

d(v_1, v_2) = var(v_1 − v_2) / var(v_1), with var(v) = (1/N) · Σ_{i=1}^{N} (v_i − μ_v)²

wherein μ_v represents the mean of feature vector v and N denotes the dimension of feature vector v.
6. The method for recognizing a palm print image based on a small sample metric network as claimed in claim 5, wherein the step S5 specifically comprises the steps of:
s5.1: according to the metric distance calculated in S4, the small sample learning method in S1 and the divided training set, the model parameters in the network are updated by adopting the following loss function:

L = − Σ_{y ∈ Q_tr} Σ_{x ∈ S_tr, ℓ(x) = ℓ(y)} log ( exp(−D(f(x; p), f(y; p))) / Σ_{x′ ∈ S_tr} exp(−D(f(x′; p), f(y; p))) )

wherein S_tr represents the support set in a training task, Q_tr represents the query set in a training task, ℓ(x) = ℓ(y) indicates that anchor picture x of the support set and query picture y of the query set are of the same class, and ℓ(x) ≠ ℓ(y) indicates that they are not; the logarithmic term is the LogSoftmax of the negative distances between anchor pictures and query pictures, computed for the anchor pictures in the support set against the query pictures in the query set and finally accumulated into a loss value, which is used to update and optimize the model parameters;
s5.2: according to the test set divided in S1, the accuracy on the test set is calculated: using the distance formula in S4, the distance between every picture of the query set and each picture in the support set is computed, whether a match is successful is judged by the shortest-distance principle, and the corresponding recognition rate is calculated from the matching results.
7. The method for identifying the palm print image based on the small sample metric network as claimed in claim 6, wherein the optimizer adopted for optimizing the model parameters in S5.1 is the Adam optimizer, and the learning rate is 0.01.
8. The method for recognizing the palm print image based on the small sample metric network as claimed in claim 6, wherein the S6 specifically comprises the following steps:
s6.1: according to the network model parameters trained in the S5, storing and deploying the network model parameters to an actual operation environment;
firstly, establishing a palm print database, extracting corresponding characteristic vectors by using a trained model, and storing the characteristic vectors in the database or an internal memory for convenient calculation and matching;
s6.2: obtaining the anchor picture matched with x by the matching formula:

â = argmin_{a ∈ S} D(f(a; p), f(x; p))

wherein x is the picture to be matched, S represents the support set in a test task, a is an anchor picture, i.e., a picture in the support set, and p is the network model parameters after training is completed.
CN202211528943.2A 2022-12-01 2022-12-01 Palm print image identification method based on small sample measurement network Pending CN115937910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211528943.2A CN115937910A (en) 2022-12-01 2022-12-01 Palm print image identification method based on small sample measurement network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211528943.2A CN115937910A (en) 2022-12-01 2022-12-01 Palm print image identification method based on small sample measurement network

Publications (1)

Publication Number Publication Date
CN115937910A (en)

Family

ID=86653614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211528943.2A Pending CN115937910A (en) 2022-12-01 2022-12-01 Palm print image identification method based on small sample measurement network

Country Status (1)

Country Link
CN (1) CN115937910A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116340849A (en) * 2023-05-17 2023-06-27 南京邮电大学 Non-contact type cross-domain human activity recognition method based on metric learning
CN116340849B (en) * 2023-05-17 2023-08-15 南京邮电大学 Non-contact type cross-domain human activity recognition method based on metric learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination