CN114612968A - Convolutional neural network-based lip print identification method - Google Patents

Convolutional neural network-based lip print identification method

Info

Publication number
CN114612968A
Authority
CN
China
Prior art keywords
neural network
identification
training
lip
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210206420.XA
Other languages
Chinese (zh)
Inventor
韦静
张磊磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yancheng Institute of Technology
Original Assignee
Yancheng Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yancheng Institute of Technology filed Critical Yancheng Institute of Technology
Priority to CN202210206420.XA priority Critical patent/CN114612968A/en
Publication of CN114612968A publication Critical patent/CN114612968A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for identifying lip prints based on a convolutional neural network, and belongs to the field of lip print identification. First, a lip print data set is established; the lip print images to be identified are then divided proportionally into a training set, a verification set, and a test set, and the data to be trained are read in and normalized. A convolutional neural network model is then built, the training and verification data are input into the network for training, and the trained recognition model is saved; finally, the model is used to predict and identify lip prints. The method exploits the weight-sharing property of convolutional neural networks to fully extract lip print features for classification and identification, making maximal use of the available feature information. It improves the accuracy of lip print identification, shortens the identification time, simplifies the complex pipeline of traditional hand-designed feature extraction and classification methods, and promotes the adoption of lip print recognition in the fields of identity recognition and verification.

Description

Convolutional neural network-based lip print identification method
Technical Field
The invention belongs to the field of biological feature identification, and particularly relates to a lip print identification method based on a convolutional neural network.
Background
Biometric identification technology closely combines computing with technologies such as optics, acoustics, biosensors, and biometric principles, using the inherent physiological characteristics of the human body (such as fingerprints, faces, and irises) and behavioral characteristics (such as handwriting, voice, and gait) to recognize and verify personal identity; these traits have the important properties of uniqueness, permanence, and portability. With the development of machine learning and deep learning, technologies such as face recognition, fingerprint recognition, and palm print recognition have been successfully applied in many fields, including intelligent attendance systems, smart locks, payment authentication, mobile phone unlocking, criminal investigation, forensic medicine, and identity cards, and future biometric technology will be used in commercial, public-project, public and social security, personal-life, and identity-document applications. Many industries, including information, manufacturing, and education, are beginning to show a trend toward large-scale application of biometric technology.
Convolutional Neural Networks (CNNs) are a class of feed-forward neural networks with a deep structure that include convolution operations; they have the important properties of local connectivity and parameter sharing, and are one of the representative methods of deep learning. Their advent greatly simplified the image recognition pipeline, removing the need to spend large amounts of time hand-designing feature extraction methods. A CNN mainly consists of several alternating convolutional and pooling layers, followed by one or more fully-connected layers, with a classifier as the output layer. The role of a convolutional layer is to extract features from the input data; it contains several convolution kernels, and each element of a kernel corresponds to a weight coefficient and a bias, analogous to a neuron of a feed-forward network. Kernel sizes are typically 3 x 3 or 5 x 5; the greater the number of kernels, the more features are extracted for classification. For a neural network to fit complex functions, non-linear activation functions must be used; common choices include the sigmoid function, the tanh function, the rectified linear unit (ReLU), and the Leaky ReLU function. After a convolutional layer extracts features, the output feature map is passed to a pooling layer for feature selection and information filtering. The pooling layer applies a preset pooling function that replaces the value at a single point in the feature map with a statistic of its neighborhood, thereby reducing the data dimension.
The pooling layer selects pooling regions in the same way that a convolution kernel scans the feature map, controlled by the sampling window size, stride, and padding; the pooling modes include max pooling and average pooling. The fully-connected layer is equivalent to the hidden layer in a traditional feed-forward neural network. It sits at the end of the hidden part of the convolutional network, consists of several neurons, and passes signals only to other fully-connected layers; the number of neurons in this layer strongly affects both the recognition result and the network training time. The layer before the output layer is usually fully connected, so the structure and operation of the output layer are the same as in a traditional feed-forward network. For image classification problems, the output layer can produce the classification result using a logistic function or a normalized exponential function (the Softmax function).
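As an illustration (not part of the patent text), the normalized exponential function described above can be computed in a few lines; the max-subtraction trick for numerical stability is a standard convention, not something the patent specifies:

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating, for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Example: three class scores output by a fully-connected layer.
probs = softmax([2.0, 1.0, 0.1])
# Each probability lies in [0, 1] and the vector sums to 1.
```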
Disclosure of Invention
The invention provides a recognition method based on a convolutional neural network, aimed mainly at the problems of existing lip print recognition methods: hand-designed feature extraction and recognition are overly complex and time-consuming, recognition accuracy is insufficient, and recognition time is long. Compared with existing lip print identification methods, the distinguishing feature and innovation of this method is that it merges feature extraction and classification into one process, automatically extracting lip print features for classification and reducing the influence that a separate feature extraction method can have on the classification result. The method can be trained on a large-scale lip print data set and then perform prediction and recognition on small data samples to achieve personal identification; it exploits the local connectivity and parameter sharing of the convolutional neural network to fully extract lip print feature information and improve the classifier's performance on lip prints. The trained model can effectively perform predictive recognition on the test data set.
The method specifically comprises the following steps:
Step a: collect lip print images, preprocess the images, and establish a lip print data set.
Step b: divide the lip print data to be identified proportionally into a training set, a verification set, and a test set, and read the lip print images to be trained.
Step c: read the lip print data to be trained and normalize it.
Step d: build the network structure of the convolutional neural network based on the PyTorch framework; the structure consists of an input layer, a convolutional layer, a pooling layer, a convolutional layer, a pooling layer, a fully-connected layer, and an output layer.
Step e: set the parameters of the convolutional neural network, and input the training set normalized in step c into the network built in step d for training; input the verification set normalized in step c into the same network for verification. If the verification result is greater than or equal to a set threshold, proceed to step f; if it is below the threshold, adjust the parameters, repeat step e, and retrain the network with the training set.
Step f: obtain the weight parameters of each layer of the trained convolutional neural network from step e, and save the trained model.
Step g: load the model parameters saved in step f, perform predictive recognition on the test set normalized in step c, output the corresponding class and recognition rate, and terminate.
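The patent does not state which normalization step c uses; a minimal sketch under the common (assumed) convention of scaling 8-bit pixel values into [0, 1]:

```python
def normalize_pixels(image):
    """Scale 8-bit pixel values (0-255) into the [0, 1] range.

    `image` is a nested list of rows of integer pixel values; the
    [0, 1] scaling convention is an assumption, not taken from the patent.
    """
    return [[px / 255.0 for px in row] for px, row in ((None, r) for r in image)]

# The generator above is needlessly indirect; the plain version:
def normalize_pixels(image):
    return [[px / 255.0 for px in row] for row in image]

img = [[0, 128, 255],
       [64, 32, 16]]
norm = normalize_pixels(img)
```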
Further, step a specifically comprises: collect lip print images by photographing clear, multi-angle lip images with a mobile terminal device under natural illumination using a contactless acquisition method, and create classification labels, storing each class in its own folder. Because the size of the data set and the proportions used to divide it both affect the recognition performance of the model, multiple experiments on the data split are needed. The collected lip print images are then preprocessed: the images are cropped and the input image size is fixed, so that richer lip print feature information can be extracted and used for classification and recognition. To avoid overfitting of the recognition model, data augmentation is used to expand the data set; common augmentation methods include rotation, mirroring, blurring, adding noise, and increasing or decreasing image brightness.
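Two of the augmentations listed above, mirroring and rotation, can be sketched on a nested-list image; a real pipeline would more likely use an image library such as torchvision, so treat this as an illustrative toy implementation:

```python
def mirror_horizontal(image):
    # Flip each row left-to-right (mirror augmentation).
    return [row[::-1] for row in image]

def rotate_90_clockwise(image):
    # Rotate the image 90 degrees clockwise (rotation augmentation):
    # reverse the rows, then transpose.
    return [list(row) for row in zip(*image[::-1])]

img = [[1, 2],
       [3, 4]]
mirrored = mirror_horizontal(img)   # [[2, 1], [4, 3]]
rotated = rotate_90_clockwise(img)  # [[3, 1], [4, 2]]
```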
Further, step b specifically comprises: divide the established lip print data into three subsets in a given proportion, namely a training set, a verification set, and a test set. Create a classification label for the image data in each folder, ensuring that the data sets come from different people and are stored in separate subfolders.
Further, step d specifically comprises: build a convolutional neural network based on the PyTorch framework, with the structure: input layer - convolutional layer - pooling layer - convolutional layer - pooling layer - fully-connected layer - output layer. The convolutional layer uses six 5 x 5 convolution kernels; the pooling layer's sampling window is 2 x 2, and the window sliding stride is 2. To address the problems of vanishing and exploding gradients, a batch normalization (BN) layer is added between the convolutional and pooling layers, ReLU is used as the activation function, and the Dropout method is applied between the fully-connected layers to randomly discard neurons and reduce the risk of overfitting. Finally, a Softmax logistic regression layer handles the multi-class problem: for the one-dimensional vector output by the fully-connected layer, it computes the conditional probability that the vector belongs to each class and normalizes the results so that the output probabilities lie in the range [0, 1].
Further, step e specifically comprises: network training involves hyper-parameters such as the number of training epochs, the number of images per batch (Batch), and the learning rate. Although the number of epochs does not change the final recognition result, the value must not be set too small, or the model will finish training without converging. The batch size must be set according to the size of the lip print data set and the memory of the hardware; it affects training speed and training time, and too large a value exhausts the device memory. The learning rate controls the rate of gradient descent and affects the final recognition performance of the model: too small a value slows convergence, while too large a value hinders convergence and causes oscillation, instability, and non-convergence, so an appropriate learning rate must be chosen. After the network parameters are set, the data set is input into the convolutional neural network built in step d for training, and the recognition rate on the verification set is observed and compared with the set threshold. If it exceeds the threshold, model training ends; if it falls below the threshold, the network structure and hyper-parameters are adjusted and the data set is input again for training until the expected recognition rate is reached.
Further, step g specifically comprises: load the model and weight parameters saved in step f, establish the classification labels and corresponding index values for the data set files, input the lip print test set normalized in step c into the model for predictive recognition, and finally output the recognition result, namely the classification label and recognition rate corresponding to the image.
Advantages: the invention provides a method for identifying lip prints based on a convolutional neural network, with the following benefits:
(1) compared with traditional lip print identification methods, the method simplifies the lip print preprocessing pipeline, shortens the recognition cycle, and improves recognition accuracy. A batch normalization layer and the Dropout method are added to the network model, effectively addressing vanishing and exploding gradients and avoiding overfitting; suitable hyper-parameters and an Adam optimizer are chosen for training, which speeds up model convergence, raises accuracy on the verification and test data sets, and improves the generalization ability and stability of the recognition model;
(2) the method can be trained on a large-scale lip print data set, suiting it for applications in identity recognition and verification; it can predict from small samples with high accuracy and short recognition time;
(3) the method uses an image data set containing many types of lip prints, making lip print identification better suited to practical scenarios; it improves the efficiency of identifying criminal suspects or deceased persons in criminal investigation and forensic medicine, and extends the application of lip print identification to information security and identity recognition;
(4) the method automatically extracts and classifies lip print features; it is highly automated, robust, and interpretable, and is easy for researchers in the field to understand, popularize, and apply.
Drawings
FIG. 1 compares the traditional lip print identification process with the convolutional neural network identification process;
FIG. 2 is the overall flow diagram of the convolutional neural network-based lip print identification method;
FIG. 3 shows example images from the lip print data set;
FIG. 4 is a diagram of the convolutional neural network built with PyTorch;
FIG. 5 is a flow chart of the convolutional neural network-based lip print recognition;
FIG. 6 shows the model accuracy output of the invention; in the figure, train_acc is the training set accuracy and val_acc is the verification set accuracy;
FIG. 7 shows the model loss output of the invention; in the figure, train_loss is the training set loss and val_loss is the verification set loss.
Detailed Description
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
the invention mainly aims to solve the problems that the traditional lip print identification method is long in identification period, complex in image preprocessing process, difficult in feature extraction, high in identification rate and easy to be influenced by human factors and the like. A method for identifying lip veins based on a convolutional neural network is provided. Fig. 1 shows a comparison between the conventional identification method process and the identification process based on the convolutional neural network; FIG. 2 shows a general flow chart of a method for identifying a lip print based on a convolutional neural network; FIG. 3 gives an example of a partial image of a lip print data set; FIG. 4 shows a structure diagram of a convolutional neural network built based on Pythrch; fig. 5 shows the main flow of the lip print identification method of the convolutional neural network. The method mainly comprises the following steps:
the method comprises the following steps: the lip print image is collected, a clear and multi-angle lip image is shot by using a smart phone in a non-contact collection mode under the environment condition of natural illumination, as shown in figure 3, and classification identification tags are established and stored in a folder respectively. The lip print images are collected for 1200 sheets, and are divided into a training set, a verification set and a test set according to the ratio of 6:3:1, namely 720 sheets of training set, 360 sheets of verification set and 120 sheets of test set. In order to avoid the overfitting phenomenon of the recognition model, a data enhancement method is used for expanding the data set, and the common data enhancement method comprises rotation, mirror image, blurring, noise adding, image brightness increasing, image brightness reducing and the like.
Step two: build a convolutional neural network based on the PyTorch framework. As shown in FIG. 4, the network structure is: input layer - convolutional layer - pooling layer - convolutional layer - pooling layer - fully-connected layer - output layer. The convolutional layer uses six 5 x 5 convolution kernels; the pooling layer's sampling window is 2 x 2, and the window sliding stride is 2. To address vanishing and exploding gradients, a Batch Normalization layer was added between the convolutional and pooling layers, ReLU was used as the activation function, and Dropout with a rate of 0.5 was applied between the fully-connected layers. Finally, a Softmax logistic regression layer handles the multi-class problem: for the one-dimensional vector output by the fully-connected layer, it computes the conditional probability of each class and normalizes the results so that the output probabilities lie in the range [0, 1].
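The structure in step two can be sketched in PyTorch roughly as follows. The patent specifies six 5 x 5 kernels, 2 x 2 pooling with stride 2, Batch Normalization between convolution and pooling, ReLU, and Dropout of 0.5; the second convolution's channel count (16), the input size (64 x 64 RGB), the hidden fully-connected width (120), and the number of classes are illustrative assumptions not given in the patent:

```python
import torch
import torch.nn as nn

class LipPrintCNN(nn.Module):
    """Sketch of the patent's network: conv-BN-ReLU-pool twice, then
    fully-connected layers with Dropout. Channel counts beyond the first
    layer and the input size are assumptions."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),    # six 5x5 kernels, as stated
            nn.BatchNorm2d(6),                 # BN between conv and pooling
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(6, 16, kernel_size=5),   # 16 is an assumed channel count
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120),      # assumes 64x64 input images
            nn.ReLU(),
            nn.Dropout(0.5),                   # Dropout between FC layers
            nn.Linear(120, num_classes),       # Softmax applied by the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LipPrintCNN(num_classes=10)
out = model(torch.randn(1, 3, 64, 64))  # one dummy 64x64 RGB image
```

Note that in PyTorch the Softmax is usually folded into `nn.CrossEntropyLoss` during training rather than added as a layer, which is why the sketch ends with raw logits.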
Step three: set the hyper-parameters of the convolutional neural network, including 50 training epochs, a batch size of 16 images per iteration (batch_size), and a learning rate of 0.001. An Adam optimizer is used to train the network model, and loss values are computed with a cross-entropy loss function. After the network parameters are set, the data set is input into the convolutional neural network built in step two for training. The threshold is set to 90%; the accuracy on the verification set is compared with this threshold, and if it exceeds the threshold, model training ends; if it falls below, the network structure and hyper-parameters are adjusted and the data set is input again for training until the expected recognition rate is reached. The model of the invention achieves an average recognition accuracy of 98%; the accuracy and loss values during training are visualized in FIG. 6 and FIG. 7.
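The training setup in step three (Adam, cross-entropy loss, 50 epochs, learning rate 0.001, batches of 16 supplied by the loader) can be sketched as below, assuming a `model` built as a PyTorch `nn.Module` and a `train_loader` yielding (image, label) batches; the tiny stand-in model and random data at the bottom are only there to make the sketch self-contained:

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=50, lr=0.001):
    """Training loop sketch: Adam optimizer plus cross-entropy loss,
    with the hyper-parameters given in the patent."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # applies Softmax internally
    model.train()
    for epoch in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Minimal smoke run with a tiny stand-in model and random data.
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 4))
toy_loader = [(torch.randn(16, 3, 8, 8), torch.randint(0, 4, (16,)))]
train(toy_model, toy_loader, epochs=1)
```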
Step four: load the model and weight parameters saved during network training, establish the classification labels and corresponding index values for the data set files, input the normalized lip print test set into the model for predictive recognition, and finally output the recognition result, namely the classification label and recognition rate corresponding to the image.
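Step four (loading saved weights and predicting the class label and its probability for a normalized test image) can be sketched as follows; the checkpoint file name, class-label list, input size, and stand-in model are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def predict(model, image, class_labels):
    """Return the predicted class label and its Softmax probability
    for one normalized image tensor of shape (channels, H, W)."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))          # add the batch dimension
        probs = F.softmax(logits, dim=1).squeeze(0)
        idx = int(torch.argmax(probs))
    return class_labels[idx], float(probs[idx])

# Illustrative usage with a stand-in model; a real run would first do
# model.load_state_dict(torch.load("lip_print_cnn.pt"))  # assumed file name
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 3))
label, confidence = predict(model, torch.randn(3, 8, 8),
                            ["person_a", "person_b", "person_c"])
```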
The foregoing is only a preferred embodiment of the present invention, and it should be noted that modifications and improvements can be made by persons skilled in the art without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (6)

1. A method for identifying a lip print based on a convolutional neural network is characterized by comprising the following steps:
step a, collecting lip print images and establishing a lip print data set;
step b, dividing the lip print data to be identified proportionally into a training set, a verification set, and a test set, and reading the lip print images to be trained;
step c, reading the lip print data to be trained and normalizing it;
step d, building the network structure of the convolutional neural network based on the PyTorch framework, the structure consisting of an input layer, a convolutional layer, a pooling layer, a convolutional layer, a pooling layer, a fully-connected layer, and an output layer;
step e, setting the parameters of the convolutional neural network, and inputting the training set normalized in step c into the network built in step d for training; inputting the verification set normalized in step c into the same network for verification; if the verification result is greater than or equal to a set threshold, executing step f; if the verification result is below the threshold, adjusting the parameters, repeating step e, and retraining the network with the training set;
step f, obtaining the weight parameters of each layer of the trained convolutional neural network from step e, and saving the recognition model of the convolutional neural network;
and step g, loading the model parameters saved in step f, performing predictive recognition on the test set normalized in step c, and outputting the corresponding class and recognition rate.
2. The method for identifying lip prints based on a convolutional neural network as claimed in claim 1, wherein step a specifically comprises: collecting lip print images by photographing clear, multi-angle lip images with a mobile device (such as a smartphone, digital camera, or video camera) under natural illumination using a contactless acquisition method, and creating classification labels stored in separate folders; because the size of the data set and the proportions used to divide it affect the recognition performance of the recognition model, the data split must be tested multiple times; the collected lip print images are then preprocessed, including cropping and fixing the input image size, so that richer lip print feature information can be extracted and used for classification; and, to avoid overfitting of the recognition model, a data augmentation method is used to expand the data set, common augmentation methods including rotation, mirroring, blurring, adding noise, and increasing or decreasing image brightness.
3. The method for identifying lip prints based on a convolutional neural network as claimed in claim 1, wherein step b specifically comprises: dividing the established lip print data into three subsets in a given proportion, namely a training set, a verification set, and a test set; establishing classification labels for the image data in each folder; and ensuring that the data sets come from different people and are stored in separate subfolders.
4. The method for identifying lip prints based on a convolutional neural network as claimed in claim 1, wherein step d specifically comprises: building a convolutional neural network model based on the PyTorch framework, with the structure: input layer - convolutional layer - pooling layer - convolutional layer - pooling layer - fully-connected layer - output layer, wherein each convolutional layer uses six 5 x 5 convolution kernels, the sampling window of each pooling layer is 2 x 2, and the window sliding stride is 2; to address the problems of vanishing and exploding gradients, a batch normalization (BN) layer is added between each convolutional layer and pooling layer, ReLU is used as the activation function, and the Dropout method is applied between the fully-connected layers to randomly discard neurons and reduce the risk of overfitting; and finally a Softmax logistic regression layer is used to handle the multi-class problem.
5. The method for identifying lip prints based on a convolutional neural network as claimed in claim 1, wherein step e specifically comprises: network training involves hyper-parameters such as the number of training epochs, the number of images per batch (Batch), and the learning rate; although the number of epochs does not change the final recognition result, the value must not be set too small, or the model will finish training without converging; the batch size must be set according to the size of the lip print data set and the memory of the hardware, it affects training speed and training time, too large a value exhausts the device memory, and an even number is commonly used for the batch size; the learning rate controls the rate of gradient descent and affects the final recognition performance of the model, too small a value slows convergence, while too large a value hinders convergence and causes oscillation, instability, and non-convergence, so an appropriate learning rate must be selected; after the network parameters are set, the data set is input into the network for training, the recognition rate on the verification set is observed and compared with a set threshold, and if the recognition rate exceeds the threshold, model training ends, while if it falls below the threshold, the network structure and hyper-parameters are adjusted and the data set is input again for training until the expected recognition rate is reached.
6. The convolutional neural network-based lip print identification method as claimed in claim 1, wherein the step g specifically comprises: loading the model and weight parameters saved in the step f, establishing the classification labels and their corresponding index values from the files of the data set, inputting the lip print test set normalized in the step c into the model for prediction, and finally outputting the identification result, namely the classification label and recognition rate corresponding to each image.
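The inference step of claim 6 can be sketched as follows. The checkpoint file name, the label list, and the stand-in model are hypothetical; the claim specifies only loading saved weights, mapping indices to classification labels, and outputting the predicted label with its recognition rate.

```python
# Sketch of step g: load saved weights, map class indices to labels,
# and run prediction on the normalized test images.
import torch
import torch.nn as nn

labels = ["subject_01", "subject_02", "subject_03"]  # hypothetical classification tags
idx_to_label = dict(enumerate(labels))               # index value -> tag

model = nn.Sequential(nn.Flatten(), nn.Linear(16, len(labels)))
# In a real run, the weights saved in step f would be restored here:
# model.load_state_dict(torch.load("lipprint_cnn.pth"))

model.eval()
with torch.no_grad():
    test_batch = torch.rand(4, 1, 4, 4)              # normalized test images from step c
    probs = torch.softmax(model(test_batch), dim=1)  # per-class recognition rate
    conf, pred = probs.max(dim=1)
    for p, c in zip(pred.tolist(), conf.tolist()):
        print(f"label={idx_to_label[p]}  recognition_rate={c:.3f}")
```

Calling `model.eval()` before prediction matters for this architecture, since it switches the BN layers to their running statistics and disables Dropout.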
CN202210206420.XA 2022-03-02 2022-03-02 Convolutional neural network-based lip print identification method Withdrawn CN114612968A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210206420.XA CN114612968A (en) 2022-03-02 2022-03-02 Convolutional neural network-based lip print identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210206420.XA CN114612968A (en) 2022-03-02 2022-03-02 Convolutional neural network-based lip print identification method

Publications (1)

Publication Number Publication Date
CN114612968A true CN114612968A (en) 2022-06-10

Family

ID=81860650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210206420.XA Withdrawn CN114612968A (en) 2022-03-02 2022-03-02 Convolutional neural network-based lip print identification method

Country Status (1)

Country Link
CN (1) CN114612968A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205637A (en) * 2022-09-19 2022-10-18 山东世纪矿山机电有限公司 Intelligent identification method for mine car materials
CN115205637B (en) * 2022-09-19 2022-12-02 山东世纪矿山机电有限公司 Intelligent identification method for mine car materials
CN116609672A (en) * 2023-05-16 2023-08-18 国网江苏省电力有限公司淮安供电分公司 Energy storage battery SOC estimation method based on improved BWOA-FNN algorithm
CN116609672B (en) * 2023-05-16 2024-05-07 国网江苏省电力有限公司淮安供电分公司 Energy storage battery SOC estimation method based on improved BWOA-FNN algorithm
CN116824512A (en) * 2023-08-28 2023-09-29 西华大学 27.5kV visual grounding disconnecting link state identification method and device
CN116824512B (en) * 2023-08-28 2023-11-07 西华大学 27.5kV visual grounding disconnecting link state identification method and device

Similar Documents

Publication Publication Date Title
KR100866792B1 (en) Method and apparatus for generating face descriptor using extended Local Binary Pattern, and method and apparatus for recognizing face using it
KR101254177B1 (en) A system for real-time recognizing a face using radial basis function neural network algorithms
CN114612968A (en) Convolutional neural network-based lip print identification method
Jian et al. Densely connected convolutional network optimized by genetic algorithm for fingerprint liveness detection
KR20190123372A (en) Apparatus and method for robust face recognition via hierarchical collaborative representation
Benkaddour CNN based features extraction for age estimation and gender classification
Oleiwi et al. Integrated different fingerprint identification and classification systems based deep learning
Lazimul et al. Fingerprint liveness detection using convolutional neural network and fingerprint image enhancement
Borra et al. Face recognition based on convolutional neural network
CN114913610A (en) Multi-mode identification method based on fingerprints and finger veins
CN114743278A (en) Finger vein identification method based on generation of confrontation network and convolutional neural network
Xiao et al. An improved siamese network model for handwritten signature verification
Anastasopoulos et al. Political image analysis with deep neural networks
Ghoualmi et al. Feature Selection Based on Machine Learning Algorithms: A weighted Score Feature Importance Approach for Facial Authentication
Tvoroshenko et al. Analysis of methods for detecting and classifying the likeness of human features
Dubovečak et al. Face Detection and Recognition Using Raspberry PI Computer
CN112241680A (en) Multi-mode identity authentication method based on vein similar image knowledge migration network
Prasanth et al. Fusion of iris and periocular biometrics authentication using CNN
CN113128387B (en) Drug addiction attack recognition method for drug addicts based on facial expression feature analysis
Abhila et al. A deep learning method for identifying disguised faces using AlexNet and multiclass SVM
Al-Shareef et al. Face Recognition Using Deep Learning
Patel et al. A Survey Paper on Gender Classification using Deep Learning
Nugraha et al. Offline signature identification using deep learning and Euclidean distance
Devi et al. Attendance Management System using Face Recognition
Balakrishna ILLUMINATION FACE RECOGNITION USING DEEP LEARNING TECHNIQUES

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220610