CN113837161B - Identity recognition method, device and equipment based on image recognition - Google Patents
- Publication number
- CN113837161B (application CN202111427391.1A)
- Authority
- CN
- China
- Prior art keywords
- face
- training
- image
- vector
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention discloses an identity recognition method, device and equipment based on image recognition, belonging to the field of image recognition. The method comprises the following steps: constructing a portrait library, wherein each person in the portrait library corresponds to at least one face image, and each face image corresponds to identity information; taking a plurality of face images from the portrait library as training face images, constructing a face recognition network, reducing the dimension of the training face images, and training the face recognition network with the dimension-reduced training face images; acquiring a face image to be recognized, wherein a face image of the person corresponding to the face image to be recognized exists in the portrait library; and recognizing the face image to be recognized through the face recognition network to obtain a face recognition result and the corresponding identity information. The invention can be applied to small-sample training, accelerates the training of the face recognition network, improves face recognition accuracy, and realizes identity recognition by combining face recognition with identity information.
Description
Technical Field
The invention belongs to the field of image recognition, and particularly relates to an identity recognition method, an identity recognition device and identity recognition equipment based on image recognition.
Background
Face recognition is a biometric technique that identifies a person based on facial feature information. The term also covers a series of related technologies, commonly called portrait recognition or facial recognition: a camera or video camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and the detected faces are then recognized.
In the prior art, a large amount of face sample data is usually collected, a neural network is trained on this data, and the trained neural network is used for face recognition. However, with small samples the training effect of the neural network is poor, so the recognition accuracy decreases; moreover, during training the convergence rate is low and the training time is long.
Disclosure of Invention
Aiming at the above defects in the prior art, the identity recognition method, device and equipment based on image recognition provided by the invention solve the problems in the prior art.
In a first aspect, the present invention provides an identity recognition method based on image recognition, including:
constructing a portrait base, wherein the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to identity information;
taking a plurality of face images from a face image library as training face images, constructing a face recognition network, reducing the dimensions of the training face images, and training the face recognition network by adopting the training face images subjected to dimension reduction;
acquiring a face image to be recognized, wherein the face image of a person corresponding to the face image to be recognized exists in the face image library;
identifying a face image to be identified through a face identification network to obtain a face identification result and corresponding identity information;
the constructing of the face recognition network, the dimension reduction of the training face image, and the training of the face recognition network by adopting the training face image after the dimension reduction comprise:
adopting a BP neural network as a face recognition network;
preprocessing a training face image to obtain a preprocessed image;
performing dimensionality reduction on the preprocessed image to obtain a feature vector of the preprocessed image;
and setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector.
Further, the preprocessing the training face image includes: and carrying out graying, inclination correction, median filtering and normalization operation on the training face image.
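As a sketch of how the preprocessing chain above might look (graying, median filtering and normalization only; inclination correction is omitted for brevity, and the function name and image layout are assumptions, not taken from the patent):

```python
import numpy as np

def preprocess(rgb):
    """Grayscale -> 3x3 median filter -> min-max normalization.

    `rgb` is an (H, W, 3) uint8 array. Inclination correction (rotating
    the face so the eye line is level) is omitted in this sketch.
    """
    # Graying: standard luminance weights.
    gray = rgb @ np.array([0.299, 0.587, 0.114])

    # 3x3 median filtering: pad by edge replication, then take the
    # median over each pixel's neighbourhood.
    padded = np.pad(gray, 1, mode="edge")
    stacked = np.stack([padded[i:i + gray.shape[0], j:j + gray.shape[1]]
                        for i in range(3) for j in range(3)])
    filtered = np.median(stacked, axis=0)

    # Normalization to [0, 1].
    lo, hi = filtered.min(), filtered.max()
    return (filtered - lo) / (hi - lo + 1e-12)
```
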
Further, the performing the dimensionality reduction on the preprocessed image to obtain the feature vector of the preprocessed image includes:
constructing a covariance matrix C of the preprocessed images, the covariance matrix C being:

C = (1/N) · Σ_{n=1}^{N} (x_n − x̄)(x_n − x̄)^T

wherein n = 1, 2, …, N, N represents the total number of preprocessed images, x_n represents the vector of the n-th preprocessed image, x̄ represents the average face vector, and T represents the transpose symbol;

obtaining the eigenvectors u and eigenvalues λ of the covariance matrix C, the eigenvectors u and the eigenvalues λ being in one-to-one correspondence;

arranging all eigenvalues of the covariance matrix C in order from large to small, taking the first m eigenvalues, and forming a feature space U from the m eigenvectors corresponding to the first m eigenvalues, the feature space U = [u_1, u_2, …, u_m], wherein u_1 represents the eigenvector corresponding to the first sorted eigenvalue, u_2 the eigenvector corresponding to the second, and u_m the eigenvector corresponding to the m-th;

obtaining the projection of each preprocessed image in the feature space U and taking the projection as the feature vector of the preprocessed image:

y_n = U^T (x_n − x̄)
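The covariance construction, eigen-decomposition and projection steps above can be sketched in NumPy (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def pca_reduce(images, m):
    """Eigenface-style dimension reduction.

    `images` is an (N, D) array, one flattened preprocessed image per row.
    Returns the average face, the feature space U (D x m), and the (N, m)
    projections used as feature vectors.
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # Covariance matrix C = (1/N) * sum_n (x_n - mean)(x_n - mean)^T
    C = centered.T @ centered / len(images)
    # eigh returns ascending eigenvalues for the symmetric C, so
    # reverse the order to sort from largest to smallest.
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:m]
    U = vecs[:, order]          # feature space [u_1 ... u_m]
    features = centered @ U     # projections y_n = U^T (x_n - mean)
    return mean_face, U, features
```
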
further, the setting of an expected vector corresponding to the feature vector and the training of the face recognition network with the feature vector and the expected vector include:

setting an expected vector d corresponding to each feature vector y; inputting the feature vector y into the face recognition network to obtain an actual output vector o; obtaining an error value E according to the actual output vector o and the expected vector d; and determining whether the error value E is within a threshold range: if so, the training of the face recognition network is finished; otherwise, the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds in the face recognition network are updated, and the actual output vector o is obtained again.

The actual output vector o = (o_1, o_2, …, o_M) and the expected vector d = (d_1, d_2, …, d_M), wherein k = 1, 2, …, M, M represents the total number of neurons in the output layer of the face recognition network, o_k represents the actual output value of the k-th neuron in the output layer, and d_k represents the expected output value of the k-th neuron in the output layer;

the error value E is:

E = (1/2) · Σ_{k=1}^{M} (d_k − o_k)^2,  with  o_k = 1 / (1 + e^(−λ·S_k))  and  S_k = Σ_{j=1}^{L} w_jk · y_j − θ_k

wherein e represents the natural constant, λ represents the steepness factor, j = 1, 2, …, L, L represents the total number of neurons in the hidden layer, S_k represents the intermediate coefficient, w_jk represents the weight between the j-th neuron in the hidden layer and the k-th neuron in the output layer, y_j represents the output of the j-th neuron in the hidden layer after the n-th feature vector is input into the face recognition network, and θ_k represents the first threshold corresponding to the k-th neuron in the output layer.
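The error computation can be illustrated with a small NumPy sketch (names and shapes are assumptions; the patent only defines the symbols):

```python
import numpy as np

def forward_error(y, d, W, theta, lam=1.0):
    """Error E for one feature vector.

    `y` (L,) are the hidden-layer outputs, `d` (M,) the expected vector,
    `W` (L, M) the hidden-to-output weights, `theta` (M,) the first
    thresholds, and `lam` the steepness factor.
    """
    S = y @ W - theta                   # intermediate coefficients S_k
    o = 1.0 / (1.0 + np.exp(-lam * S))  # actual outputs o_k
    E = 0.5 * np.sum((d - o) ** 2)      # squared-error criterion
    return o, E
```
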
Further, the updating of the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer, and the threshold in the face recognition network includes:
determining adjustment amounts for the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds of the face recognition network, the adjustment amounts being:

Δw_jk(t) = η · δ_k · y_j + α · Δw_jk(t−1)
Δv_ij(t) = η · δ_j · x_i + α · Δv_ij(t−1)
Δθ_k(t) = −η · δ_k + α · Δθ_k(t−1)
Δγ_j(t) = −η · δ_j + α · Δγ_j(t−1)

wherein Δw_jk(t) represents the adjustment amount of the weight w_jk at the t-th training and Δw_jk(t−1) its adjustment amount at the (t−1)-th training; η represents the learning rate; δ_k represents the output error term of the k-th neuron in the output layer; v_ij represents the weight between the i-th neuron in the input layer and the j-th neuron in the hidden layer, i = 1, 2, …, P, P represents the total number of neurons in the input layer; x_i represents the output of the i-th neuron in the input layer after the n-th feature vector is input into the face recognition network; δ_j represents the output error term of the j-th neuron in the hidden layer; θ_k represents the first threshold corresponding to the k-th neuron in the output layer, Δθ_k(t) and Δθ_k(t−1) its adjustment amounts at the t-th and (t−1)-th trainings; γ_j represents the second threshold corresponding to the j-th neuron in the hidden layer, Δγ_j(t) and Δγ_j(t−1) its adjustment amounts at the t-th and (t−1)-th trainings; and α represents the momentum coefficient;

and updating the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds in the face recognition network according to the adjustment amounts.

The output y_j of the j-th neuron in the hidden layer after the n-th feature vector is input into the face recognition network is:

y_j = 1 / (1 + e^(−λ · (Σ_{i=1}^{P} v_ij · x_i − γ_j)))
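A minimal sketch of one adjustment step with a momentum term, assuming the standard gradient-descent-with-momentum form (the constants `eta` and `alpha` and the toy loss are illustrative):

```python
def momentum_step(grad, prev_delta, eta=0.1, alpha=0.9):
    """One adjustment with a momentum term.

    delta(t) = eta * (-grad) + alpha * delta(t-1); the momentum term
    alpha * delta(t-1) damps oscillation between successive trainings.
    """
    return -eta * grad + alpha * prev_delta

# Usage sketch on a 1-D quadratic loss 0.5 * w**2 (gradient = w):
w, delta = 5.0, 0.0
for _ in range(100):
    delta = momentum_step(grad=w, prev_delta=delta)
    w += delta
```
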
in a second aspect, the present invention provides an identity recognition apparatus based on image recognition, for implementing the identity recognition method in the first aspect, the identity recognition apparatus includes a construction module, a training module, an acquisition module, and a recognition module;
the construction module is used for constructing a portrait base, the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to one identity information;
the training module is used for taking a plurality of face images from the face image library as training face images, constructing a face recognition network, reducing the dimension of the training face images, and training the face recognition network by adopting the training face images after dimension reduction;
the acquisition module is used for acquiring a face image to be recognized, and the face image of a person corresponding to the face image to be recognized exists in the face database;
the recognition module is used for recognizing the face image to be recognized through the face recognition network to obtain a face recognition result and corresponding identity information.
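The four-module layout can be sketched as follows (class and method names are purely illustrative, not taken from the patent):

```python
class IdentityRecognizer:
    """Illustrative module layout only: construction, training,
    acquisition and recognition responsibilities in one place."""

    def __init__(self):
        self.library = {}    # construction module: the portrait library
        self.network = None  # training module: the face recognition network

    def construct(self, person_id, images, identity):
        # Construction module: at least one face image per person,
        # each associated with identity information.
        self.library[person_id] = {"images": images, "identity": identity}

    def train(self, trainer):
        # Training module: `trainer` builds a network from the library.
        self.network = trainer(self.library)

    def acquire(self, source):
        # Acquisition module: fetch the face image to be recognized.
        return source()

    def recognize(self, image):
        # Recognition module: map the image to a person and identity info.
        person_id = self.network(image)
        return person_id, self.library[person_id]["identity"]
```
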
In a third aspect, the invention provides an identification device based on image recognition, comprising a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory, causing the processor to perform the identification method of the first aspect.
The invention has the beneficial effects that:
(1) the invention provides an identity recognition method, an identity recognition device and identity recognition equipment based on image recognition, which can be used for recognizing human faces so as to perform identity recognition.
(2) According to the invention, when the adjustment quantity of the weight and the threshold is determined, the momentum item is introduced, so that the oscillation trend in the training process is reduced, and the training speed of the face recognition network is increased.
(3) In the invention, a steepness factor is introduced in the calculation of the error value, so that the convergence speed is accelerated.
(4) The invention solves the problem of non-full rank generated during small sample training by reducing the dimension of the training face image, and leads the training effect of the face recognition network to be better.
Drawings
Fig. 1 is a flowchart of an identity recognition method based on image recognition according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an identity recognition apparatus based on image recognition according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an identification device based on image recognition according to an embodiment of the present application.
The system comprises a building module 21, a training module 22, an acquisition module 23, an identification module 24, an identification device 30, a memory 31, a processor 32 and a bus 33.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the present invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an identity recognition method based on image recognition includes:
s11, constructing a portrait base, wherein the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to identity information.
Optionally, each person corresponds to a plurality of face images, and the face images may include face images in the directions of the front, the side, the oblique side, and the like. By identifying the face image, corresponding identity information can be obtained, so that the aim of identity identification is fulfilled.
S12, taking out a plurality of face images from the face image library as training face images, constructing a face recognition network, reducing the dimensions of the training face images, and training the face recognition network by adopting the training face images after dimension reduction.
Optionally, taking out a plurality of face images from the portrait library as training face images includes: selecting at least one person from the portrait library; and taking out part of the face images corresponding to the selected person to obtain the training face images. It should be noted that taking out a "part" here means taking out half or more of that person's face images.
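A sketch of this selection, assuming the portrait library maps each person to a list of face images (function name and library layout are assumptions):

```python
import random

def sample_training_faces(library, seed=0):
    """Take out half or more of each person's face images for training.

    `library` maps person -> list of face images; returns (training
    images, remaining images), both keyed by person.
    """
    rng = random.Random(seed)
    train, rest = {}, {}
    for person, images in library.items():
        k = max(1, (len(images) + 1) // 2)   # half or more
        picked = rng.sample(images, k)
        train[person] = picked
        rest[person] = [im for im in images if im not in picked]
    return train, rest
```
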
And S13, acquiring the face image to be recognized, wherein the face image of the person corresponding to the face image to be recognized exists in the face image library.
And S14, recognizing the face image to be recognized through a face recognition network to obtain a face recognition result and corresponding identity information.
A human face library is constructed by collecting a small amount of data, training face images are selected from the human face library, a face recognition network is trained, the face images to be recognized are recognized through the trained face recognition network, people corresponding to the face images to be recognized in the human face library are obtained, and therefore corresponding identity information is obtained.
By reducing the dimension of the training face image, the problem of non-full rank generated during small sample training is solved, and the training effect of the face recognition network is better.
In this embodiment, each person in the face image library corresponds to a plurality of face images, and the identity information corresponding to the face images includes a name, an age, a gender, and a number.
In a possible implementation manner, constructing a face recognition network, performing dimension reduction on a training face image, and training the face recognition network by using the training face image after dimension reduction, includes: adopting a BP neural network as a face recognition network; preprocessing a training face image to obtain a preprocessed image; performing dimensionality reduction on the preprocessed image to obtain a feature vector of the preprocessed image; and setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector.
In one possible implementation, the preprocessing of the training face image includes: and carrying out graying, inclination correction, median filtering and normalization operation on the training face image.
In a possible implementation, performing a dimension reduction process on the preprocessed image to obtain a feature vector of the preprocessed image includes:
A covariance matrix C of the preprocessed images is constructed, the covariance matrix C being:

C = (1/N) · Σ_{n=1}^{N} (x_n − x̄)(x_n − x̄)^T

wherein n = 1, 2, …, N, N represents the total number of preprocessed images, x_n represents the vector of the n-th preprocessed image, x̄ represents the average face vector, and T represents the transpose symbol.

The eigenvectors u and eigenvalues λ of the covariance matrix C are obtained; the eigenvectors u and eigenvalues λ are in one-to-one correspondence.

All eigenvalues of the covariance matrix C are arranged in order from large to small, the first m eigenvalues are taken, and the m eigenvectors corresponding to the first m eigenvalues form a feature space U = [u_1, u_2, …, u_m], wherein u_1, u_2 and u_m represent the eigenvectors corresponding to the first, second and m-th sorted eigenvalues respectively.

The projection of each preprocessed image in the feature space U is obtained and taken as the feature vector of the preprocessed image:

y_n = U^T (x_n − x̄)
In this embodiment, the feature vectors y are further processed to improve the performance of the face recognition network. The method comprises the following steps:

The feature vectors corresponding to the same person are grouped into one class, and an intra-class scatter matrix S_w is obtained:

S_w = Σ_{i=1}^{c} Σ_{m=1}^{K} (y_m^(i) − μ_i)(y_m^(i) − μ_i)^T

wherein i = 1, 2, …, c, c represents the total number of persons selected from the portrait library in step S12; m = 1, 2, …, K, K represents the total number of training face images corresponding to the same person (the total number of training face images corresponding to each person is the same, namely K); y_m^(i) represents the feature vector of the m-th training face image of the i-th person; μ_i represents the mean feature vector of the i-th person; and T represents the transpose symbol.

An inter-class scatter matrix S_b is obtained:

S_b = Σ_{i=1}^{c} K · (μ_i − μ)(μ_i − μ)^T

wherein μ represents the mean of all feature vectors.

The eigenvalues and eigenvectors of the matrix S_w^(−1) · S_b are obtained; the eigenvalues and eigenvectors are in one-to-one correspondence.

The largest r eigenvalues are chosen, and the eigenvectors corresponding to these r eigenvalues form a projection space W = [w_1, w_2, …, w_r].

The projection vector z = W^T · y is calculated, and the projection vector z is then used to train the face recognition network.
Optionally, dimension reduction processing may be performed on all the faces in the face library.
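The scatter-matrix projection described above can be sketched in NumPy (a standard Fisher-discriminant computation; names are illustrative, and `pinv` is used here only to guard against a singular intra-class matrix):

```python
import numpy as np

def lda_project(features, labels, r):
    """Project feature vectors onto the r leading eigenvectors of
    Sw^{-1} Sb, where Sw / Sb are the intra-/inter-class scatter
    matrices.

    `features` is (n, d); `labels` assigns each row to a person.
    """
    mu = features.mean(axis=0)
    d = features.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for person in set(labels):
        X = features[np.asarray(labels) == person]
        mu_i = X.mean(axis=0)
        Sw += (X - mu_i).T @ (X - mu_i)
        Sb += len(X) * np.outer(mu_i - mu, mu_i - mu)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1][:r]
    W = vecs[:, order].real
    return features @ W          # projection vectors z = W^T y
```
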
In one possible implementation, setting an expected vector corresponding to the feature vector, and training the face recognition network with the feature vector and the expected vector, includes:
An expected vector d corresponding to each feature vector y is set; the feature vector y is input into the face recognition network to obtain an actual output vector o; an error value E is obtained from the actual output vector o and the expected vector d. Whether the error value E is within a threshold range is then determined: if so, the training of the face recognition network is finished; otherwise, the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds in the face recognition network are updated, and the actual output vector o is obtained again. It is worth noting that, when the face recognition network is trained, the feature vector used for each training is different; that is, a different training face image is used each time.

The actual output vector o = (o_1, o_2, …, o_M) and the expected vector d = (d_1, d_2, …, d_M), wherein k = 1, 2, …, M, M represents the total number of neurons in the output layer of the face recognition network, o_k represents the actual output value of the k-th neuron in the output layer, and d_k represents the expected output value of the k-th neuron in the output layer.

The error value E is:

E = (1/2) · Σ_{k=1}^{M} (d_k − o_k)^2,  with  o_k = 1 / (1 + e^(−λ·S_k))  and  S_k = Σ_{j=1}^{L} w_jk · y_j − θ_k

wherein e represents the natural constant, λ represents the steepness factor, j = 1, 2, …, L, L represents the total number of neurons in the hidden layer, S_k represents the intermediate coefficient, w_jk represents the weight between the j-th neuron in the hidden layer and the k-th neuron in the output layer, y_j represents the output of the j-th neuron in the hidden layer after the n-th feature vector is input into the face recognition network, and θ_k represents the first threshold corresponding to the k-th neuron in the output layer.
In one possible embodiment, updating the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the threshold in the face recognition network includes:
determining the adjustment quantity of the weight between the input layer and the hidden layer of the face recognition network, the weight between the hidden layer and the output layer and the threshold, wherein the adjustment quantity is as follows:
Δw_jk(t) = η · δ_k · y_j + α · Δw_jk(t−1)
Δv_ij(t) = η · δ_j · x_i + α · Δv_ij(t−1)
Δθ_k(t) = −η · δ_k + α · Δθ_k(t−1)
Δγ_j(t) = −η · δ_j + α · Δγ_j(t−1)

wherein Δw_jk(t) represents the adjustment amount of the weight w_jk at the t-th training and Δw_jk(t−1) its adjustment amount at the (t−1)-th training; η represents the learning rate; δ_k represents the output error term of the k-th neuron in the output layer; v_ij represents the weight between the i-th neuron in the input layer and the j-th neuron in the hidden layer, i = 1, 2, …, P, P represents the total number of neurons in the input layer; x_i represents the output of the i-th neuron in the input layer after the n-th feature vector is input into the face recognition network; δ_j represents the output error term of the j-th neuron in the hidden layer; θ_k represents the first threshold corresponding to the k-th neuron in the output layer, Δθ_k(t) and Δθ_k(t−1) its adjustment amounts at the t-th and (t−1)-th trainings; γ_j represents the second threshold corresponding to the j-th neuron in the hidden layer, Δγ_j(t) and Δγ_j(t−1) its adjustment amounts at the t-th and (t−1)-th trainings; and α represents the momentum coefficient.

In this embodiment, the momentum terms α·Δw_jk(t−1), α·Δv_ij(t−1), α·Δθ_k(t−1) and α·Δγ_j(t−1) are added, which reduces the oscillation trend in the training process and improves the training speed of the face recognition network.
And updating the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold value in the face recognition network according to the adjustment amount.
Optionally, the updating the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer, and the threshold in the face recognition network includes:
w_jk(t+1) = w_jk(t) + Δw_jk(t)
v_ij(t+1) = v_ij(t) + Δv_ij(t)
θ_k(t+1) = θ_k(t) + Δθ_k(t)
γ_j(t+1) = γ_j(t) + Δγ_j(t)

wherein w_jk(t+1) represents the updated weight between the j-th neuron in the hidden layer and the k-th neuron in the output layer at the (t+1)-th training and w_jk(t) that weight at the t-th training; v_ij(t+1) and v_ij(t) represent the updated and previous weight between the i-th neuron in the input layer and the j-th neuron in the hidden layer; θ_k(t+1) and θ_k(t) represent the updated and previous first threshold; and γ_j(t+1) and γ_j(t) represent the updated and previous second threshold.
In this embodiment, a dynamic learning rate may be adopted to further improve the convergence rate, specifically:
wherein the learning rate η(t+1) of the (t+1)-th training is obtained from the learning rate η(t) of the t-th training and the error values E(t) and E(t−1) of the t-th and (t−1)-th trainings; for example, the learning rate is increased when E(t) < E(t−1) and decreased otherwise.
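A minimal sketch of one common adaptive rule of this kind (the specific growth and shrink factors are assumptions, not the patent's formula):

```python
def update_learning_rate(eta, err_now, err_prev, up=1.05, down=0.7):
    """Grow the learning rate while the error keeps falling,
    shrink it when the error rises."""
    return eta * up if err_now < err_prev else eta * down
```
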
The output y_j of the j-th neuron in the hidden layer after the n-th feature vector is input into the face recognition network is:

y_j = 1 / (1 + e^(−λ · (Σ_{i=1}^{P} v_ij · x_i − γ_j)))
example 2
As shown in fig. 2, the present embodiment provides an identity recognition apparatus based on image recognition, which can be used to implement the identity recognition method disclosed in embodiment 1, and includes a construction module 21, a training module 22, an obtaining module 23, and a recognition module 24.
The construction module 21 is configured to construct a portrait base, where the portrait base includes at least one face image corresponding to a person, and each face image corresponds to one identity information.
The training module 22 is configured to take a plurality of face images from the face image library as training face images, construct a face recognition network, perform dimension reduction on the training face images, and train the face recognition network by using the training face images after dimension reduction.
The obtaining module 23 is configured to obtain a face image to be recognized, where the face image of a person corresponding to the face image to be recognized exists in the face library.
The recognition module 24 is configured to recognize a face image to be recognized through a face recognition network, so as to obtain a face recognition result and corresponding identity information.
In one possible embodiment, the training module 22 is specifically configured to use a BP neural network as a face recognition network; preprocessing a training face image to obtain a preprocessed image; performing dimensionality reduction on the preprocessed image to obtain a feature vector of the preprocessed image; and setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector.
Optionally, the preprocessing of the training face image includes: carrying out graying, inclination correction, median filtering and normalization operations on the training face image.
Optionally, the performing dimension reduction on the preprocessed image to obtain a feature vector of the preprocessed image includes:
A covariance matrix C of the preprocessed images is constructed, the covariance matrix C being:

C = (1/N) · Σ_{n=1}^{N} (x_n − x̄)(x_n − x̄)^T

wherein n = 1, 2, …, N, N represents the total number of preprocessed images, x_n represents the vector of the n-th preprocessed image, x̄ represents the average face vector, and T represents the transpose symbol.

The eigenvectors u and eigenvalues λ of the covariance matrix C are obtained; the eigenvectors u and eigenvalues λ are in one-to-one correspondence.

All eigenvalues of the covariance matrix C are arranged in order from large to small, the first m eigenvalues are taken, and the m eigenvectors corresponding to the first m eigenvalues form a feature space U = [u_1, u_2, …, u_m], wherein u_1, u_2 and u_m represent the eigenvectors corresponding to the first, second and m-th sorted eigenvalues respectively.

The projection of each preprocessed image in the feature space U is obtained and taken as the feature vector of the preprocessed image:

y_n = U^T (x_n − x̄)
optionally, setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector, including: setting feature vectorsCorresponding expected vector(ii) a Feature vectorObtaining an actual output vector as an input vector for a face recognition network(ii) a According to the actual output vectorAnd an expectation vectorObtaining an error valueE(ii) a Determining the error valueEIf the weight is within the threshold range, finishing the training of the face recognition network, otherwise, finishing the weight between the input layer and the hidden layer and the weight between the hidden layer and the output layer in the face recognition networkUpdating the sum threshold value and obtaining the actual output vector again。
Actual output vectorExpectation vector(ii) a Wherein the content of the first and second substances,k=1,2,…,M,Mrepresenting the total number of the neurons corresponding to the output layer of the face recognition network,indicating the second in the output layerkThe actual output value of the individual neuron element,indicating the second in the output layerkThe expected output value of the individual neuron.
Optionally, the error value E is:

E = (1/2) · Σ_{k=1..M} (d_k − o_k)², with o_k = 1/(1 + e^(−λ·net_k)) and net_k = Σ_{j=1..L} w_jk · y_j − θ_k,

wherein e denotes the natural constant, λ denotes the steepness factor, j = 1, 2, …, L, L represents the total number of neurons in the hidden layer, net_k represents the intermediate coefficient, w_jk denotes the weight between the j-th neuron in the hidden layer and the k-th neuron in the output layer, y_j denotes the output of the j-th neuron in the hidden layer after the n-th feature vector Y_n is input into the face recognition network, and θ_k denotes the first threshold corresponding to the k-th neuron in the output layer.
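As a concrete check of how the steepness factor enters the error computation, here is a minimal sketch assuming the standard sum-of-squares error and a λ-steepened sigmoid; the patent's own formula is only available as an image, so this form is an assumption consistent with the symbols listed above:

```python
import numpy as np

def sigmoid(x, lam=1.0):
    # lam is the steepness factor: larger lam gives a steeper transition,
    # which is what speeds up convergence when outputs saturate
    return 1.0 / (1.0 + np.exp(-lam * x))

def forward_error(y_hidden, w_out, theta_out, expected, lam=1.0):
    """y_hidden: (L,) hidden-layer outputs; w_out: (L, M) hidden-to-output
    weights; theta_out: (M,) first thresholds; expected: (M,) expected vector."""
    net = y_hidden @ w_out - theta_out              # intermediate coefficient net_k
    actual = sigmoid(net, lam)                      # actual output o_k
    error = 0.5 * np.sum((expected - actual) ** 2)  # sum-of-squares error E
    return actual, error
```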
Optionally, the updating the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer, and the threshold in the face recognition network includes:
Determining the adjustment amounts of the weights between the input layer and the hidden layer of the face recognition network, the weights between the hidden layer and the output layer, and the thresholds, the adjustment amounts being as follows:

wherein Δw_jk(t) represents the adjustment amount of the weight w_jk in the t-th training, Δw_jk(t−1) represents the adjustment amount of the weight w_jk in the (t−1)-th training, η denotes the learning rate, δ_k denotes the output error term of the k-th neuron in the output layer, w_ij denotes the weight between the i-th neuron in the input layer and the j-th neuron in the hidden layer, i = 1, 2, …, I, I represents the total number of neurons in the input layer; x_i denotes the output of the i-th neuron in the input layer after the n-th feature vector Y_n is input into the face recognition network; Δw_ij(t) represents the adjustment amount of the weight w_ij in the t-th training, Δw_ij(t−1) represents the adjustment amount of the weight w_ij in the (t−1)-th training; δ_j denotes the output error term of the j-th neuron in the hidden layer, θ_k denotes the first threshold corresponding to the k-th neuron in the output layer, Δθ_k(t) and Δθ_k(t−1) represent the adjustment amounts of the first threshold θ_k in the t-th and (t−1)-th trainings, θ_j denotes the second threshold corresponding to the j-th neuron in the hidden layer, and Δθ_j(t) and Δθ_j(t−1) represent the adjustment amounts of the second threshold θ_j in the t-th and (t−1)-th trainings.
And updating the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold value in the face recognition network according to the adjustment amount.
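The effect of the momentum item can be isolated in a small sketch: the current step is blended with the adjustment amount from the previous training, which damps oscillation between successive steps. The momentum coefficient `alpha` and the gradient term passed in are assumptions for illustration, since the patent's adjustment formulas are only available as images:

```python
def momentum_update(w, grad_term, prev_delta, eta=0.1, alpha=0.9):
    """One momentum-smoothed adjustment: the current step eta * grad_term
    is combined with the previous training's adjustment prev_delta.
    Returns the updated weight and the new adjustment amount."""
    delta = eta * grad_term + alpha * prev_delta
    return w + delta, delta
```

With a constant gradient term the adjustment grows over successive trainings, accelerating along a consistent descent direction while cancelling sign-alternating (oscillating) steps.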
After the n-th feature vector Y_n is input into the face recognition network, the output y_j of the j-th neuron in the hidden layer is: y_j = 1/(1 + e^(−λ · (Σ_{i=1..I} w_ij · x_i − θ_j))).
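Putting the pieces together, a toy end-to-end BP trainer under the same assumptions; the layer sizes, learning rate η, momentum coefficient α, steepness factor λ, and stopping threshold are all illustrative choices, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def train_bp(features, expected, hidden=8, eta=0.5, alpha=0.9,
             lam=1.0, err_threshold=1e-2, max_epochs=5000):
    """Tiny BP trainer: features (N, I), expected (N, M).
    Stops when the error value E falls within the threshold."""
    n_in, n_out = features.shape[1], expected.shape[1]
    w_ih = rng.normal(0, 0.5, (n_in, hidden))    # input-to-hidden weights
    w_ho = rng.normal(0, 0.5, (hidden, n_out))   # hidden-to-output weights
    th_h = np.zeros(hidden)                      # second thresholds
    th_o = np.zeros(n_out)                       # first thresholds
    d_ih = np.zeros_like(w_ih); d_ho = np.zeros_like(w_ho)
    E = np.inf
    for _ in range(max_epochs):
        y = sigmoid(features @ w_ih - th_h, lam)     # hidden-layer outputs y_j
        o = sigmoid(y @ w_ho - th_o, lam)            # actual outputs o_k
        err = expected - o
        E = 0.5 * np.sum(err ** 2)
        if E < err_threshold:                        # within threshold: done
            break
        delta_o = lam * err * o * (1 - o)            # output error terms
        delta_h = lam * (delta_o @ w_ho.T) * y * (1 - y)  # hidden error terms
        d_ho = eta * (y.T @ delta_o) + alpha * d_ho  # momentum-smoothed steps
        d_ih = eta * (features.T @ delta_h) + alpha * d_ih
        w_ho += d_ho; w_ih += d_ih
        th_o -= eta * delta_o.sum(axis=0)            # plain threshold updates
        th_h -= eta * delta_h.sum(axis=0)
    return w_ih, w_ho, th_h, th_o, E
```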
example 3
As shown in Fig. 3, an identity recognition device based on image recognition is provided; the recognition device 30 may include a memory 31 and a processor 32. Illustratively, the memory 31 and the processor 32 are interconnected by a bus 33.
The image-recognition-based identity recognition device in the embodiment of Fig. 3 may implement the technical solution of Embodiment 1; its implementation principle and beneficial effects are similar and are not repeated here.
Example 4
This embodiment provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the identity recognition method based on image recognition described in Embodiment 1.
Example 5
Embodiments of the present application may also provide a computer program product comprising a computer program which, when executed by a processor, implements the identity recognition method based on image recognition described in Embodiment 1.
The invention provides an identity recognition method, device, and equipment based on image recognition, which recognize human faces in order to perform identity recognition. When determining the adjustment amounts of the weights and thresholds, a momentum item is introduced, which reduces the oscillation tendency during training and increases the training speed of the face recognition network. A steepness factor is introduced into the calculation of the error value, which accelerates convergence. By reducing the dimension of the training face images, the invention avoids the non-full-rank problem that arises in small-sample training, so that the face recognition network trains more effectively.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (6)
1. An identity recognition method based on image recognition is characterized by comprising the following steps:
constructing a portrait base, wherein the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to identity information;
taking a plurality of face images from a face image library as training face images, constructing a face recognition network, reducing the dimensions of the training face images, and training the face recognition network by adopting the training face images subjected to dimension reduction;
acquiring a face image to be recognized, wherein the face image of a person corresponding to the face image to be recognized exists in the face image library;
identifying a face image to be identified through a face identification network to obtain a face identification result and corresponding identity information;
the constructing of the face recognition network, the dimension reduction of the training face image, and the training of the face recognition network by adopting the training face image after the dimension reduction comprise:
adopting a BP neural network as a face recognition network;
preprocessing a training face image to obtain a preprocessed image;
performing dimensionality reduction on the preprocessed image to obtain a feature vector of the preprocessed image;
setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector;
the performing the dimensionality reduction on the preprocessed image to obtain the feature vector of the preprocessed image includes:
constructing a covariance matrix C of the preprocessed images, the covariance matrix C being: C = (1/N) · Σ_{n=1..N} (X_n − X̄)(X_n − X̄)^T,
wherein n = 1, 2, …, N, N represents the total number of preprocessed images, X_n denotes the vector of the n-th preprocessed image, X̄ denotes the average face vector, and T denotes the transpose symbol;
obtaining the eigenvectors and eigenvalues of the covariance matrix C, the eigenvectors corresponding one-to-one with the eigenvalues;
sorting all the eigenvalues of the covariance matrix C in descending order, taking the first m eigenvalues, and forming the feature space U = [u1, u2, …, um] from the m eigenvectors corresponding to those first m eigenvalues, wherein u1 denotes the eigenvector corresponding to the first sorted eigenvalue, u2 denotes the eigenvector corresponding to the second sorted eigenvalue, and um denotes the eigenvector corresponding to the m-th sorted eigenvalue;
obtaining the projection of the preprocessed image onto the feature space U and taking the projection as the feature vector of the preprocessed image, the feature vector Y_n being: Y_n = U^T · (X_n − X̄);
the setting of the expected vector corresponding to the feature vector and the training of the face recognition network with the feature vector and the expected vector comprise: setting the expected vector D corresponding to the feature vector Y; inputting the feature vector Y into the face recognition network as the input vector to obtain the actual output vector O; obtaining the error value E from the actual output vector O and the expected vector D; determining whether the error value E is within the threshold range: if it is, the training of the face recognition network is finished; otherwise, updating the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds in the face recognition network, and obtaining the actual output vector again;

the actual output vector O = (o_1, o_2, …, o_M) and the expected vector D = (d_1, d_2, …, d_M), wherein k = 1, 2, …, M, M represents the total number of neurons in the output layer of the face recognition network, o_k denotes the actual output value of the k-th neuron in the output layer, and d_k denotes the expected output value of the k-th neuron in the output layer;
the error value E is: E = (1/2) · Σ_{k=1..M} (d_k − o_k)², with o_k = 1/(1 + e^(−λ·net_k)) and net_k = Σ_{j=1..L} w_jk · y_j − θ_k,
wherein e denotes the natural constant, λ denotes the steepness factor, j = 1, 2, …, L, L represents the total number of neurons in the hidden layer, net_k represents the intermediate coefficient, w_jk denotes the weight between the j-th neuron in the hidden layer and the k-th neuron in the output layer, y_j denotes the output of the j-th neuron in the hidden layer after the n-th feature vector Y_n is input into the face recognition network, and θ_k denotes the first threshold corresponding to the k-th neuron in the output layer.
2. The identity recognition method based on image recognition according to claim 1, wherein the preprocessing the training face image comprises: and carrying out graying, inclination correction, median filtering and normalization operation on the training face image.
3. The method for identifying an identity based on image recognition according to claim 1, wherein the updating of the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold in the face recognition network comprises:
determining the adjustment amounts of the weights between the input layer and the hidden layer of the face recognition network, the weights between the hidden layer and the output layer, and the thresholds, the adjustment amounts being as follows:

wherein Δw_jk(t) represents the adjustment amount of the weight w_jk in the t-th training, Δw_jk(t−1) represents the adjustment amount of the weight w_jk in the (t−1)-th training, η denotes the learning rate, δ_k denotes the output error term of the k-th neuron in the output layer, w_ij denotes the weight between the i-th neuron in the input layer and the j-th neuron in the hidden layer, i = 1, 2, …, I, I represents the total number of neurons in the input layer; x_i denotes the output of the i-th neuron in the input layer after the n-th feature vector Y_n is input into the face recognition network; Δw_ij(t) represents the adjustment amount of the weight w_ij in the t-th training, Δw_ij(t−1) represents the adjustment amount of the weight w_ij in the (t−1)-th training; δ_j denotes the output error term of the j-th neuron in the hidden layer, θ_k denotes the first threshold corresponding to the k-th neuron in the output layer, Δθ_k(t) and Δθ_k(t−1) represent the adjustment amounts of the first threshold θ_k in the t-th and (t−1)-th trainings, θ_j denotes the second threshold corresponding to the j-th neuron in the hidden layer, and Δθ_j(t) and Δθ_j(t−1) represent the adjustment amounts of the second threshold θ_j in the t-th and (t−1)-th trainings;
and updating the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold value in the face recognition network according to the adjustment amount.
4. The identity recognition method based on image recognition according to claim 3, wherein the steepness factor λ is:
the first mentionednA feature vectorAfter inputting the face recognition network, the first in the hidden layerjOutput of individual neuronComprises the following steps:
5. An identity recognition device based on image recognition, configured to implement the identity recognition method of any one of claims 1 to 4, comprising a construction module, a training module, an acquisition module, and a recognition module;
the construction module is used for constructing a portrait base, the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to one identity information;
the training module is used for taking a plurality of face images from the face image library as training face images, constructing a face recognition network, reducing the dimension of the training face images, and training the face recognition network by adopting the training face images after dimension reduction;
the acquisition module is used for acquiring a face image to be recognized, and the face image of a person corresponding to the face image to be recognized exists in the face database;
the recognition module is used for recognizing the face image to be recognized through the face recognition network to obtain a face recognition result and corresponding identity information.
6. An identity recognition device based on image recognition is characterized by comprising a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory, causing the processor to perform the identification method of any of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111427391.1A CN113837161B (en) | 2021-11-29 | 2021-11-29 | Identity recognition method, device and equipment based on image recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111427391.1A CN113837161B (en) | 2021-11-29 | 2021-11-29 | Identity recognition method, device and equipment based on image recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113837161A CN113837161A (en) | 2021-12-24 |
CN113837161B true CN113837161B (en) | 2022-02-22 |
Family
ID=78971814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111427391.1A Active CN113837161B (en) | 2021-11-29 | 2021-11-29 | Identity recognition method, device and equipment based on image recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113837161B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101315557A (en) * | 2008-06-25 | 2008-12-03 | 浙江大学 | Propylene polymerization production process optimal soft survey instrument and method based on genetic algorithm optimization BP neural network |
CN107871101A (en) * | 2016-09-23 | 2018-04-03 | 北京眼神科技有限公司 | A kind of method for detecting human face and device |
CN109145817A (en) * | 2018-08-21 | 2019-01-04 | 佛山市南海区广工大数控装备协同创新研究院 | A kind of face In vivo detection recognition methods |
CN109491816A (en) * | 2018-10-19 | 2019-03-19 | 中国船舶重工集团公司第七六研究所 | Knowledge based engineering method for diagnosing faults |
CN110969073A (en) * | 2019-08-23 | 2020-04-07 | 贵州大学 | Facial expression recognition method based on feature fusion and BP neural network |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050216200A1 (en) * | 2004-03-29 | 2005-09-29 | The Govt. of U.S.A. Represented by the Secretary, Department of Health and Human Services | Neural network pattern recognition for predicting pharmacodynamics using patient characteristics |
US11676278B2 (en) * | 2019-09-26 | 2023-06-13 | Intel Corporation | Deep learning for dense semantic segmentation in video with automated interactivity and improved temporal coherence |
US11488007B2 (en) * | 2019-12-06 | 2022-11-01 | International Business Machines Corporation | Building of custom convolution filter for a neural network using an automated evolutionary process |
US11651225B2 (en) * | 2020-05-05 | 2023-05-16 | Mitsubishi Electric Research Laboratories, Inc. | Non-uniform regularization in artificial neural networks for adaptable scaling |
CN112199986A (en) * | 2020-08-20 | 2021-01-08 | 西安理工大学 | Face image recognition method based on local binary pattern multi-distance learning |
Non-Patent Citations (2)
Title |
---|
A Face Recognition Algorithm Based on Improved PCA and BP Neural Networks; Yue Ye et al.; Journal of Taiyuan Normal University (Natural Science Edition); 2021-03-31; Vol. 20, No. 1; pp. 49-54, 68 *
An Improved BP Algorithm Based on the Additional Momentum Method; Wang Shusen et al.; Journal of Jiyuan Vocational and Technical College; 2012-09-30; Vol. 11, No. 3; pp. 9-13 *
Also Published As
Publication number | Publication date |
---|---|
CN113837161A (en) | 2021-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Sargin et al. | Audiovisual synchronization and fusion using canonical correlation analysis | |
CN108416374B (en) | Non-negative matrix factorization method based on discrimination orthogonal subspace constraint | |
CN109255289B (en) | Cross-aging face recognition method based on unified generation model | |
CN110503000B (en) | Teaching head-up rate measuring method based on face recognition technology | |
CN112818850B (en) | Cross-posture face recognition method and system based on progressive neural network and attention mechanism | |
Lip et al. | Comparative study on feature, score and decision level fusion schemes for robust multibiometric systems | |
CN111401105B (en) | Video expression recognition method, device and equipment | |
CN112818764A (en) | Low-resolution image facial expression recognition method based on feature reconstruction model | |
Gomez-Alanis et al. | Performance evaluation of front-and back-end techniques for ASV spoofing detection systems based on deep features | |
Zhang et al. | I-vector based physical task stress detection with different fusion strategies | |
CN113837161B (en) | Identity recognition method, device and equipment based on image recognition | |
Marcel | A symmetric transformation for lda-based face verification | |
CN112329698A (en) | Face recognition method and system based on intelligent blackboard | |
CN115546862A (en) | Expression recognition method and system based on cross-scale local difference depth subspace characteristics | |
Cheng et al. | Ensemble convolutional neural networks for face recognition | |
Basbrain et al. | A neural network approach to score fusion for emotion recognition | |
Tran et al. | Baby learning with vision transformer for face recognition | |
JPH10261083A (en) | Device and method for identifying individual | |
Kundu et al. | A modified BP network using Malsburg learning for rotation and location invariant fingerprint recognition and localization with and without occlusion | |
CN112464916A (en) | Face recognition method and model training method thereof | |
Venkatramaphanikumar et al. | Face Recognition with Modular Two Dimensional PCA under Uncontrolled Illumination Variations | |
CN110991228A (en) | Improved PCA face recognition algorithm resistant to illumination influence | |
Kundu et al. | A modified RBFN based on heuristic based clustering for location invariant fingerprint recognition and localization with and without occlusion | |
WO2021189980A1 (en) | Voice data generation method and apparatus, and computer device and storage medium | |
CN114663965B (en) | Testimony comparison method and device based on two-stage alternative learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |