CN113837161B - Identity recognition method, device and equipment based on image recognition - Google Patents

Info

Publication number: CN113837161B
Application number: CN202111427391.1A
Authority: CN (China)
Prior art keywords: face, training, image, vector, output
Legal status: Active (granted)
Inventors: 杨斌, 张胜田
Current and original assignee: Neusoft Institute Guangdong
Other versions: CN113837161A (Chinese, zh)
Application filed by Neusoft Institute Guangdong; priority to CN202111427391.1A; application granted; publication of CN113837161A and CN113837161B.

Classifications

    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (path: G Physics > G06 Computing; calculating or counting > G06F Electric digital data processing > G06F18/00 Pattern recognition > G06F18/20 Analysing > G06F18/24 Classification techniques)
    • G06N3/04: Architecture, e.g. interconnection topology (path: G06N Computing arrangements based on specific computational models > G06N3/00 Computing arrangements based on biological models > G06N3/02 Neural networks)
    • G06N3/084: Backpropagation, e.g. using gradient descent (path: G06N3/08 Learning methods)

Abstract

The invention discloses an identity recognition method, device and equipment based on image recognition, belonging to the field of image recognition. The method comprises the following steps: constructing a portrait base, wherein the portrait base comprises at least one face image for each person and each face image corresponds to identity information; taking a plurality of face images from the portrait base as training face images, constructing a face recognition network, reducing the dimension of the training face images, and training the face recognition network with the dimension-reduced training face images; acquiring a face image to be recognized, a face image of the corresponding person existing in the portrait base; and recognizing the face image to be recognized through the face recognition network to obtain a face recognition result and the corresponding identity information. The invention can be applied to small-sample training, accelerates the training of the face recognition network, achieves higher face recognition accuracy, and realizes identity recognition by combining face recognition with identity information.

Description

Identity recognition method, device and equipment based on image recognition
Technical Field
The invention belongs to the field of image recognition, and particularly relates to an identity recognition method, an identity recognition device and identity recognition equipment based on image recognition.
Background
Face recognition is a biometric technique that identifies a person based on facial feature information. It is also commonly called portrait recognition or facial recognition, and refers to a series of related technologies that use a camera or video camera to collect images or video streams containing faces, automatically detect and track the faces in the images, and then recognize the detected faces.
In the prior art, a large amount of face sample data is usually collected, a neural network is trained on the face sample data, and the trained neural network is then used for face recognition. However, with small samples the training effect of the neural network is poor, so the recognition accuracy drops; moreover, during training the convergence rate is low and the training time is long.
Disclosure of Invention
Aiming at the above defects in the prior art, the identity recognition method, device and equipment based on image recognition provided by the invention solve the above problems in the prior art.
In a first aspect, the present invention provides an identity recognition method based on image recognition, including:
constructing a portrait base, wherein the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to identity information;
taking a plurality of face images from a face image library as training face images, constructing a face recognition network, reducing the dimensions of the training face images, and training the face recognition network by adopting the training face images subjected to dimension reduction;
acquiring a face image to be recognized, wherein the face image of a person corresponding to the face image to be recognized exists in the face image library;
identifying a face image to be identified through a face identification network to obtain a face identification result and corresponding identity information;
the constructing of the face recognition network, the dimension reduction of the training face image, and the training of the face recognition network by adopting the training face image after the dimension reduction comprise:
adopting a BP neural network as a face recognition network;
preprocessing a training face image to obtain a preprocessed image;
performing dimensionality reduction on the preprocessed image to obtain a feature vector of the preprocessed image;
and setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector.
Further, the preprocessing of the training face image includes: carrying out graying, inclination correction, median filtering and normalization operations on the training face image.
Further, performing the dimensionality reduction on the preprocessed image to obtain the feature vector of the preprocessed image includes:

constructing the covariance matrix $C$ of the preprocessed images, the covariance matrix $C$ being:

$$C = \frac{1}{N}\sum_{n=1}^{N}\left(x_n - \bar{x}\right)\left(x_n - \bar{x}\right)^{T}$$

wherein $n = 1, 2, \ldots, N$, $N$ represents the total number of preprocessed images, $x_n$ denotes the vector of the $n$-th preprocessed image, $\bar{x}$ denotes the average face vector, and $T$ denotes transposition;

obtaining the eigenvectors and eigenvalues of the covariance matrix $C$, the eigenvectors and eigenvalues corresponding one to one;

arranging all eigenvalues of the covariance matrix $C$ in descending order and taking the first $m$ eigenvalues, the $m$ eigenvectors corresponding to the first $m$ eigenvalues forming the feature space $U = \left[u_1, u_2, \ldots, u_m\right]$, wherein $u_1$ represents the eigenvector corresponding to the first sorted eigenvalue, $u_2$ represents the eigenvector corresponding to the second sorted eigenvalue, and $u_m$ represents the eigenvector corresponding to the $m$-th sorted eigenvalue;

obtaining the projection of the preprocessed image onto the feature space $U$ and taking the projection as the feature vector $y_n$ of the preprocessed image:

$$y_n = U^{T}\left(x_n - \bar{x}\right)$$
further, the setting of an expected vector corresponding to the feature vector, and training of the face recognition network by using the feature vector and the expected vector include:
setting feature vectors
Figure 144508DEST_PATH_IMAGE010
Corresponding expected vector
Figure 703665DEST_PATH_IMAGE012
Feature vector
Figure 42505DEST_PATH_IMAGE010
Obtaining an actual output vector as an input vector for a face recognition network
Figure 294495DEST_PATH_IMAGE013
According to the actual output vector
Figure 600842DEST_PATH_IMAGE013
And an expectation vector
Figure 698111DEST_PATH_IMAGE012
Obtaining an error valueE
Determining the error valueEIf the weight is not within the threshold range, the training of the face recognition network is finished, otherwise, the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold in the face recognition network are updated, and the actual output vector is obtained again
Figure 389992DEST_PATH_IMAGE013
Further, the actual output vector $o_n = \left(o_1, o_2, \ldots, o_M\right)$ and the expected vector $d_n = \left(d_1, d_2, \ldots, d_M\right)$, wherein $k = 1, 2, \ldots, M$, $M$ represents the total number of neurons in the output layer of the face recognition network, $o_k$ denotes the actual output value of the $k$-th neuron in the output layer, and $d_k$ denotes the expected output value of the $k$-th neuron in the output layer;

the error value $E$ is:

$$E = \frac{1}{2}\sum_{k=1}^{M}\left(d_k - \frac{1}{1 + e^{-S_k/\lambda}}\right)^{2},\qquad S_k = \sum_{j=1}^{L} w_{jk}\,h_j - \theta_k$$

wherein $e$ represents the natural constant, $\lambda$ represents the steepness factor, $j = 1, 2, \ldots, L$, $L$ represents the total number of neurons in the hidden layer, $S_k$ represents the intermediate coefficient, $w_{jk}$ denotes the weight between the $j$-th neuron in the hidden layer and the $k$-th neuron in the output layer, $h_j$ denotes the output of the $j$-th neuron in the hidden layer after the $n$-th feature vector $y_n$ is input to the face recognition network, and $\theta_k$ denotes the first threshold corresponding to the $k$-th neuron in the output layer.
Further, updating the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds in the face recognition network includes:

determining the adjustment amounts of the weights between the input layer and the hidden layer of the face recognition network, the weights between the hidden layer and the output layer, and the thresholds, the adjustment amounts being:

$$\begin{aligned}
\Delta w_{jk}(t) &= \eta\,\delta_k\,h_j + \alpha\,\Delta w_{jk}(t-1)\\
\Delta w_{ij}(t) &= \eta\,\delta_j\,x_i + \alpha\,\Delta w_{ij}(t-1)\\
\Delta\theta_k(t) &= -\eta\,\delta_k + \alpha\,\Delta\theta_k(t-1)\\
\Delta\gamma_j(t) &= -\eta\,\delta_j + \alpha\,\Delta\gamma_j(t-1)
\end{aligned}$$

wherein $\Delta w_{jk}(t)$ represents the adjustment amount of the weight $w_{jk}$ at the $t$-th training iteration and $\Delta w_{jk}(t-1)$ represents its adjustment amount at the $(t-1)$-th training iteration, $\eta$ represents the learning rate, $\alpha$ represents the momentum factor, $\delta_k$ represents the output error term of the $k$-th neuron in the output layer, $w_{ij}$ denotes the weight between the $i$-th neuron in the input layer and the $j$-th neuron in the hidden layer, $i = 1, 2, \ldots, I$, $I$ represents the total number of neurons in the input layer; $x_i$ denotes the output of the $i$-th neuron in the input layer after the $n$-th feature vector $y_n$ is input to the face recognition network; $\Delta w_{ij}(t)$ and $\Delta w_{ij}(t-1)$ represent the adjustment amounts of the weight $w_{ij}$ at the $t$-th and $(t-1)$-th training iterations, $\delta_j$ represents the output error term of the $j$-th neuron in the hidden layer, $\theta_k$ denotes the first threshold corresponding to the $k$-th neuron in the output layer, $\Delta\theta_k(t)$ and $\Delta\theta_k(t-1)$ represent the adjustment amounts of the first threshold $\theta_k$ at the $t$-th and $(t-1)$-th training iterations, $\gamma_j$ denotes the second threshold corresponding to the $j$-th neuron in the hidden layer, and $\Delta\gamma_j(t)$ and $\Delta\gamma_j(t-1)$ represent the adjustment amounts of the second threshold $\gamma_j$ at the $t$-th and $(t-1)$-th training iterations;
and updating the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold value in the face recognition network according to the adjustment amount.
Further, the steepness factor $\lambda$ is adjusted adaptively: when the error value $E$ changes only slightly between successive training iterations while remaining outside the threshold range (that is, training has entered a flat region of the error surface), $\lambda$ is increased so that the net input $S_k/\lambda$ is compressed and the excitation function leaves its saturated region; otherwise $\lambda$ is restored to 1.

The output $h_j$ of the $j$-th neuron in the hidden layer after the $n$-th feature vector $y_n$ is input to the face recognition network is:

$$h_j = f\!\left(\sum_{i=1}^{I} w_{ij}\,x_i - \gamma_j\right)$$

wherein $f$ represents the excitation function, here the sigmoid $f(s) = 1/\left(1 + e^{-s/\lambda}\right)$;

the output error term $\delta_k$ of the output layer is:

$$\delta_k = \frac{1}{\lambda}\left(d_k - o_k\right)o_k\left(1 - o_k\right)$$

and the output error term $\delta_j$ of the hidden layer is:

$$\delta_j = \frac{1}{\lambda}\,h_j\left(1 - h_j\right)\sum_{k=1}^{M}\delta_k\,w_{jk}$$
in a second aspect, the present invention provides an identity recognition apparatus based on image recognition, for implementing the identity recognition method in the first aspect, the identity recognition apparatus includes a construction module, a training module, an acquisition module, and a recognition module;
the construction module is used for constructing a portrait base, the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to one identity information;
the training module is used for taking a plurality of face images from the face image library as training face images, constructing a face recognition network, reducing the dimension of the training face images, and training the face recognition network by adopting the training face images after dimension reduction;
the acquisition module is used for acquiring a face image to be recognized, and the face image of a person corresponding to the face image to be recognized exists in the face database;
the recognition module is used for recognizing the face image to be recognized through the face recognition network to obtain a face recognition result and corresponding identity information.
In a third aspect, the invention provides an identification device based on image recognition, comprising a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory, causing the processor to perform the identification method of the first aspect.
The invention has the beneficial effects that:
(1) The invention provides an identity recognition method, device and equipment based on image recognition, which can recognize human faces and thereby perform identity recognition.
(2) When the adjustment amounts of the weights and thresholds are determined, momentum terms are introduced, which reduces the oscillation trend in the training process and increases the training speed of the face recognition network.
(3) A steepness factor is introduced in the calculation of the error value, which accelerates convergence.
(4) By reducing the dimension of the training face images, the invention solves the non-full-rank problem that arises in small-sample training, so that the training effect of the face recognition network is better.
Drawings
Fig. 1 is a flowchart of an identity recognition method based on image recognition according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an identity recognition apparatus based on image recognition according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an identification device based on image recognition according to an embodiment of the present application.
Reference numerals: 21, construction module; 22, training module; 23, acquisition module; 24, recognition module; 30, identification device; 31, memory; 32, processor; 33, bus.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments. For those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined by the appended claims, and everything produced by using the inventive concept falls under the protection of the invention.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an identity recognition method based on image recognition includes:
s11, constructing a portrait base, wherein the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to identity information.
Optionally, each person corresponds to a plurality of face images, which may include face images taken from the front, the side, the oblique side and other directions. By recognizing a face image, the corresponding identity information can be obtained, thereby achieving the purpose of identity recognition.
S12, taking out a plurality of face images from the face image library as training face images, constructing a face recognition network, reducing the dimensions of the training face images, and training the face recognition network by adopting the training face images after dimension reduction.
Optionally, taking a plurality of face images out of the portrait base as training face images includes: selecting at least one person from the portrait base, and taking out part of the face images corresponding to each selected person to obtain the training face images. It should be noted that "part" here means half or more of that person's face images.
And S13, acquiring the face image to be recognized, wherein the face image of the person corresponding to the face image to be recognized exists in the face image library.
And S14, recognizing the face image to be recognized through a face recognition network to obtain a face recognition result and corresponding identity information.
A portrait base is constructed by collecting a small amount of data, training face images are selected from the portrait base to train the face recognition network, and the face image to be recognized is then recognized by the trained face recognition network to determine the person in the portrait base to whom it corresponds, thereby obtaining the corresponding identity information.
By reducing the dimension of the training face image, the problem of non-full rank generated during small sample training is solved, and the training effect of the face recognition network is better.
In this embodiment, each person in the face image library corresponds to a plurality of face images, and the identity information corresponding to the face images includes a name, an age, a gender, and a number.
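By way of illustration only, such a portrait base can be organized as a mapping from a person's number to that person's identity information and face images. The sketch below is a minimal Python structure assumed for this description; the field names simply follow the identity information listed above (name, age, gender, number) and are not prescribed by the invention.

```python
from dataclasses import dataclass, field

@dataclass
class PersonRecord:
    # Identity information attached to every face image of this person
    # (fields follow this embodiment: name, age, gender, number).
    name: str
    age: int
    gender: str
    number: str
    # Several face images per person (front, side, oblique views),
    # each stored as a flattened grayscale pixel vector.
    face_images: list = field(default_factory=list)

# The portrait base: person number -> record with identity info and images.
portrait_base: dict = {}

def register(record: PersonRecord) -> None:
    portrait_base[record.number] = record

def identity_of(number: str) -> str:
    """Look up identity information once a face has been recognized."""
    p = portrait_base[number]
    return f"{p.name}, {p.age}, {p.gender}, no. {p.number}"
```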
In a possible implementation manner, constructing a face recognition network, performing dimension reduction on a training face image, and training the face recognition network by using the training face image after dimension reduction, includes: adopting a BP neural network as a face recognition network; preprocessing a training face image to obtain a preprocessed image; performing dimensionality reduction on the preprocessed image to obtain a feature vector of the preprocessed image; and setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector.
In one possible implementation, the preprocessing of the training face image includes: carrying out graying, inclination correction, median filtering and normalization operations on the training face image.
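A minimal sketch of this preprocessing chain, assuming OpenCV and numpy are available; the tilt angle used for inclination correction and the normalized image size are illustrative inputs, since the embodiment does not specify how the inclination is estimated.

```python
import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray, tilt_deg: float = 0.0,
               size: tuple = (64, 64)) -> np.ndarray:
    """Graying, inclination correction, median filtering and normalization."""
    # Graying: collapse the colour channels to a single luminance channel.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Inclination correction: rotate about the image centre by the tilt angle.
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), tilt_deg, 1.0)
    gray = cv2.warpAffine(gray, rot, (w, h))

    # Median filtering: suppress salt-and-pepper noise with a 3x3 window.
    gray = cv2.medianBlur(gray, 3)

    # Normalization: fixed size and pixel values scaled to [0, 1].
    gray = cv2.resize(gray, size)
    return gray.astype(np.float32) / 255.0
```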
In a possible implementation, performing dimension reduction processing on the preprocessed image to obtain the feature vector of the preprocessed image includes:

constructing the covariance matrix $C$ of the preprocessed images, the covariance matrix $C$ being:

$$C = \frac{1}{N}\sum_{n=1}^{N}\left(x_n - \bar{x}\right)\left(x_n - \bar{x}\right)^{T}$$

where $n = 1, 2, \ldots, N$, $N$ represents the total number of preprocessed images, $x_n$ denotes the vector of the $n$-th preprocessed image, $\bar{x}$ denotes the average face vector, and $T$ denotes transposition.

The eigenvectors and eigenvalues of the covariance matrix $C$ are obtained, the eigenvectors and eigenvalues corresponding one to one.

All eigenvalues of the covariance matrix $C$ are arranged in descending order and the first $m$ eigenvalues are taken; the $m$ eigenvectors corresponding to the first $m$ eigenvalues form the feature space $U = \left[u_1, u_2, \ldots, u_m\right]$, where $u_1$ represents the eigenvector corresponding to the first sorted eigenvalue, $u_2$ represents the eigenvector corresponding to the second sorted eigenvalue, and $u_m$ represents the eigenvector corresponding to the $m$-th sorted eigenvalue.

The projection of the preprocessed image onto the feature space $U$ is obtained and taken as the feature vector $y_n$ of the preprocessed image:

$$y_n = U^{T}\left(x_n - \bar{x}\right)$$
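The following numpy sketch illustrates this dimension reduction as reconstructed above (flattened image vectors $x_n$, average face $\bar{x}$, feature space $U$ built from the top-$m$ eigenvectors of the covariance matrix $C$); it is an illustrative reading, not a verbatim implementation of the patent.

```python
import numpy as np

def pca_feature_space(X: np.ndarray, m: int):
    """X: N x D matrix, one flattened preprocessed face image per row.
    Returns the average face and the feature space U (D x m)."""
    mean_face = X.mean(axis=0)
    centered = X - mean_face
    C = centered.T @ centered / X.shape[0]   # covariance matrix C (D x D)
    eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:m]    # indices of the top-m eigenvalues
    U = eigvecs[:, order]                    # feature space U
    return mean_face, U

def project(x: np.ndarray, mean_face: np.ndarray, U: np.ndarray) -> np.ndarray:
    """Feature vector y_n = U^T (x_n - mean), the projection onto U."""
    return U.T @ (x - mean_face)
```

For large images the same projection is usually obtained more cheaply by decomposing the N x N Gram matrix of the centered samples rather than the full D x D covariance matrix.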
In the present embodiment, the feature vectors $y_n$ are further processed to improve the performance of the face recognition network. The method comprises the following steps:

the feature vectors $y_n$ corresponding to the same person are grouped into one class, and the intra-class scatter matrix $S_w$ is obtained as:

$$S_w = \sum_{p=1}^{c}\sum_{m=1}^{K}\left(y_m^{(p)} - \bar{y}_p\right)\left(y_m^{(p)} - \bar{y}_p\right)^{T}$$

where $p = 1, 2, \ldots, c$, $c$ represents the total number of persons selected from the portrait base in step S12, $m = 1, 2, \ldots, K$, $K$ represents the total number of training face images corresponding to the same person, $y_m^{(p)}$ denotes the feature vector of the $m$-th training face image of the $p$-th person, $\bar{y}_p$ denotes the mean of the feature vectors of the $p$-th person, and $T$ denotes transposition. The total number of training face images corresponding to each person is the same, namely $K$.

The inter-class scatter matrix $S_b$ is obtained as:

$$S_b = \sum_{p=1}^{c} K\left(\bar{y}_p - \bar{y}\right)\left(\bar{y}_p - \bar{y}\right)^{T}$$

where $\bar{y}$ represents the mean of all feature vectors.

The eigenvalues and eigenvectors of the matrix $S_w^{-1} S_b$ are obtained, the eigenvalues and eigenvectors corresponding one to one; the largest $r$ eigenvalues are chosen, and the eigenvectors corresponding to these $r$ eigenvalues form the projection space $V = \left[v_1, v_2, \ldots, v_r\right]$.

The projection vector $z_n$ of the feature vector $y_n$ in the projection space $V$ is obtained as:

$$z_n = V^{T} y_n$$

After the projection vectors $z_n$ are calculated, the projection vectors $z_n$ are used to train the face recognition network.
Optionally, dimension reduction processing may be performed on all the faces in the face library.
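Read this way, the intra-class/inter-class step is a Fisher-style discriminant projection applied on top of the PCA feature vectors; the sketch below assumes that reading, with `labels` giving the person index of each feature vector.

```python
import numpy as np

def lda_projection_space(Y: np.ndarray, labels: np.ndarray, r: int) -> np.ndarray:
    """Y: feature vectors, one per row; labels: person index per row.
    Returns the projection space V (columns are the chosen eigenvectors)."""
    overall_mean = Y.mean(axis=0)
    d = Y.shape[1]
    S_w = np.zeros((d, d))                    # intra-class scatter matrix
    S_b = np.zeros((d, d))                    # inter-class scatter matrix
    for person in np.unique(labels):
        Yp = Y[labels == person]
        mean_p = Yp.mean(axis=0)
        centered = Yp - mean_p
        S_w += centered.T @ centered
        diff = (mean_p - overall_mean)[:, None]
        S_b += Yp.shape[0] * (diff @ diff.T)  # K images per person
    # Eigen-decomposition of S_w^{-1} S_b; keep the r largest eigenvalues.
    # The PCA step beforehand keeps S_w well-conditioned (the non-full-rank issue).
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_w) @ S_b)
    order = np.argsort(eigvals.real)[::-1][:r]
    return eigvecs[:, order].real             # projection space V

# Projection vector z_n = V^T y_n is then used to train the network.
```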
In one possible implementation, setting an expected vector corresponding to the feature vector and training the face recognition network with the feature vector and the expected vector includes:

setting the expected vector $d_n$ corresponding to the feature vector $y_n$;

taking the feature vector $y_n$ as the input vector of the face recognition network and obtaining the actual output vector $o_n$;

obtaining an error value $E$ according to the actual output vector $o_n$ and the expected vector $d_n$;

determining whether the error value $E$ is within the threshold range: if so, the training of the face recognition network is finished; otherwise, the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer and the thresholds in the face recognition network are updated, and the actual output vector $o_n$ is obtained again. It is worth noting that each training iteration uses a different feature vector $y_n$, that is, a different training face image is used each time.
In one possible implementation, the actual output vector $o_n = \left(o_1, o_2, \ldots, o_M\right)$ and the expected vector $d_n = \left(d_1, d_2, \ldots, d_M\right)$, where $k = 1, 2, \ldots, M$, $M$ represents the total number of neurons in the output layer of the face recognition network, $o_k$ denotes the actual output value of the $k$-th neuron in the output layer, and $d_k$ denotes the expected output value of the $k$-th neuron in the output layer.

The error value $E$ is:

$$E = \frac{1}{2}\sum_{k=1}^{M}\left(d_k - \frac{1}{1 + e^{-S_k/\lambda}}\right)^{2},\qquad S_k = \sum_{j=1}^{L} w_{jk}\,h_j - \theta_k$$

where $e$ represents the natural constant, $\lambda$ represents the steepness factor, $j = 1, 2, \ldots, L$, $L$ represents the total number of neurons in the hidden layer, $S_k$ represents the intermediate coefficient, $w_{jk}$ denotes the weight between the $j$-th neuron in the hidden layer and the $k$-th neuron in the output layer, $h_j$ denotes the output of the $j$-th neuron in the hidden layer after the $n$-th feature vector $y_n$ is input to the face recognition network, and $\theta_k$ denotes the first threshold corresponding to the $k$-th neuron in the output layer.
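Under the symbols reconstructed above (hidden outputs $h_j$, intermediate coefficients $S_k$, thresholds $\theta_k$ and $\gamma_j$, steepness factor $\lambda$), one forward pass and the error value $E$ look as follows in numpy; the layer sizes and the use of matrices for the weights are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(s: np.ndarray, lam: float) -> np.ndarray:
    # Excitation function with steepness factor lam: f(s) = 1 / (1 + e^(-s/lam)).
    return 1.0 / (1.0 + np.exp(-s / lam))

def forward(y_n, W_ih, gamma, W_ho, theta, lam=1.0):
    """y_n: input feature vector (length I); W_ih: I x L input-to-hidden
    weights; W_ho: L x M hidden-to-output weights; gamma, theta: thresholds."""
    h = sigmoid(y_n @ W_ih - gamma, lam)   # hidden outputs h_j
    S = h @ W_ho - theta                   # intermediate coefficients S_k
    o = sigmoid(S, lam)                    # actual output vector o
    return h, S, o

def error_value(d, o):
    # E = 1/2 * sum_k (d_k - o_k)^2 with o_k = f(S_k).
    return 0.5 * np.sum((d - o) ** 2)
```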
In one possible embodiment, updating the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds in the face recognition network includes:

determining the adjustment amounts of the weights between the input layer and the hidden layer of the face recognition network, the weights between the hidden layer and the output layer, and the thresholds, the adjustment amounts being:

$$\begin{aligned}
\Delta w_{jk}(t) &= \eta\,\delta_k\,h_j + \alpha\,\Delta w_{jk}(t-1)\\
\Delta w_{ij}(t) &= \eta\,\delta_j\,x_i + \alpha\,\Delta w_{ij}(t-1)\\
\Delta\theta_k(t) &= -\eta\,\delta_k + \alpha\,\Delta\theta_k(t-1)\\
\Delta\gamma_j(t) &= -\eta\,\delta_j + \alpha\,\Delta\gamma_j(t-1)
\end{aligned}$$

where $\Delta w_{jk}(t)$ represents the adjustment amount of the weight $w_{jk}$ at the $t$-th training iteration and $\Delta w_{jk}(t-1)$ represents its adjustment amount at the $(t-1)$-th training iteration, $\eta$ represents the learning rate, $\alpha$ represents the momentum factor, $\delta_k$ represents the output error term of the $k$-th neuron in the output layer, $w_{ij}$ denotes the weight between the $i$-th neuron in the input layer and the $j$-th neuron in the hidden layer, $i = 1, 2, \ldots, I$, $I$ represents the total number of neurons in the input layer; $x_i$ denotes the output of the $i$-th neuron in the input layer after the $n$-th feature vector $y_n$ is input to the face recognition network; $\Delta w_{ij}(t)$ and $\Delta w_{ij}(t-1)$ represent the adjustment amounts of the weight $w_{ij}$ at the $t$-th and $(t-1)$-th training iterations, $\delta_j$ represents the output error term of the $j$-th neuron in the hidden layer, $\theta_k$ denotes the first threshold corresponding to the $k$-th neuron in the output layer, $\Delta\theta_k(t)$ and $\Delta\theta_k(t-1)$ represent the adjustment amounts of the first threshold $\theta_k$ at the $t$-th and $(t-1)$-th training iterations, $\gamma_j$ denotes the second threshold corresponding to the $j$-th neuron in the hidden layer, and $\Delta\gamma_j(t)$ and $\Delta\gamma_j(t-1)$ represent the adjustment amounts of the second threshold $\gamma_j$ at the $t$-th and $(t-1)$-th training iterations.

In the present embodiment, the momentum terms $\alpha\,\Delta w_{jk}(t-1)$, $\alpha\,\Delta w_{ij}(t-1)$, $\alpha\,\Delta\theta_k(t-1)$ and $\alpha\,\Delta\gamma_j(t-1)$ are added, which reduces the oscillation trend in the training process and improves the training speed of the face recognition network.

The weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer and the thresholds in the face recognition network are then updated according to the adjustment amounts.
Optionally, updating the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds in the face recognition network includes:

$$w_{jk}(t+1) = w_{jk}(t) + \Delta w_{jk}(t),\qquad w_{ij}(t+1) = w_{ij}(t) + \Delta w_{ij}(t),$$

$$\theta_k(t+1) = \theta_k(t) + \Delta\theta_k(t),\qquad \gamma_j(t+1) = \gamma_j(t) + \Delta\gamma_j(t)$$

where $w_{jk}(t+1)$ denotes the updated weight between the $j$-th neuron in the hidden layer and the $k$-th neuron in the output layer used in the $(t+1)$-th training iteration and $w_{jk}(t)$ the weight used in the $t$-th training iteration; $w_{ij}(t+1)$ and $w_{ij}(t)$ likewise denote the weight between the $i$-th neuron in the input layer and the $j$-th neuron in the hidden layer at the $(t+1)$-th and $t$-th training iterations; $\theta_k(t+1)$ and $\theta_k(t)$ denote the updated first threshold at the $(t+1)$-th and $t$-th training iterations; and $\gamma_j(t+1)$ and $\gamma_j(t)$ denote the updated second threshold at the $(t+1)$-th and $t$-th training iterations.
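A sketch of the momentum update reconstructed above: `prev` carries the adjustment amounts of the previous training iteration, `alpha` denotes the momentum factor, and the error terms `delta_k` and `delta_j` are computed as given at the end of this embodiment; the concrete values of `eta` and `alpha` are illustrative, not taken from the patent.

```python
import numpy as np

def momentum_update(W_ho, theta, W_ih, gamma, h, x, delta_k, delta_j,
                    prev, eta=0.1, alpha=0.9):
    """One update of weights/thresholds with momentum terms.
    prev holds the previous adjustment amounts (same shapes as the parameters)."""
    d_W_ho = eta * np.outer(h, delta_k) + alpha * prev["W_ho"]
    d_W_ih = eta * np.outer(x, delta_j) + alpha * prev["W_ih"]
    d_theta = -eta * delta_k + alpha * prev["theta"]
    d_gamma = -eta * delta_j + alpha * prev["gamma"]

    # Apply: parameter(t+1) = parameter(t) + adjustment(t).
    W_ho += d_W_ho
    W_ih += d_W_ih
    theta += d_theta
    gamma += d_gamma

    prev.update(W_ho=d_W_ho, W_ih=d_W_ih, theta=d_theta, gamma=d_gamma)
    return W_ho, theta, W_ih, gamma, prev
```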
In this embodiment, a dynamic learning rate may be adopted to further improve the convergence rate, specifically:

$$\eta(t) = \begin{cases} a\,\eta(t-1), & E(t) < E(t-1)\\ b\,\eta(t-1), & E(t) \geq E(t-1) \end{cases}$$

where $\eta(t)$ denotes the learning rate of the $t$-th training iteration, $\eta(t-1)$ denotes the learning rate of the $(t-1)$-th training iteration, $E(t)$ denotes the error value of the $t$-th training iteration, $E(t-1)$ denotes the error value of the $(t-1)$-th training iteration, and $a > 1$ and $0 < b < 1$ are fixed scaling factors: the learning rate is increased while the error keeps decreasing and reduced as soon as the error increases.
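A minimal sketch of such an error-driven schedule; the scaling factors `a` and `b` are illustrative values chosen for the sketch, not constants taken from the patent.

```python
def adapt_learning_rate(eta_prev: float, E_t: float, E_prev: float,
                        a: float = 1.05, b: float = 0.7) -> float:
    # Grow the rate while the error keeps falling, shrink it when it rises.
    return a * eta_prev if E_t < E_prev else b * eta_prev
```

A call such as `eta = adapt_learning_rate(eta, E_t, E_prev)` would then be made once per training iteration, after the error value E is computed.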
In a possible embodiment, the steepness factor $\lambda$ is adjusted adaptively: when the error value $E$ changes only slightly between successive training iterations while remaining outside the threshold range (that is, training has entered a flat region of the error surface), $\lambda$ is increased so that the net input $S_k/\lambda$ is compressed and the excitation function leaves its saturated region; otherwise $\lambda$ is restored to 1.

The output $h_j$ of the $j$-th neuron in the hidden layer after the $n$-th feature vector $y_n$ is input to the face recognition network is:

$$h_j = f\!\left(\sum_{i=1}^{I} w_{ij}\,x_i - \gamma_j\right)$$

where $f$ represents the excitation function, here the sigmoid $f(s) = 1/\left(1 + e^{-s/\lambda}\right)$;

the output error term $\delta_k$ of the output layer is:

$$\delta_k = \frac{1}{\lambda}\left(d_k - o_k\right)o_k\left(1 - o_k\right)$$

and the output error term $\delta_j$ of the hidden layer is:

$$\delta_j = \frac{1}{\lambda}\,h_j\left(1 - h_j\right)\sum_{k=1}^{M}\delta_k\,w_{jk}$$
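Both error terms can be computed in two lines of numpy, consistent with the steepness-factor sigmoid above, whose derivative is $f(1-f)/\lambda$; shapes follow the sketches earlier in this embodiment.

```python
import numpy as np

def error_terms(d, o, h, W_ho, lam=1.0):
    """delta_k for the output layer, delta_j for the hidden layer."""
    delta_k = (d - o) * o * (1.0 - o) / lam           # output-layer error terms
    delta_j = h * (1.0 - h) * (W_ho @ delta_k) / lam  # hidden-layer error terms
    return delta_k, delta_j
```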
example 2
As shown in fig. 2, the present embodiment provides an identity recognition apparatus based on image recognition, which can be used to implement the identity recognition method disclosed in embodiment 1, and includes a construction module 21, a training module 22, an obtaining module 23, and a recognition module 24.
The construction module 21 is configured to construct a portrait base, where the portrait base includes at least one face image corresponding to a person, and each face image corresponds to one identity information.
The training module 22 is configured to take a plurality of face images from the face image library as training face images, construct a face recognition network, perform dimension reduction on the training face images, and train the face recognition network by using the training face images after dimension reduction.
The obtaining module 23 is configured to obtain a face image to be recognized, where the face image of a person corresponding to the face image to be recognized exists in the face library.
The recognition module 24 is configured to recognize a face image to be recognized through a face recognition network, so as to obtain a face recognition result and corresponding identity information.
In one possible embodiment, the training module 22 is specifically configured to use a BP neural network as a face recognition network; preprocessing a training face image to obtain a preprocessed image; performing dimensionality reduction on the preprocessed image to obtain a feature vector of the preprocessed image; and setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector.
Optionally, preprocessing the training face image includes: carrying out graying, inclination correction, median filtering and normalization operations on the training face image.
Optionally, performing dimension reduction on the preprocessed image to obtain the feature vector of the preprocessed image includes:

constructing the covariance matrix $C$ of the preprocessed images, the covariance matrix $C$ being:

$$C = \frac{1}{N}\sum_{n=1}^{N}\left(x_n - \bar{x}\right)\left(x_n - \bar{x}\right)^{T}$$

where $n = 1, 2, \ldots, N$, $N$ represents the total number of preprocessed images, $x_n$ denotes the vector of the $n$-th preprocessed image, $\bar{x}$ denotes the average face vector, and $T$ denotes transposition.

The eigenvectors and eigenvalues of the covariance matrix $C$ are obtained, the eigenvectors and eigenvalues corresponding one to one.

All eigenvalues of the covariance matrix $C$ are arranged in descending order and the first $m$ eigenvalues are taken; the $m$ eigenvectors corresponding to the first $m$ eigenvalues form the feature space $U = \left[u_1, u_2, \ldots, u_m\right]$, where $u_1$ represents the eigenvector corresponding to the first sorted eigenvalue, $u_2$ represents the eigenvector corresponding to the second sorted eigenvalue, and $u_m$ represents the eigenvector corresponding to the $m$-th sorted eigenvalue.

The projection of the preprocessed image onto the feature space $U$ is obtained and taken as the feature vector $y_n$ of the preprocessed image:

$$y_n = U^{T}\left(x_n - \bar{x}\right)$$
Optionally, setting an expected vector corresponding to the feature vector and training the face recognition network with the feature vector and the expected vector includes: setting the expected vector $d_n$ corresponding to the feature vector $y_n$; taking the feature vector $y_n$ as the input vector of the face recognition network and obtaining the actual output vector $o_n$; obtaining an error value $E$ according to the actual output vector $o_n$ and the expected vector $d_n$; determining whether the error value $E$ is within the threshold range: if so, the training of the face recognition network is finished; otherwise, the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer and the thresholds in the face recognition network are updated, and the actual output vector $o_n$ is obtained again.

The actual output vector $o_n = \left(o_1, o_2, \ldots, o_M\right)$ and the expected vector $d_n = \left(d_1, d_2, \ldots, d_M\right)$, where $k = 1, 2, \ldots, M$, $M$ represents the total number of neurons in the output layer of the face recognition network, $o_k$ denotes the actual output value of the $k$-th neuron in the output layer, and $d_k$ denotes the expected output value of the $k$-th neuron in the output layer.
Optionally, the error value $E$ is:

$$E = \frac{1}{2}\sum_{k=1}^{M}\left(d_k - \frac{1}{1 + e^{-S_k/\lambda}}\right)^{2},\qquad S_k = \sum_{j=1}^{L} w_{jk}\,h_j - \theta_k$$

where $e$ represents the natural constant, $\lambda$ represents the steepness factor, $j = 1, 2, \ldots, L$, $L$ represents the total number of neurons in the hidden layer, $S_k$ represents the intermediate coefficient, $w_{jk}$ denotes the weight between the $j$-th neuron in the hidden layer and the $k$-th neuron in the output layer, $h_j$ denotes the output of the $j$-th neuron in the hidden layer after the $n$-th feature vector $y_n$ is input to the face recognition network, and $\theta_k$ denotes the first threshold corresponding to the $k$-th neuron in the output layer.
Optionally, updating the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer, and the thresholds in the face recognition network includes:

determining the adjustment amounts of the weights between the input layer and the hidden layer of the face recognition network, the weights between the hidden layer and the output layer, and the thresholds, the adjustment amounts being:

$$\begin{aligned}
\Delta w_{jk}(t) &= \eta\,\delta_k\,h_j + \alpha\,\Delta w_{jk}(t-1)\\
\Delta w_{ij}(t) &= \eta\,\delta_j\,x_i + \alpha\,\Delta w_{ij}(t-1)\\
\Delta\theta_k(t) &= -\eta\,\delta_k + \alpha\,\Delta\theta_k(t-1)\\
\Delta\gamma_j(t) &= -\eta\,\delta_j + \alpha\,\Delta\gamma_j(t-1)
\end{aligned}$$

where $\Delta w_{jk}(t)$ represents the adjustment amount of the weight $w_{jk}$ at the $t$-th training iteration and $\Delta w_{jk}(t-1)$ represents its adjustment amount at the $(t-1)$-th training iteration, $\eta$ represents the learning rate, $\alpha$ represents the momentum factor, $\delta_k$ represents the output error term of the $k$-th neuron in the output layer, $w_{ij}$ denotes the weight between the $i$-th neuron in the input layer and the $j$-th neuron in the hidden layer, $i = 1, 2, \ldots, I$, $I$ represents the total number of neurons in the input layer; $x_i$ denotes the output of the $i$-th neuron in the input layer after the $n$-th feature vector $y_n$ is input to the face recognition network; $\Delta w_{ij}(t)$ and $\Delta w_{ij}(t-1)$ represent the adjustment amounts of the weight $w_{ij}$ at the $t$-th and $(t-1)$-th training iterations, $\delta_j$ represents the output error term of the $j$-th neuron in the hidden layer, $\theta_k$ denotes the first threshold corresponding to the $k$-th neuron in the output layer, $\Delta\theta_k(t)$ and $\Delta\theta_k(t-1)$ represent the adjustment amounts of the first threshold $\theta_k$ at the $t$-th and $(t-1)$-th training iterations, $\gamma_j$ denotes the second threshold corresponding to the $j$-th neuron in the hidden layer, and $\Delta\gamma_j(t)$ and $\Delta\gamma_j(t-1)$ represent the adjustment amounts of the second threshold $\gamma_j$ at the $t$-th and $(t-1)$-th training iterations.
And updating the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold value in the face recognition network according to the adjustment amount.
Optionally, the steepness factor $\lambda$ is increased when the error value $E$ changes only slightly between successive training iterations while remaining outside the threshold range, so that the net input $S_k/\lambda$ is compressed and the excitation function leaves its saturated region, and is restored to 1 otherwise.

The output $h_j$ of the $j$-th neuron in the hidden layer after the $n$-th feature vector $y_n$ is input to the face recognition network is:

$$h_j = f\!\left(\sum_{i=1}^{I} w_{ij}\,x_i - \gamma_j\right)$$

where $f$ represents the excitation function, here the sigmoid $f(s) = 1/\left(1 + e^{-s/\lambda}\right)$;

the output error term $\delta_k$ of the output layer is:

$$\delta_k = \frac{1}{\lambda}\left(d_k - o_k\right)o_k\left(1 - o_k\right)$$

and the output error term $\delta_j$ of the hidden layer is:

$$\delta_j = \frac{1}{\lambda}\,h_j\left(1 - h_j\right)\sum_{k=1}^{M}\delta_k\,w_{jk}$$
example 3
As shown in fig. 3, an identification device based on image recognition is provided, and the recognition device 30 may include a memory 31 and a processor 32. Illustratively, the memory 31, the processor 32, and the various parts are interconnected by a bus 33.
The memory 31 stores computer-executable instructions;
the processor 32 executes the computer-executable instructions stored in the memory 31, which causes the processor 32 to perform the identity recognition method based on image recognition described in Embodiment 1.
The image recognition-based identity recognition device in the embodiment of fig. 3 may implement the technical solution in the embodiment 1, and the implementation principle and the beneficial effects thereof are similar, and are not described herein again.
Example 4
The present embodiment provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the computer-readable storage medium is used for implementing the identity recognition method based on image recognition described in embodiment 1.
Example 5
Embodiments of the present application may also provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the method for identity recognition based on image recognition according to embodiment 1 is implemented.
The invention provides an identity recognition method, device and equipment based on image recognition, which can recognize human faces and thereby perform identity recognition. When the adjustment amounts of the weights and thresholds are determined, momentum terms are introduced, which reduces the oscillation trend in the training process and increases the training speed of the face recognition network. A steepness factor is introduced in the calculation of the error value, which accelerates convergence. By reducing the dimension of the training face images, the invention solves the non-full-rank problem that arises in small-sample training, so that the training effect of the face recognition network is better.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (6)

1. An identity recognition method based on image recognition is characterized by comprising the following steps:
constructing a portrait base, wherein the portrait base comprises at least one face image corresponding to one person, and each face image corresponds to identity information;
taking a plurality of face images from a face image library as training face images, constructing a face recognition network, reducing the dimensions of the training face images, and training the face recognition network by adopting the training face images subjected to dimension reduction;
acquiring a face image to be recognized, wherein the face image of a person corresponding to the face image to be recognized exists in the face image library;
identifying a face image to be identified through a face identification network to obtain a face identification result and corresponding identity information;
the constructing of the face recognition network, the dimension reduction of the training face image, and the training of the face recognition network by adopting the training face image after the dimension reduction comprise:
adopting a BP neural network as a face recognition network;
preprocessing a training face image to obtain a preprocessed image;
performing dimensionality reduction on the preprocessed image to obtain a feature vector of the preprocessed image;
setting an expected vector corresponding to the feature vector, and training the face recognition network by using the feature vector and the expected vector;
the performing of the dimensionality reduction on the preprocessed image to obtain the feature vector of the preprocessed image includes:

constructing the covariance matrix $C$ of the preprocessed images, the covariance matrix $C$ being:

$$C = \frac{1}{N}\sum_{n=1}^{N}\left(x_n - \bar{x}\right)\left(x_n - \bar{x}\right)^{T}$$

wherein $n = 1, 2, \ldots, N$, $N$ represents the total number of preprocessed images, $x_n$ denotes the vector of the $n$-th preprocessed image, $\bar{x}$ denotes the average face vector, and $T$ denotes transposition;

obtaining the eigenvectors and eigenvalues of the covariance matrix $C$, the eigenvectors and eigenvalues corresponding one to one;

arranging all eigenvalues of the covariance matrix $C$ in descending order and taking the first $m$ eigenvalues, the $m$ eigenvectors corresponding to the first $m$ eigenvalues forming the feature space $U = \left[u_1, u_2, \ldots, u_m\right]$, wherein $u_1$ represents the eigenvector corresponding to the first sorted eigenvalue, $u_2$ represents the eigenvector corresponding to the second sorted eigenvalue, and $u_m$ represents the eigenvector corresponding to the $m$-th sorted eigenvalue;

obtaining the projection of the preprocessed image onto the feature space $U$ and taking the projection as the feature vector $y_n$ of the preprocessed image:

$$y_n = U^{T}\left(x_n - \bar{x}\right)$$

the setting of the expected vector corresponding to the feature vector and the training of the face recognition network with the feature vector and the expected vector comprise:

setting the expected vector $d_n$ corresponding to the feature vector $y_n$;

taking the feature vector $y_n$ as the input vector of the face recognition network and obtaining the actual output vector $o_n$;

obtaining an error value $E$ according to the actual output vector $o_n$ and the expected vector $d_n$;

determining whether the error value $E$ is within the threshold range: if so, the training of the face recognition network is finished; otherwise, the weights between the input layer and the hidden layer, the weights between the hidden layer and the output layer and the thresholds in the face recognition network are updated, and the actual output vector $o_n$ is obtained again;

the actual output vector $o_n = \left(o_1, o_2, \ldots, o_M\right)$ and the expected vector $d_n = \left(d_1, d_2, \ldots, d_M\right)$, wherein $k = 1, 2, \ldots, M$, $M$ represents the total number of neurons in the output layer of the face recognition network, $o_k$ denotes the actual output value of the $k$-th neuron in the output layer, and $d_k$ denotes the expected output value of the $k$-th neuron in the output layer;

the error value $E$ is:

$$E = \frac{1}{2}\sum_{k=1}^{M}\left(d_k - \frac{1}{1 + e^{-S_k/\lambda}}\right)^{2},\qquad S_k = \sum_{j=1}^{L} w_{jk}\,h_j - \theta_k$$

wherein $e$ represents the natural constant, $\lambda$ represents the steepness factor, $j = 1, 2, \ldots, L$, $L$ represents the total number of neurons in the hidden layer, $S_k$ represents the intermediate coefficient, $w_{jk}$ denotes the weight between the $j$-th neuron in the hidden layer and the $k$-th neuron in the output layer, $h_j$ denotes the output of the $j$-th neuron in the hidden layer after the $n$-th feature vector $y_n$ is input to the face recognition network, and $\theta_k$ denotes the first threshold corresponding to the $k$-th neuron in the output layer.
2. The identity recognition method based on image recognition according to claim 1, wherein the preprocessing of the training face image comprises: carrying out graying, inclination correction, median filtering and normalization operations on the training face image.
3. The method for identifying an identity based on image recognition according to claim 1, wherein the updating of the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold in the face recognition network comprises:
determining the adjustment amounts of the weights between the input layer and the hidden layer of the face recognition network, the weights between the hidden layer and the output layer, and the thresholds, the adjustment amounts being:

$$\begin{aligned}
\Delta w_{jk}(t) &= \eta\,\delta_k\,h_j + \alpha\,\Delta w_{jk}(t-1)\\
\Delta w_{ij}(t) &= \eta\,\delta_j\,x_i + \alpha\,\Delta w_{ij}(t-1)\\
\Delta\theta_k(t) &= -\eta\,\delta_k + \alpha\,\Delta\theta_k(t-1)\\
\Delta\gamma_j(t) &= -\eta\,\delta_j + \alpha\,\Delta\gamma_j(t-1)
\end{aligned}$$

wherein $\Delta w_{jk}(t)$ represents the adjustment amount of the weight $w_{jk}$ at the $t$-th training iteration and $\Delta w_{jk}(t-1)$ represents its adjustment amount at the $(t-1)$-th training iteration, $\eta$ represents the learning rate, $\alpha$ represents the momentum factor, $\delta_k$ represents the output error term of the $k$-th neuron in the output layer, $w_{ij}$ denotes the weight between the $i$-th neuron in the input layer and the $j$-th neuron in the hidden layer, $i = 1, 2, \ldots, I$, $I$ represents the total number of neurons in the input layer; $x_i$ denotes the output of the $i$-th neuron in the input layer after the $n$-th feature vector $y_n$ is input to the face recognition network; $\Delta w_{ij}(t)$ and $\Delta w_{ij}(t-1)$ represent the adjustment amounts of the weight $w_{ij}$ at the $t$-th and $(t-1)$-th training iterations, $\delta_j$ represents the output error term of the $j$-th neuron in the hidden layer, $\theta_k$ denotes the first threshold corresponding to the $k$-th neuron in the output layer, $\Delta\theta_k(t)$ and $\Delta\theta_k(t-1)$ represent the adjustment amounts of the first threshold $\theta_k$ at the $t$-th and $(t-1)$-th training iterations, $\gamma_j$ denotes the second threshold corresponding to the $j$-th neuron in the hidden layer, and $\Delta\gamma_j(t)$ and $\Delta\gamma_j(t-1)$ represent the adjustment amounts of the second threshold $\gamma_j$ at the $t$-th and $(t-1)$-th training iterations;
and updating the weight between the input layer and the hidden layer, the weight between the hidden layer and the output layer and the threshold value in the face recognition network according to the adjustment amount.
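Because each adjustment amount combines a learning-rate term with the adjustment applied at the previous training step, the update rule of claim 3 behaves like gradient descent with momentum. A minimal sketch follows; the momentum coefficient `momentum` is an assumption, since only the learning rate $\eta$ is named explicitly in the surviving text.

```python
import numpy as np

def momentum_update(w, grad_term, prev_delta, lr=0.1, momentum=0.9):
    """Sketch of claim 3's adjustment rule with an additional momentum term.

    grad_term  : eta-free gradient part (e.g. delta_k * b_j for weight w_jk)
    prev_delta : the adjustment applied at the previous (t-th) training step
    Returns the updated parameters and the new adjustment, which is reused
    as prev_delta at the next step.
    """
    delta = lr * grad_term + momentum * prev_delta
    return w + delta, delta

# Usage for the hidden-to-output weights w_jk:
L, M = 8, 3
w_out = np.zeros((L, M))
prev = np.zeros_like(w_out)
delta_k = np.array([0.2, -0.1, 0.05])     # output error terms
b = np.linspace(0.1, 0.8, L)              # hidden outputs
w_out, prev = momentum_update(w_out, np.outer(b, delta_k), prev)
```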
4. The identity recognition method based on image recognition according to claim 3, wherein the steepness factor $\lambda$ is given by:

[formula supplied as an image in the original publication; not recoverable here]

after the $n$-th feature vector $x_n$ is input into the face recognition network, the output $b_j$ of the $j$-th neuron in the hidden layer is:

$b_j = f\!\left( \sum_{i=1}^{N} v_{ij}\, a_i - \gamma_j \right)$

wherein $f$ denotes the excitation function;

the output error term $\delta_k$ of the output layer is:

$\delta_k = (y_k - o_k)\, f'(S_k)$

and the output error term $\delta_j$ of the hidden layer is:

$\delta_j = f'\!\left( \sum_{i=1}^{N} v_{ij}\, a_i - \gamma_j \right) \sum_{k=1}^{M} \delta_k\, w_{jk}$.
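The error terms of claim 4 are the standard chain-rule quantities of backpropagation. The sketch below assumes a sigmoid excitation function (so $f'(x) = f(x)(1 - f(x))$) and omits the steepness factor, whose defining formula is not recoverable from the source images.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_error_terms(a_in, v, gamma, w_out, theta, y_expected):
    """Sketch of claim 4's error terms using generic chain-rule forms.

    a_in  : (N,) input-layer outputs a_i        v     : (N, L) weights v_ij
    gamma : (L,) second thresholds gamma_j      w_out : (L, M) weights w_jk
    theta : (M,) first thresholds theta_k       y_expected : (M,) vector Y
    """
    # Hidden output b_j = f(sum_i v_ij * a_i - gamma_j)
    b = sigmoid(a_in @ v - gamma)
    # Actual output o_k through the same excitation function
    o = sigmoid(b @ w_out - theta)
    # Output-layer error term: delta_k = (y_k - o_k) * f'(S_k)
    delta_k = (y_expected - o) * o * (1.0 - o)
    # Hidden-layer error term: delta_j = f'(net_j) * sum_k delta_k * w_jk
    delta_j = b * (1.0 - b) * (w_out @ delta_k)
    return b, o, delta_k, delta_j
```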
5. An identity recognition device based on image recognition, for implementing the identity recognition method of any one of claims 1 to 4, comprising a construction module, a training module, an acquisition module and a recognition module;
the construction module is used for constructing a portrait base, wherein the portrait base comprises at least one face image per person, and each face image corresponds to a piece of identity information;
the training module is used for taking a plurality of face images from the face image library as training face images, constructing a face recognition network, reducing the dimension of the training face images, and training the face recognition network with the dimension-reduced training face images;
the acquisition module is used for acquiring a face image to be recognized, a face image of the person corresponding to the face image to be recognized existing in the face image library;
the recognition module is used for recognizing the face image to be recognized through the face recognition network to obtain a face recognition result and corresponding identity information.
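A minimal sketch of how claim 5's four modules might be wired together; all class, method, and parameter names are assumptions, and the acquisition module is reduced to any source that yields a face image.

```python
from typing import Callable, Dict, List

class IdentityRecognizer:
    """Sketch of claim 5's module split; names and signatures are assumed."""

    def __init__(self, portrait_base: Dict[str, List], classify: Callable):
        # Construction module output: identity information keyed to stored faces
        self.portrait_base = portrait_base
        # Training module output: a trained classifier mapping image -> identity
        self.classify = classify

    def recognize(self, face_image):
        # Recognition module: face recognition result plus identity information
        identity = self.classify(face_image)
        return identity, self.portrait_base.get(identity)

# Acquisition module stand-in: any face image source feeds recognize().
recognizer = IdentityRecognizer({"person_a": []}, classify=lambda img: "person_a")
print(recognizer.recognize(object()))
```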
6. Identity recognition equipment based on image recognition, characterized by comprising a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the identity recognition method of any one of claims 1 to 4.
CN202111427391.1A 2021-11-29 2021-11-29 Identity recognition method, device and equipment based on image recognition Active CN113837161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111427391.1A CN113837161B (en) 2021-11-29 2021-11-29 Identity recognition method, device and equipment based on image recognition


Publications (2)

Publication Number Publication Date
CN113837161A CN113837161A (en) 2021-12-24
CN113837161B true CN113837161B (en) 2022-02-22

Family

ID=78971814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111427391.1A Active CN113837161B (en) 2021-11-29 2021-11-29 Identity recognition method, device and equipment based on image recognition

Country Status (1)

Country Link
CN (1) CN113837161B (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050216200A1 (en) * 2004-03-29 2005-09-29 The Govt. of U.S.A. Represented by the Secretary, Department of Health and Human Services Neural network pattern recognition for predicting pharmacodynamics using patient characteristics
US11676278B2 (en) * 2019-09-26 2023-06-13 Intel Corporation Deep learning for dense semantic segmentation in video with automated interactivity and improved temporal coherence
US11488007B2 (en) * 2019-12-06 2022-11-01 International Business Machines Corporation Building of custom convolution filter for a neural network using an automated evolutionary process
US11651225B2 (en) * 2020-05-05 2023-05-16 Mitsubishi Electric Research Laboratories, Inc. Non-uniform regularization in artificial neural networks for adaptable scaling
CN112199986A (en) * 2020-08-20 2021-01-08 西安理工大学 Face image recognition method based on local binary pattern multi-distance learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101315557A (en) * 2008-06-25 2008-12-03 浙江大学 Optimal soft sensor and method for a propylene polymerization production process based on a BP neural network optimized by a genetic algorithm
CN107871101A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 Face detection method and device
CN109145817A (en) * 2018-08-21 2019-01-04 佛山市南海区广工大数控装备协同创新研究院 Face liveness detection and recognition method
CN109491816A (en) * 2018-10-19 2019-03-19 中国船舶重工集团公司第七六研究所 Knowledge-based fault diagnosis method
CN110969073A (en) * 2019-08-23 2020-04-07 贵州大学 Facial expression recognition method based on feature fusion and BP neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A face recognition algorithm based on improved PCA and BP neural network; Yue Ye et al.; Journal of Taiyuan Normal University (Natural Science Edition); 2021-03-31; Vol. 20, No. 1; pp. 49-54, 68 *
An improved BP algorithm based on the additional momentum method; Wang Shusen et al.; Journal of Jiyuan Vocational and Technical College; 2012-09-30; Vol. 11, No. 3; pp. 9-13 *

Also Published As

Publication number Publication date
CN113837161A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
Sargin et al. Audiovisual synchronization and fusion using canonical correlation analysis
CN108416374B (en) Non-negative matrix factorization method based on discrimination orthogonal subspace constraint
CN109255289B (en) Cross-aging face recognition method based on unified generation model
CN110503000B (en) Teaching head-up rate measuring method based on face recognition technology
CN112818850B (en) Cross-posture face recognition method and system based on progressive neural network and attention mechanism
Lip et al. Comparative study on feature, score and decision level fusion schemes for robust multibiometric systems
CN111401105B (en) Video expression recognition method, device and equipment
CN112818764A (en) Low-resolution image facial expression recognition method based on feature reconstruction model
Gomez-Alanis et al. Performance evaluation of front-and back-end techniques for ASV spoofing detection systems based on deep features
Zhang et al. I-vector based physical task stress detection with different fusion strategies
CN113837161B (en) Identity recognition method, device and equipment based on image recognition
Marcel A symmetric transformation for lda-based face verification
CN112329698A (en) Face recognition method and system based on intelligent blackboard
CN115546862A (en) Expression recognition method and system based on cross-scale local difference depth subspace characteristics
Cheng et al. Ensemble convolutional neural networks for face recognition
Basbrain et al. A neural network approach to score fusion for emotion recognition
Tran et al. Baby learning with vision transformer for face recognition
JPH10261083A (en) Device and method for identifying individual
Kundu et al. A modified BP network using Malsburg learning for rotation and location invariant fingerprint recognition and localization with and without occlusion
CN112464916A (en) Face recognition method and model training method thereof
Venkatramaphanikumar et al. Face Recognition with Modular Two Dimensional PCA under Uncontrolled Illumination Variations
CN110991228A (en) Improved PCA face recognition algorithm resistant to illumination influence
Kundu et al. A modified RBFN based on heuristic based clustering for location invariant fingerprint recognition and localization with and without occlusion
WO2021189980A1 (en) Voice data generation method and apparatus, and computer device and storage medium
CN114663965B (en) Testimony comparison method and device based on two-stage alternative learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant