CN111832402A - Face recognition method and device

Info

Publication number: CN111832402A
Authority: CN (China)
Prior art keywords: vectors, vector, similarity, feature, face
Legal status: Pending
Application number: CN202010498493.1A
Other languages: Chinese (zh)
Inventors: 蔡晓东, 郑淑婷, 黄玳
Current Assignee: Guilin University of Electronic Technology
Original Assignee: Guilin University of Electronic Technology
Priority date: 2020-06-04
Filing date: 2020-06-04
Publication date: 2020-10-27
Application filed by Guilin University of Electronic Technology

Classifications

    • G06V40/168: Image or video recognition; human faces; feature extraction, face representation
    • G06F18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N20/00: Computing arrangements based on specific computational models; machine learning
    • G06V40/172: Image or video recognition; human faces; classification, e.g. identification

Abstract

The invention provides a face recognition method and a face recognition device, wherein the method comprises the following steps: obtaining a plurality of target face pictures from photographing equipment, and performing similarity vector processing on the plurality of target face pictures to obtain a plurality of similarity vectors; performing fusion feature processing on the similarity vectors to obtain a plurality of fusion feature vectors, and performing weight vector calculation on the fusion feature vectors to obtain a plurality of weight vectors; performing enhanced feature calculation on the plurality of fusion feature vectors according to the plurality of weight vectors to obtain an enhanced feature vector; and performing recognition analysis on the enhanced feature vector to obtain a recognition result. The method reduces the recognition inaccuracy caused by large changes in face features, improves the expression effect of the similarity vectors, and at the same time improves the robustness of the target recognition algorithm and the recognition accuracy of the face recognition system.

Description

Face recognition method and device
Technical Field
The invention mainly relates to the technical field of face recognition, in particular to a face recognition method and a face recognition device.
Background
Face recognition comprises two forms, closed-set recognition and open-set recognition. Open-set recognition refers to training a face recognition model to extract features and recognize the identity of a target, with the constraint that the identities in the test set do not intersect with those in the training set. Open-set recognition technology is widely applied in daily life, for example in face recognition algorithms for online payment, access control and the like.
Face recognition algorithms based on deep learning frameworks achieve very good results and are attracting more and more attention. Common face recognition algorithms include FaceNet, SphereFace, CosFace and the like. These methods improve the ability to learn face features by deepening and enlarging the model or by improving the loss function, thereby improving the target recognition accuracy. However, because of factors such as a large age span of the target, disguise and large pose angles, the large variation of face features during open-set recognition often makes it difficult for the recognition accuracy to meet practical requirements.
In summary, although face recognition methods based on end-to-end deep learning greatly improve performance on the target recognition task, they still face the technical problem that targets with large changes in face features are difficult to recognize accurately.
Disclosure of Invention
The invention aims to solve the technical problem of the prior art and provides a face recognition method and a face recognition device.
The technical scheme for solving the technical problems is as follows: a face recognition method comprises the following steps:
obtaining a plurality of target face pictures from photographing equipment, and carrying out similarity vector processing on the plurality of target face pictures to obtain a plurality of similarity vectors;
performing fusion feature processing on each similarity vector to obtain a plurality of fusion feature vectors, and performing weight vector calculation on each fusion feature vector to obtain a plurality of weight vectors;
performing enhanced feature calculation on the plurality of fusion feature vectors according to the plurality of weight vectors to obtain enhanced feature vectors;
and carrying out identification analysis on the enhanced feature vector to obtain an identification result.
Another technical solution of the present invention for solving the above technical problems is as follows: a face recognition apparatus comprising:
the target face picture processing module is used for acquiring a plurality of target face pictures from the photographing equipment and carrying out similarity vector processing on the plurality of target face pictures to obtain a plurality of similarity vectors;
the weight vector processing module is used for respectively carrying out fusion feature processing on each similarity vector to obtain a plurality of fusion feature vectors and respectively carrying out weight vector calculation on each fusion feature vector to obtain a plurality of weight vectors;
the enhanced feature vector calculation module is used for performing enhanced feature calculation on the plurality of fused feature vectors according to the plurality of weight vectors to obtain enhanced feature vectors;
and the identification result obtaining module is used for carrying out identification analysis on the enhanced feature vector to obtain an identification result.
The invention has the beneficial effects that: a plurality of similarity vectors are obtained by performing similarity vector processing on the plurality of target face pictures, which reduces the recognition inaccuracy caused by large changes in face features; a plurality of fusion feature vectors are obtained by performing fusion feature processing on each similarity vector, and a plurality of weight vectors are obtained by performing weight vector calculation on each fusion feature vector; an enhanced feature vector is obtained by performing enhanced feature calculation on the fusion feature vectors according to the weight vectors; and a recognition result is obtained by performing recognition analysis on the enhanced feature vector. The expression effect of the similarity vectors is thereby improved, and the robustness of the target recognition algorithm and the recognition accuracy of the face recognition system are improved at the same time.
Drawings
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention;
fig. 2 is a block diagram of a flow of processing a similarity vector according to an embodiment of the present invention;
fig. 3 is a flow chart of a face recognition method according to an embodiment of the present invention;
fig. 4 is a block diagram of a face recognition apparatus according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention.
As shown in fig. 1, a face recognition method includes the following steps:
obtaining a plurality of target face pictures from photographing equipment, and carrying out similarity vector processing on the plurality of target face pictures to obtain a plurality of similarity vectors;
performing fusion feature processing on each similarity vector to obtain a plurality of fusion feature vectors, and performing weight vector calculation on each fusion feature vector to obtain a plurality of weight vectors;
performing enhanced feature calculation on the plurality of fusion feature vectors according to the plurality of weight vectors to obtain enhanced feature vectors;
and carrying out identification analysis on the enhanced feature vector to obtain an identification result.
It should be understood that the face feature vector is a vector value representing a set of face information.
In this embodiment, the plurality of similarity vectors are obtained by performing similarity vector processing on the plurality of target face pictures, so that the recognition inaccuracy caused by large changes in face features is reduced; the plurality of fusion feature vectors are obtained by performing fusion feature processing on each similarity vector, and the plurality of weight vectors are obtained by performing weight vector calculation on each fusion feature vector; the enhanced feature vector is obtained by performing enhanced feature calculation on the fusion feature vectors according to the weight vectors; and the recognition result is obtained by performing recognition analysis on the enhanced feature vector. The expression effect of the similarity vectors is thereby improved, and the robustness of the target recognition algorithm and the recognition accuracy of the face recognition system are improved at the same time.
Optionally, as an embodiment of the present invention, as shown in fig. 2, the process of performing similarity vector processing on a plurality of target face pictures to obtain a plurality of similarity vectors includes:
respectively training a plurality of target face pictures and a plurality of face comparison pictures in a preset face comparison library by using a pre-trained face recognition model to obtain a plurality of target face feature vectors and a plurality of face comparison feature vectors;
calculating each target face feature vector and a plurality of face comparison feature vectors by using a cosine distance formula to obtain a plurality of similarity vectors to be screened;
screening the similarity vectors to be screened to obtain a plurality of similarity vectors to be selected;
and randomly selecting the plurality of similarity vectors to be selected to obtain a plurality of similarity vectors.
In fig. 2, the pre-trained face recognition model may be a SphereFace model.
It should be understood that the similarity vector to be filtered refers to the cosine distance between the target face feature vector and the face comparison feature vector.
It should be understood that the screening process refers to selecting the top-N entries of the to-be-screened similarity vector: these entries have the highest similarity, and the target association information can be learned from them.
Specifically, in open-set face recognition, frontal face pictures of the targets are carefully selected to form a face library serving as the comparison database; a pre-trained face recognition model is then used to obtain the face comparison feature vectors of the face library and the target face feature vector of the target, and the cosine distance between the target face feature vector and each face comparison feature vector is calculated. For example, the result of comparing a target against the face library may be [0.45, 0.56, 0.78, 0.23, 0.32, 0.43, 0.48], where 0.45 is the to-be-screened similarity between the target and identity number 1 in the face library, and 0.56 is the to-be-screened similarity between the target and identity number 2 in the face library.
The N top-ranked entries of the to-be-screened similarity vector are then selected to form the target's to-be-selected similarity vector. For example, if the to-be-screened similarity vector of the target is [0.45, 0.56, 0.78, 0.23, 0.32, 0.43, 0.48] and N is 3, the resulting to-be-selected similarity vector is [0, 0.56, 0.78, 0, 0, 0, 0.48].
And randomly selecting m similarity vectors to be selected as the input of the model.
Specifically, a pre-trained SphereFace model is first used as the face feature extractor; a face library containing 2000 identities is then built, and the cosine distances of a target to the library are calculated, forming what is called the to-be-screened similarity vector. Finally, for each target, 3 to-be-selected similarity vectors are randomly selected as a training sample of the model.
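As an illustration only, the following is a minimal numpy sketch of the similarity-vector construction described above. The function and variable names (cosine_similarity, build_similarity_vectors), the random features standing in for SphereFace outputs, and the small gallery size are all hypothetical and not taken from the patent itself:

```python
import numpy as np

def cosine_similarity(target_feat, gallery_feats):
    """Cosine similarity between one target feature and every gallery feature."""
    t = target_feat / np.linalg.norm(target_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return g @ t                                             # shape: (gallery_size,)

def build_similarity_vectors(target_feats, gallery_feats, top_n=3, m=3, seed=0):
    """Keep only the top-N gallery similarities of each target picture
    (zeroing the rest), then randomly pick m of these vectors as model input."""
    rng = np.random.default_rng(seed)
    candidates = []
    for feat in target_feats:
        sims = cosine_similarity(feat, gallery_feats)        # to-be-screened similarity vector
        keep = np.argsort(sims)[-top_n:]                     # indices of the N largest entries
        screened = np.zeros_like(sims)
        screened[keep] = sims[keep]                          # to-be-selected similarity vector
        candidates.append(screened)
    candidates = np.stack(candidates)
    picked = rng.choice(len(candidates), size=min(m, len(candidates)), replace=False)
    return candidates[picked]                                # m similarity vectors

# Toy usage: 5 target pictures and a gallery of 7 identities, 128-d features.
target_feats = np.random.randn(5, 128)
gallery_feats = np.random.randn(7, 128)
print(build_similarity_vectors(target_feats, gallery_feats).shape)  # (3, 7)
```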
In the above embodiment, a plurality of target face feature vectors and a plurality of face comparison feature vectors are obtained by training a plurality of target face pictures and a plurality of face comparison pictures in a preset face comparison library respectively by using a pre-trained face recognition model; calculating the feature vectors of each target face and the comparison feature vectors of the plurality of faces by using a cosine distance formula to obtain a plurality of similarity vectors to be screened; screening a plurality of similarity vectors to be screened to obtain a plurality of similarity vectors to be selected; the method comprises the steps of obtaining a plurality of similarity vectors by randomly selecting the similarity vectors to be selected, and strategically selecting the similarity vectors as training samples of a model, so that the influence of inaccurate recognition caused by large face feature change is reduced.
Optionally, as an embodiment of the present invention, the process of performing fusion feature processing on each similarity vector to obtain a plurality of fusion feature vectors, and performing weight vector calculation on each fusion feature vector to obtain a plurality of weight vectors, includes:
respectively inputting each similarity vector into an associated learning layer to obtain a plurality of fusion feature vectors;
and respectively carrying out weight vector calculation on each fusion feature vector according to a mean absolute deviation distribution algorithm to obtain a plurality of weight vectors.
It is to be understood that each element of the weight vector represents a weight factor for the corresponding location, i.e. each element of the fused feature vector is assigned a probability.
Specifically, each similarity vector is input into two AE encoders respectively, and then two outputs are connected in series to combine into a fused feature vector, so as to obtain potential associated feature representation of the similarity vector.
In the above embodiment, a plurality of fusion feature vectors are obtained by inputting each of the similarity vectors to an associated learning layer; and a plurality of weight vectors are obtained by respectively calculating the weight vectors of the fusion characteristic vectors according to a mean absolute deviation distribution algorithm, so that the robustness of a target recognition algorithm is improved, and the recognition accuracy of the face recognition system is improved.
Optionally, as an embodiment of the present invention, the process of respectively inputting each of the similarity vectors into an association learning layer to obtain a plurality of fusion feature vectors includes:
calculating each similarity vector through a first formula to obtain a plurality of fusion feature vectors, wherein the first formula is as follows:
v_i = v_1 ⊕ v_2, where v_1 = AE_1(x_i) and v_2 = AE_2(x_i),

wherein v_i is the ith fused feature vector, x_i is the ith similarity vector, AE_1 is the preset AE encoder 1, AE_2 is the preset AE encoder 2, v_1 is the output vector of AE encoder 1, v_2 is the output vector of AE encoder 2, and ⊕ denotes the series (concatenation) operation.
It should be understood that each similarity vector in the training sample is input separately into the AE_1 encoder and the AE_2 encoder of the association learning layer to obtain the corresponding associated features v_1 and v_2; v_1 and v_2 are then concatenated into a fused feature vector.
In the above embodiment, a plurality of fusion feature vectors are obtained by calculating each similarity vector with the first formula, so that the associated features of the target are learned, the expression effect of the similarity vectors is improved, and more accurate data are provided for the subsequent calculation.
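For concreteness, the sketch below imitates the association learning layer in plain numpy. The two ToyEncoder objects are untrained random linear maps standing in for the preset AE encoders 1 and 2; only the structure (two encoders whose outputs are concatenated, as in the first formula) follows the text above, and all names and dimensions are illustrative:

```python
import numpy as np

class ToyEncoder:
    """Stand-in for a trained AE encoder: one random linear map followed by tanh."""
    def __init__(self, in_dim, out_dim, seed):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((in_dim, out_dim)) * 0.1

    def __call__(self, x):
        return np.tanh(x @ self.w)

def fuse(sim_vectors, ae1, ae2):
    """First formula: v_i = AE_1(x_i) ⊕ AE_2(x_i), i.e. the two encoder
    outputs are concatenated into one fused feature vector per input."""
    return np.concatenate([ae1(sim_vectors), ae2(sim_vectors)], axis=1)

# Toy usage: 3 similarity vectors of length 7, each encoder emitting a 4-d code.
sim_vectors = np.random.rand(3, 7)
ae1, ae2 = ToyEncoder(7, 4, seed=1), ToyEncoder(7, 4, seed=2)
print(fuse(sim_vectors, ae1, ae2).shape)  # (3, 8)
```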
Optionally, as an embodiment of the present invention, the process of respectively performing weight vector calculation on each of the fusion feature vectors according to a mean absolute deviation distribution algorithm to obtain a plurality of weight vectors includes:
respectively carrying out weight vector calculation on each fusion feature vector through a second formula to obtain a plurality of weight vectors, wherein the second formula is as follows:

p_i = exp(-‖v_var,i‖) / ∑_j exp(-‖v_var,j‖), j ∈ (1, 2, …, m),

wherein

v_var,i = |v_i - v_mean| and v_mean = (1/m) ∑_{j=1}^{m} v_j,

and wherein m is the number of fused feature vectors, p_i ∈ R^d is the weight vector of the ith fused feature vector, R^d is the data space, d is the data length, |·| denotes the absolute value, v_var,i ∈ R^d is the absolute deviation (referred to as the variance) of the ith fused feature vector from the mean, v_mean ∈ R^d is the mean of the m fused feature vectors, and v_i is the ith fused feature vector.
It should be appreciated that the mean absolute deviation assignment algorithm is utilized to calculate the assignment probability for each element of the fused feature vector and to recalculate the corresponding fused feature vector based on the probability results.
In the embodiment, the weight vectors of the fusion feature vectors are respectively calculated by the second formula to obtain a plurality of weight vectors, so that the robustness of the algorithm is improved, and the distinguishing feature expression effect of the target is enhanced.
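The following numpy sketch shows one possible reading of the second formula, in which the deviation is kept element-wise so that each weight vector p_i lies in R^d and can later be combined with v_i through a Hadamard product; this interpretation is an assumption, and the function name is illustrative:

```python
import numpy as np

def mad_weights(fused):
    """Mean-absolute-deviation weighting (one reading of the second formula):
    a per-position softmax over the m fused feature vectors, driven by each
    vector's absolute deviation from the mean fused feature vector."""
    v_mean = fused.mean(axis=0)           # mean of the m fused feature vectors
    v_var = np.abs(fused - v_mean)        # element-wise absolute deviation |v_i - v_mean|
    scores = np.exp(-v_var)               # exp(-|v_var,i|), shape (m, d)
    return scores / scores.sum(axis=0)    # weight vectors p_i; each column sums to 1

# Toy usage: 3 fused feature vectors of length 8.
fused = np.random.rand(3, 8)
p = mad_weights(fused)
print(p.shape, np.allclose(p.sum(axis=0), 1.0))  # (3, 8) True
```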
Optionally, as an embodiment of the present invention, the process of performing enhanced feature calculation on the multiple fusion feature vectors according to the multiple weight vectors to obtain an enhanced feature vector includes:
performing enhanced feature calculation on the plurality of weight vectors and the plurality of fusion feature vectors according to a third formula to obtain an enhanced feature vector, wherein the third formula is as follows:
e = ∑_i q_i, i ∈ (1, 2, …, m),

wherein

q_i = p_i ∘ v_i,

and wherein e ∈ R^d is the enhanced feature vector, R^d is the data space, d is the data length, q_i ∈ R^d is the ith reassigned vector, ∘ denotes the Hadamard product, v_i is the ith fused feature vector, and p_i ∈ R^d is the weight vector of the ith fused feature vector.
In the above embodiment, the enhanced feature vector is obtained by calculating the enhanced features according to the plurality of weight vectors and the plurality of fused feature vectors by the third formula, so that the robustness of the algorithm and the recognition accuracy of the face recognition system are improved.
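A short numpy sketch of the third formula follows; the uniform weights in the usage lines are placeholders for the output of the mean absolute deviation step, and the function name is illustrative:

```python
import numpy as np

def enhance(fused, weights):
    """Third formula: q_i = p_i ∘ v_i (Hadamard product), then e = Σ_i q_i."""
    reassigned = weights * fused          # element-wise product per fused vector (q_i)
    return reassigned.sum(axis=0)         # enhanced feature vector e ∈ R^d

# Toy usage: 3 fused vectors of length 8, uniform weights for illustration only.
fused = np.random.rand(3, 8)
weights = np.full((3, 8), 1.0 / 3)
print(enhance(fused, weights).shape)      # (8,)
```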
Optionally, as an embodiment of the present invention, the process of performing recognition analysis on the enhanced feature vector to obtain a recognition result includes:
and inputting the enhanced feature vector into a BN and Dropout combined layer network, and obtaining an identification result through a classification function output algorithm.
Preferably, the classification function output algorithm may be a softmax function.
It is to be understood that BN is BatchNorm, whose role is to accelerate the training convergence process; dropout is a means to alleviate model overfitting, and improves the generalization ability of the model, i.e. the same recognition effect can be achieved on different data.
In the above embodiment, the enhanced feature vector is input into the BN and Dropout combined layer network, and the recognition result is obtained through the classification function output algorithm, so that the expression effect of the similarity vector is improved, and meanwhile, the robustness of the target recognition algorithm and the recognition accuracy of the face recognition system are also improved.
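As a rough illustration of the recognition head, the sketch below applies a simple per-feature normalisation (a stand-in for BatchNorm on a single vector), dropout that is active only during training, a linear layer and a softmax, all in plain numpy. The parameters are random placeholders and the whole function is an assumption, not the trained BN and Dropout combined layer network of the patent:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(enhanced, w, b, gamma=1.0, beta=0.0, drop_p=0.5, training=False, seed=0):
    """Toy head: normalise the feature (BatchNorm stand-in), apply inverted
    dropout only during training, then a linear layer and a softmax over classes."""
    x = gamma * (enhanced - enhanced.mean()) / (enhanced.std() + 1e-5) + beta
    if training:
        rng = np.random.default_rng(seed)
        mask = rng.random(x.shape) >= drop_p
        x = x * mask / (1.0 - drop_p)
    return softmax(x @ w + b)

# Toy usage: an 8-d enhanced feature classified over 2000 gallery identities.
enhanced = np.random.rand(8)
w, b = np.random.randn(8, 2000) * 0.01, np.zeros(2000)
probs = classify(enhanced, w, b)
print(int(probs.argmax()), float(probs.max()))
```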
Optionally, as another embodiment of the present invention, as shown in fig. 3: first, the pre-trained feature extractor obtains the face comparison feature vectors of the face library and the target face feature vector of the target; the to-be-screened similarity vector between the target and the face library is then calculated with the cosine distance formula, this vector being the cosine distances between the target face feature vector and the face comparison feature vectors of the face library; next, the top-N results are selected to form the to-be-selected similarity vectors, M of which are randomly selected as the similarity vectors input to the next-level association learning layer, where each passes through AE encoder 1 and AE encoder 2 and the outputs are concatenated, yielding M fused feature vectors; the M fused feature vectors are then input into the mean absolute deviation distribution algorithm to obtain M weight vectors; next, each of the M weight vectors is point-multiplied with the corresponding fused feature vector to obtain M redistribution vectors, which are added position-wise to form the enhanced feature vector; and finally, the recognition result is output through the lower-level network. A toy end-to-end sketch of this flow on random data is given below.
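Purely for illustration, the compact walk-through below strings the steps of the flow just described together on random data; every dimension, the random "encoders" and the classifier weights are placeholders for trained components, and the element-wise reading of the weighting step is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a gallery of 2000 identities and 3 pictures of one target, 128-d features.
gallery = rng.standard_normal((2000, 128))
target_pics = rng.standard_normal((3, 128))

def l2norm(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

# 1. Similarity vectors: cosine distances, keeping only the top-N entries of each.
sims = l2norm(target_pics) @ l2norm(gallery).T                      # (3, 2000)
top_n = 3
screened = np.zeros_like(sims)
idx = np.argsort(sims, axis=1)[:, -top_n:]
np.put_along_axis(screened, idx, np.take_along_axis(sims, idx, axis=1), axis=1)

# 2. Association learning layer: two random linear "encoders", outputs concatenated.
w1 = rng.standard_normal((2000, 32)) * 0.02
w2 = rng.standard_normal((2000, 32)) * 0.02
fused = np.concatenate([np.tanh(screened @ w1), np.tanh(screened @ w2)], axis=1)

# 3. Mean-absolute-deviation weights and the enhanced feature vector.
v_var = np.abs(fused - fused.mean(axis=0))
p = np.exp(-v_var) / np.exp(-v_var).sum(axis=0)
e = (p * fused).sum(axis=0)                                         # enhanced feature vector

# 4. Toy classification head: linear layer + softmax over the 2000 identities.
w_cls = rng.standard_normal((fused.shape[1], 2000)) * 0.02
logits = e @ w_cls
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("predicted identity:", int(probs.argmax()))
```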
Fig. 4 is a block diagram of a face recognition apparatus according to an embodiment of the present invention.
Optionally, as another embodiment of the present invention, as shown in fig. 4, a face recognition apparatus includes:
the target face picture processing module is used for acquiring a plurality of target face pictures from the photographing equipment and carrying out similarity vector processing on the plurality of target face pictures to obtain a plurality of similarity vectors;
the weight vector processing module is used for respectively carrying out fusion feature processing on each similarity vector to obtain a plurality of fusion feature vectors and respectively carrying out weight vector calculation on each fusion feature vector to obtain a plurality of weight vectors;
the enhanced feature vector calculation module is used for performing enhanced feature calculation on the plurality of fused feature vectors according to the plurality of weight vectors to obtain enhanced feature vectors;
and the identification result obtaining module is used for carrying out identification analysis on the enhanced feature vector to obtain an identification result.
Optionally, as an embodiment of the present invention, the target face picture processing module is specifically configured to:
respectively training a plurality of target face pictures and a plurality of face comparison pictures in a preset face comparison library by using a pre-trained face recognition model to obtain a plurality of target face feature vectors and a plurality of face comparison feature vectors;
calculating each target face feature vector and a plurality of face comparison feature vectors by using a cosine distance formula to obtain a plurality of similarity vectors to be screened;
screening the similarity vectors to be screened to obtain a plurality of similarity vectors to be selected;
and randomly selecting the plurality of similarity vectors to be selected to obtain a plurality of similarity vectors.
Optionally, as an embodiment of the present invention, the weight vector processing module is specifically configured to:
respectively inputting each similarity vector into an associated learning layer to obtain a plurality of fusion feature vectors;
and respectively carrying out weight vector calculation on each fusion feature vector according to a mean absolute deviation distribution algorithm to obtain a plurality of weight vectors.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. It will be understood that the technical solution of the present invention essentially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A face recognition method is characterized by comprising the following steps:
obtaining a plurality of target face pictures from photographing equipment, and carrying out similarity vector processing on the plurality of target face pictures to obtain a plurality of similarity vectors;
performing fusion feature processing on each similarity vector to obtain a plurality of fusion feature vectors, and performing weight vector calculation on each fusion feature vector to obtain a plurality of weight vectors;
performing enhanced feature calculation on the plurality of fusion feature vectors according to the plurality of weight vectors to obtain enhanced feature vectors;
and carrying out identification analysis on the enhanced feature vector to obtain an identification result.
2. The method according to claim 1, wherein the process of performing similarity vector processing on the plurality of target face pictures to obtain a plurality of similarity vectors comprises:
respectively training a plurality of target face pictures and a plurality of face comparison pictures in a preset face comparison library by using a pre-trained face recognition model to obtain a plurality of target face feature vectors and a plurality of face comparison feature vectors;
calculating each target face feature vector and a plurality of face comparison feature vectors by using a cosine distance formula to obtain a plurality of similarity vectors to be screened;
screening the similarity vectors to be screened to obtain a plurality of similarity vectors to be selected;
and randomly selecting the plurality of similarity vectors to be selected to obtain a plurality of similarity vectors.
3. The face recognition method according to claim 1, wherein the process of performing fusion feature processing on each of the similarity vectors to obtain a plurality of fusion feature vectors, and performing weight vector calculation on each of the fusion feature vectors to obtain a plurality of weight vectors, includes:
respectively inputting each similarity vector into an associated learning layer to obtain a plurality of fusion feature vectors;
and respectively carrying out weight vector calculation on each fusion feature vector according to a mean absolute deviation distribution algorithm to obtain a plurality of weight vectors.
4. The face recognition method according to claim 3, wherein the process of inputting each of the similarity vectors into an associated learning layer to obtain a plurality of fused feature vectors comprises:
calculating each similarity vector through a first formula to obtain a plurality of fusion feature vectors, wherein the first formula is as follows:
v_i = v_1 ⊕ v_2, where v_1 = AE_1(x_i) and v_2 = AE_2(x_i),

wherein v_i is the ith fused feature vector, x_i is the ith similarity vector, AE_1 is the preset AE encoder 1, AE_2 is the preset AE encoder 2, v_1 is the output vector of AE encoder 1, v_2 is the output vector of AE encoder 2, and ⊕ denotes the series (concatenation) operation.
5. The face recognition method according to claim 3 or 4, wherein the process of performing weight vector calculation on each of the fused feature vectors according to a mean absolute deviation distribution algorithm to obtain a plurality of weight vectors comprises:
respectively carrying out weight vector calculation on each fusion feature vector through a second formula to obtain a plurality of weight vectors, wherein the second formula is as follows:

p_i = exp(-‖v_var,i‖) / ∑_j exp(-‖v_var,j‖), j ∈ (1, 2, …, m),

wherein

v_var,i = |v_i - v_mean| and v_mean = (1/m) ∑_{j=1}^{m} v_j,

and wherein m is the number of fused feature vectors, p_i ∈ R^d is the weight vector of the ith fused feature vector, R^d is the data space, d is the data length, |·| denotes the absolute value, v_var,i ∈ R^d is the absolute deviation (referred to as the variance) of the ith fused feature vector from the mean, v_mean ∈ R^d is the mean of the m fused feature vectors, and v_i is the ith fused feature vector.
6. The face recognition method of claim 5, wherein the process of performing enhanced feature calculation on the plurality of fused feature vectors according to the plurality of weight vectors to obtain enhanced feature vectors comprises:
performing enhanced feature calculation on the plurality of weight vectors and the plurality of fusion feature vectors according to a third formula to obtain an enhanced feature vector, wherein the third formula is as follows:
e = ∑_i q_i, i ∈ (1, 2, …, m),

wherein

q_i = p_i ∘ v_i,

and wherein e ∈ R^d is the enhanced feature vector, R^d is the data space, d is the data length, q_i ∈ R^d is the ith reassigned vector, ∘ denotes the Hadamard product, v_i is the ith fused feature vector, and p_i ∈ R^d is the weight vector of the ith fused feature vector.
7. The face recognition method according to claim 1 or 6, wherein the process of performing recognition analysis on the enhanced feature vector to obtain a recognition result comprises:
and inputting the enhanced feature vector into a BN and Dropout combined layer network, and obtaining an identification result through a classification function output algorithm.
8. A face recognition apparatus, comprising:
the target face picture processing module is used for acquiring a plurality of target face pictures from the photographing equipment and carrying out similarity vector processing on the plurality of target face pictures to obtain a plurality of similarity vectors;
the weight vector processing module is used for respectively carrying out fusion feature processing on each similarity vector to obtain a plurality of fusion feature vectors and respectively carrying out weight vector calculation on each fusion feature vector to obtain a plurality of weight vectors;
the enhanced feature vector calculation module is used for performing enhanced feature calculation on the plurality of fused feature vectors according to the plurality of weight vectors to obtain enhanced feature vectors;
and the identification result obtaining module is used for carrying out identification analysis on the enhanced feature vector to obtain an identification result.
9. The face recognition apparatus of claim 8, wherein the target face picture processing module is specifically configured to:
respectively training a plurality of target face pictures and a plurality of face comparison pictures in a preset face comparison library by using a pre-trained face recognition model to obtain a plurality of target face feature vectors and a plurality of face comparison feature vectors;
calculating each target face feature vector and a plurality of face comparison feature vectors by using a cosine distance formula to obtain a plurality of similarity vectors to be screened;
screening the similarity vectors to be screened to obtain a plurality of similarity vectors to be selected;
and randomly selecting the plurality of similarity vectors to be selected to obtain a plurality of similarity vectors.
10. The face recognition apparatus of claim 8, wherein the weight vector processing module is specifically configured to:
respectively inputting each similarity vector into an associated learning layer to obtain a plurality of fusion feature vectors;
and respectively carrying out weight vector calculation on each fusion feature vector according to a mean absolute deviation distribution algorithm to obtain a plurality of weight vectors.
Application CN202010498493.1A, priority date 2020-06-04, filed 2020-06-04: Face recognition method and device (status: Pending, published as CN111832402A)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010498493.1A | 2020-06-04 | 2020-06-04 | Face recognition method and device

Publications (1)

Publication Number | Publication Date
CN111832402A | 2020-10-27

Family

Family ID: 72898338

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202010498493.1A | Pending | CN111832402A | 2020-06-04 | 2020-06-04 | Face recognition method and device

Country Status (1)

CN: CN111832402A


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019100436A1 (en) * 2017-11-22 2019-05-31 Zhejiang Dahua Technology Co., Ltd. Methods and systems for face recognition
CN108090433A (en) * 2017-12-12 2018-05-29 厦门集微科技有限公司 Face identification method and device, storage medium, processor
CN109543548A (en) * 2018-10-26 2019-03-29 桂林电子科技大学 A kind of face identification method, device and storage medium
CN109472240A (en) * 2018-11-12 2019-03-15 北京影谱科技股份有限公司 Recognition of face multi-model self-adapting Fusion Features Enhancement Method and device
CN109684951A (en) * 2018-12-12 2019-04-26 北京旷视科技有限公司 Face identification method, bottom library input method, device and electronic equipment
CN109902561A (en) * 2019-01-16 2019-06-18 平安科技(深圳)有限公司 A kind of face identification method and device, robot applied to robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAI HUANG et al.: "Robust face recognition method based on similarity fusion", 2019 6th International Conference on Systems and Informatics (ICSAI) *
刘玮: "Research on face recognition methods under unconstrained conditions" (无约束条件下的人脸识别方法研究), China Excellent Master's Theses Full-text Database, Information Science and Technology series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408348A (en) * 2021-05-14 2021-09-17 桂林电子科技大学 Video-based face recognition method and device and storage medium
CN113407780A (en) * 2021-05-20 2021-09-17 桂林电子科技大学 Target retrieval method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201027