CN116386108B - Fairness face recognition method based on instance consistency - Google Patents

Fairness face recognition method based on instance consistency

Info

Publication number
CN116386108B
Authority
CN
China
Prior art keywords
sample
similarity
face
fairness
representing
Prior art date
Legal status
Active
Application number
CN202310304443.9A
Other languages
Chinese (zh)
Other versions
CN116386108A (en)
Inventor
崔振 (Cui Zhen)
孙宇飞 (Sun Yufei)
李勇 (Li Yong)
张桐 (Zhang Tong)
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202310304443.9A
Publication of CN116386108A
Application granted
Publication of CN116386108B
Legal status: Active


Classifications

    • G06V40/172: Human faces; classification, e.g. identification
    • G06N3/0464: Neural networks; convolutional networks [CNN, ConvNet]
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G06V10/764: Image or video recognition using classification, e.g. of video objects
    • G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82: Image or video recognition using neural networks
    • G06V40/168: Feature extraction; face representation
    • Y02T10/40: Engine management systems (cross-sectional Y tag)


Abstract

The invention discloses a fairness face recognition method based on instance consistency, relating to the technical field of image signal processing. The method mainly comprises the following steps: step one, extracting features from a face image sample; step two, using the face features extracted in step one, calculating the similarities of the negative sample pairs formed by each sample and the classifier class centers, and the similarities of the positive sample pairs formed between samples of the same class; step three, using the two kinds of similarity obtained in step two, calculating the false recognition rate (False Positive Rate, FPR) and recall rate (True Positive Rate, TPR) of each face sample; and step four, modifying the face recognition loss function according to the FPR and TPR of each sample obtained in step three, and calculating the consistency loss of the sample FPR and TPR. Compared with classical face recognition methods, the method provided by the invention has instance-level fairness, higher recognition accuracy, lower standard deviation, and superior generalization.

Description

Fairness face recognition method based on instance consistency
Technical Field
The invention relates to the technical field of image signal processing, and in particular to a fairness face recognition method based on instance consistency.
Background
The emergence of convolutional neural networks and the development of deep learning have markedly improved machine performance in face recognition. However, as applications have broadened, the potential unfairness of these systems has raised concern. Existing fairness face recognition methods are designed for the fairness of specific groups, e.g., learning group-independent face features or learning group-adaptive classifiers, which brings three problems. First, groups can be defined in many ways, such as by skin color, gender, or age, and these factors are coupled together, so a fairness model trained on one specific group cannot guarantee fairness on other groups. Second, learning group-independent face identity features usually involves group-feature decoupling, which causes the learned features to lose key information required for recognition and reduces their discriminability. Third, group fairness methods rely on group attribute annotations during training, while in large-scale face recognition training sets, usually none or only a small portion of the data carries group annotations. Therefore, there is a need for a face recognition method that is not designed for a specific group, does not depend on group attribute annotations during training, and can achieve individual-level fairness.
Disclosure of Invention
The invention aims to provide a fairness face recognition method based on instance consistency that is not designed for a specific group and does not depend on group attribute annotations during training, so that the overall recognition accuracy is improved compared with other group fairness methods while the standard deviation of recognition across different groups is reduced.
In order to achieve the above purpose, the invention provides a fairness face recognition method based on example consistency, which comprises the following steps:
firstly, extracting features of a face image to obtain a feature vector of a sample;
step two, calculating the similarity of positive and negative sample pairs according to the face feature vector obtained in the step one;
step three, calculating the false recognition rate of each sample instance according to the similarity of the negative sample pair in the step two, and calculating the recall rate of each sample instance according to the similarity of the positive sample pair in the step two;
and step four, modifying a Softmax loss function adopted by the training face recognition model according to the false recognition rate and recall rate of each sample in the step three, and calculating the consistency loss of each sample instance.
Preferably, in the first step, the method for extracting the face features is as follows:
Let I represent an input face image, x the corresponding face feature, and E(·) the feature extraction module of the face recognition model; the formula is as follows:
x=E(I)
in the above formula, the feature extraction module E (·) adopts a ResNet residual convolutional neural network.
Preferably, in the second step, the calculation formula of the similarity is as follows:
s = S(x_i, x_j)

In the above formula, x_i and x_j represent two unit-normalized sample features (||x_i|| = ||x_j|| = 1), s represents the similarity of the two samples, and S(·) is the similarity metric function.

S(x_i, x_j) = ⟨x_i, x_j⟩ = ||x_i|| · ||x_j|| · cosθ_ij = cosθ_ij

In the above formula, cosine similarity is used as the metric when calculating similarity.
Preferably, the similarity of a negative sample pair is calculated using the sample feature x_i and the classifier class centers W = {w_j | j = 1, 2, 3, ..., C, j ≠ y_i}; the negative sample pairs formed by the two are as follows:

S_n = {S(x_i, w_j) | j = 1, 2, 3, ..., C, j ≠ y_i}

In the above formula, y_i represents the class label of sample x_i, w_j represents the class center vector of the j-th class, C represents the number of training-set classes, and S_n represents the similarity of the negative sample pairs.
Preferably, for the similarity calculation of positive sample pairs, positive sample pairs formed by samples of the same class are used, with the following formula:

S_p = {S(x_i, x_j) | y_i = y_j}

In the above formula, S_p is the similarity of the positive sample pairs and y_i represents the class label of sample x_i.
Preferably, in the third step, the false recognition rate and the recall rate are calculated by selecting a similarity threshold according to the similarities from the second step and computing, for each sample, the proportion of pairs exceeding that threshold:

F_i = |{s ∈ S_n | s > τ}| / (C − 1)

T_i = |{s ∈ S_p | s > τ}| / |S_p|

In the above formulas, s represents a similarity, τ is the similarity threshold, C represents the number of training-set classes (so each sample has C − 1 negative pairs), S_n is the set of negative-pair similarities, and S_p is the set of positive-pair similarities; the false recognition rate and recall rate of each sample are denoted by F_i and T_i respectively.
Preferably, in the fourth step, the method for modifying the Softmax loss function first constrains the consistency of the sample false recognition rate, so that the loss value becomes larger when the sample's false recognition rate becomes higher, and then constrains the consistency of the sample recall rate, so that the loss value becomes larger when the sample's recall rate becomes lower; the modified formula is as follows:

In the above formula, cosθ_{y_i} represents the cosine similarity between the sample and its true class center in the classifier, cosθ_j represents the similarity between the sample and the other class centers, s_1 is the scale used to increase the Softmax loss, s_2 is the scale used to control the two penalty terms, m'_i is the adaptive constraint term calculated from each sample's positive sample pairs, and m_i is the constraint term for each sample's negative sample pairs.
Preferably, the formalized definition of m'_i is as follows:

In the above formula, T_i is the recall rate of the sample, and H(s) is a decreasing function of the similarity.

Preferably, the formalized definition of m_i is as follows:

In the above formula, F_i is the false recognition rate of the sample, and G(s) is an increasing function of the similarity s (s ≥ τ > 0).
Preferably, the training set face pictures used are from an open source face database.
Therefore, the fairness face recognition method based on instance consistency described above has the following beneficial effects:
(1) Unlike existing fairness face recognition methods, the method is not designed for a specific group. By defining and constraining the consistency of the false recognition rate (False Positive Rate, FPR) and recall rate (True Positive Rate, TPR) of each sample, it achieves sample-level fairness, which guarantees the fairness of the model across all kinds of groups and even individuals. Because no group-feature decoupling is involved during training, the method also avoids the common problem of recognition accuracy dropping while recognition bias is being reduced.
(2) Unlike existing fairness face recognition methods, the method needs no group attribute annotations during training. Since the accuracy of a face recognition model depends heavily on the size of the training set (the number of identity classes and image samples), the method can conveniently be trained on large-scale face recognition training sets, yielding a fairness face recognition model that meets practical accuracy requirements while exhibiting low bias for each group and even each individual.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
Fig. 1 is a schematic flow chart of the fairness face recognition method based on instance consistency;
Fig. 2 illustrates the similarity-threshold selection process in the fairness face recognition method based on instance consistency according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the embodiments of the present invention apparent, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are some, but not all, embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the drawings, may be arranged and designed in a variety of different configurations; the following detailed description is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The invention provides a fairness face recognition method based on instance consistency, which comprises the following steps:
Step one, extracting features of a face image to obtain the feature vector of each sample, where the face images of the training set come from an open-source face database. The face feature extraction method is as follows:
Let I represent an input face image, x the corresponding face feature, and E(·) the feature extraction module of the face recognition model; the formula is as follows:
x=E(I)
in the above formula, the feature extraction module E (·) adopts a ResNet residual convolutional neural network.
Step two, calculating the similarity of positive and negative sample pairs from the face feature vectors obtained in step one. The similarity is calculated as follows:

s = S(x_i, x_j)

In the above formula, x_i and x_j represent two unit-normalized sample features (||x_i|| = ||x_j|| = 1), s represents the similarity of the two samples, and S(·) is the similarity metric function. Cosine similarity is used as the metric, with the following formula:

S(x_i, x_j) = ⟨x_i, x_j⟩ = ||x_i|| · ||x_j|| · cosθ_ij = cosθ_ij
For the similarity calculation of negative sample pairs, the sample feature x_i and the classifier class centers W = {w_j | j = 1, 2, 3, ..., C, j ≠ y_i} are first used to form negative sample pairs, where y_i represents the class label of sample x_i, w_j represents the class center vector of the j-th class, C represents the number of training-set classes, and S_n represents the similarity of the negative sample pairs; the formula is as follows:

S_n = {S(x_i, w_j) | j = 1, 2, 3, ..., C, j ≠ y_i}
For the similarity calculation of positive sample pairs, positive sample pairs formed by samples of the same class are used, with the following formula:

S_p = {S(x_i, x_j) | y_i = y_j}

In the above formula, S_p is the similarity of the positive sample pairs and y_i represents the class label of sample x_i.
Step three, calculating the false recognition rate of each sample according to the negative-pair similarities from step two, and the recall rate of each sample according to the positive-pair similarities from step two: a similarity threshold is selected, and the false recognition rate and recall rate of each sample are calculated as follows:

F_i = |{s ∈ S_n | s > τ}| / |S_n|

T_i = |{s ∈ S_p | s > τ}| / |S_p|

In the above formulas, s represents a similarity, τ is the similarity threshold, C represents the number of training-set classes, S_n is the set of negative-pair similarities (|S_n| = C − 1), and S_p is the set of positive-pair similarities; the false recognition rate and recall rate of each sample are denoted by F_i and T_i respectively.
Step four, according to the false recognition rate and recall rate of each sample from step three, the consistency loss of each sample instance is calculated by modifying the Softmax loss function used to train the face recognition model. The modification first constrains the consistency of the sample false recognition rate, so that the loss value becomes larger when the sample's false recognition rate becomes higher, and then constrains the consistency of the sample recall rate, so that the loss value becomes larger when the sample's recall rate becomes lower; the modified formula is as follows:

In the above formula, cosθ_{y_i} represents the cosine similarity between the sample and its true class center in the classifier, cosθ_j represents the similarity between the sample and the other class centers, s_1 is the scale used to increase the Softmax loss, s_2 is the scale used to control the two penalty terms, m'_i is the adaptive constraint term calculated from each sample's positive sample pairs, and m_i is the constraint term for each sample's negative sample pairs;
The formalized definition of m'_i is as follows:

In the above formula, T_i is the recall rate of the sample, and H(s) is a decreasing function of the similarity.

The formalized definition of m_i is as follows:

In the above formula, F_i is the false recognition rate of the sample, and G(s) is an increasing function of the similarity s (s ≥ τ > 0).
Examples
As shown in fig. 1, the invention provides a fairness face recognition method based on example consistency, which comprises the following steps:
the main process of face recognition is to calculate the similarity of the features of two face images, then compare the similarity with a specified threshold, if the similarity is larger than the threshold, the two faces are considered to be the same person, otherwise, the two faces are not considered to be the same person, so the feature extraction must be performed on the face images in the first step of face recognition.
Let I represent an input face image, x represent the corresponding face feature, E (-) represent the feature extraction module of the face recognition model, the formula is as follows:
x=E(I)
In this embodiment, the feature extraction module E(·) adopts a ResNet residual convolutional neural network. The extracted feature x is unit-normalized, which distributes the face features on a hypersphere and yields a better face representation.
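As a minimal sketch of this step, feature extraction followed by unit normalization can be written as below; `toy_backbone` is a hypothetical stand-in for the ResNet extractor E(·), which is not reproduced here:

```python
import numpy as np

def unit_normalize(x):
    """Project a feature vector onto the unit hypersphere (||x|| = 1)."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def extract_feature(image, backbone):
    """x = E(I): run the backbone on the image and unit-normalize the output."""
    return unit_normalize(backbone(image))

# `toy_backbone` is a hypothetical stand-in for the ResNet extractor E(.);
# it just mean-pools image rows so the sketch stays runnable.
toy_backbone = lambda img: img.mean(axis=0)

img = np.random.default_rng(0).normal(size=(112, 112))  # fake 112x112 face crop
x = extract_feature(img, toy_backbone)
print(round(float(np.linalg.norm(x)), 6))  # 1.0
```

Any callable mapping an image to a raw feature vector can replace the toy backbone; the normalization is what places features on the hypersphere.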
Step two, after obtaining the face feature vectors, the similarities of the positive and negative sample pairs need to be calculated. The calculation formula is as follows:
s = S(x_i, x_j)

where x_i and x_j represent two unit-normalized sample features (||x_i|| = ||x_j|| = 1), s represents the similarity of the two samples, and S(·) is the similarity metric function. In this embodiment, cosine similarity is used for the metric, with the following formula:

S(x_i, x_j) = ⟨x_i, x_j⟩ = ||x_i|| · ||x_j|| · cosθ_ij = cosθ_ij
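For unit-normalized features, the cosine-similarity metric above reduces to a plain inner product; a minimal sketch:

```python
import numpy as np

def cosine_similarity(xi, xj):
    """S(x_i, x_j) = <x_i, x_j> = cos(theta_ij) for unit-normalized features."""
    return float(np.dot(xi, xj))

xi = np.array([1.0, 0.0])
xj = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])  # 60 degrees from xi
print(round(cosine_similarity(xi, xj), 3))  # 0.5
```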
For the similarity calculation of negative sample pairs, this embodiment uses the sample feature x_i and the classifier class centers W = {w_j | j = 1, 2, 3, ..., C, j ≠ y_i} to form negative sample pairs, where y_i represents the class label of sample x_i, w_j represents the class center vector of the j-th class, C represents the number of training-set classes, and S_n represents the similarity of the negative sample pairs. The formula is as follows:

S_n = {S(x_i, w_j) | j = 1, 2, 3, ..., C, j ≠ y_i}
For the similarity calculation of positive sample pairs, this embodiment uses positive sample pairs formed by samples of the same class, with the following formula:

S_p = {S(x_i, x_j) | y_i = y_j}

In the above formula, S_p is the similarity of the positive sample pairs and y_i represents the class label of sample x_i.
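The construction of S_n and S_p described above can be sketched as follows; the random unit vectors merely stand in for real face features and class centers:

```python
import numpy as np

def negative_pair_similarities(x_i, class_centers, y_i):
    """S_n = {S(x_i, w_j) | j != y_i}: sample vs. every non-true class center."""
    return np.array([float(np.dot(x_i, w))
                     for j, w in enumerate(class_centers) if j != y_i])

def positive_pair_similarities(x_i, same_class_features):
    """S_p = {S(x_i, x_j) | y_i = y_j}: sample vs. other samples of its class."""
    return np.array([float(np.dot(x_i, x)) for x in same_class_features])

rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
C, d = 5, 8                                              # 5 classes, 8-dim toy features
centers = [unit(rng.normal(size=d)) for _ in range(C)]   # class centers w_j
x_i = unit(rng.normal(size=d))                           # unit-normalized sample feature
S_n = negative_pair_similarities(x_i, centers, y_i=2)
S_p = positive_pair_similarities(x_i, [unit(rng.normal(size=d)) for _ in range(3)])
print(len(S_n), len(S_p))  # 4 3
```

Each sample contributes C − 1 negative-pair similarities (one per non-true class center) and one positive-pair similarity per same-class sample.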
Step three, after obtaining the similarities of the positive and negative sample pairs in step two, a similarity threshold is first specified. For each sample, the negative sample pairs formed with samples of other classes are counted, and the proportion of negative pairs whose similarity exceeds the threshold is taken as the FPR of the sample instance; at the same time, the positive sample pairs formed with samples of the same class are counted, and the proportion of positive pairs whose similarity exceeds the threshold is taken as the TPR of the sample instance. In this way, the false recognition rate FPR and recall rate TPR of each sample can be calculated, denoted by F_i and T_i respectively:

F_i = |{s ∈ S_n | s > τ}| / |S_n|

T_i = |{s ∈ S_p | s > τ}| / |S_p|
the selection process of the similarity threshold τ is shown in fig. 2, specifically, after the unitized face features are extracted through the first step, all the sample features are simultaneously similar to the classifier class center to calculate the negative sample pairs. Then they are ordered in order from big to small, by declaring a global FPR, the index number of the ordered negative sample pair similarity sequence corresponding to the FPR can be calculated, and the similarity value at the index number is the required threshold, that is, the threshold for calculating the single sample FPR and TPR is determined by the negative sample pair similarities of all samples.
Step four, the consistency of each sample's FPR and TPR is constrained so that the model learns instance-level fair face representations. The sample FPR and TPR are closely related to the conventional Softmax loss function of face recognition, whose basic form is:

L = −log( e^{s·cosθ_{y_i}} / ( e^{s·cosθ_{y_i}} + Σ_{j≠y_i} e^{s·(cosθ_j + m)} ) )

In the above formula, cosθ_{y_i} represents the cosine similarity between the sample and its ground-truth (GT) class center in the classifier, and cosθ_j represents the similarity between the sample and the other class centers. The scale s is used to enlarge the loss so that training is more efficient, and m is a manually specified value greater than 0 that constrains the margin between the sample's similarity to its true class and its similarity to the other classes to be no smaller than m, improving the discriminability of the face features.
However, this commonly used Softmax loss function does not take model fairness into account. It can be observed that the inter-class similarity constraint term m in the denominator above is essentially a constraint on negative-pair similarity. The present invention modifies the Softmax loss function by associating this constraint with the sample FPR, thereby constraining the consistency of the sample FPR. Specifically, an adaptive m_i is calculated for each sample: the larger the FPR of a sample, the larger the corresponding inter-class similarity penalty m_i. The formalized definition of m_i is as follows:
In the above formula, G(s) is an increasing function of the similarity s (s ≥ τ > 0); in this embodiment, G(s) adopts the following form:

G(s) = s^2
can observe the molecules in the original loss function formulaThe similarity between the sample and the center of the self class is shown, namely the similarity in the class, and the similarity in the sample class determines the TPR (total power point) of the sample. The present invention computes an additional adaptive constraint term m 'for each sample' i And associate it with the TPR of the sample, constraining the consistency of the sample TPR by further modifying the Softmax penalty function, in particular the lower the TPR of the sample, the corresponding similarity-like penalty m' i The larger the m' i The formalized definition formula of (2) is as follows:
In the above formula, H(s) is a decreasing function of the similarity; in this embodiment, H(s) takes the following form:

H(s) = (ε − s)^2

In the above formula, ε (ε > s) is the average similarity of the positive sample pairs whose similarity is above the threshold.
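The two penalty functions of this embodiment, G(s) = s² and H(s) = (ε − s)² with ε the mean above-threshold positive-pair similarity, can be sketched as:

```python
import numpy as np

def G(s):
    """Increasing penalty on negative-pair similarity: G(s) = s**2 (s >= tau > 0)."""
    return s ** 2

def make_H(pos_sims, tau):
    """Build H(s) = (eps - s)**2, with eps the mean positive-pair similarity
    above the threshold, so H decreases as s approaches eps from below."""
    pos_sims = np.asarray(pos_sims)
    eps = float(np.mean(pos_sims[pos_sims > tau]))
    H = lambda s: (eps - s) ** 2
    return H, eps

pos = np.array([0.9, 0.8, 0.7, 0.2])
H, eps = make_H(pos, tau=0.5)
print(round(eps, 3))     # 0.8
print(round(H(0.6), 3))  # 0.04
```

On the relevant range (τ ≤ s < ε), G grows and H shrinks as similarity rises, matching the monotonicity the text requires.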
Finally, by unifying the two penalty terms into the Softmax loss function, the consistency of the sample FPR and TPR can be constrained while training the face recognition model, so that the model learns instance-level fair face features. The final loss function is as follows:
In the above formula, cosθ_{y_i} represents the cosine similarity between the sample and its true class center in the classifier, cosθ_j represents the similarity between the sample and the other class centers, s_1 is the scale used to increase the Softmax loss and make training more efficient (set to 64 in this embodiment), and s_2 is the scale used to control the two penalty terms (set to 0.5 in this embodiment).
In a specific application process, before carrying out fairness experiments on face recognition, the face recognition process should first be briefly explained. In actual deployment, face recognition is usually an open-set test: the identities of the test faces do not appear in the training set. During testing, the test pictures are organized into sample pairs, the face recognition model is used to extract the face features of the pictures, and the similarity of the two face features is computed. If the similarity is greater than a certain threshold, the two faces are considered the same person; otherwise they are considered different people. By computing the FPR over negative sample pairs and the TPR over positive sample pairs, the test accuracy can be calculated, as well as the standard deviation of the accuracy between different populations. For different populations G_k, the FPR, TPR, recognition accuracy Acc, and their standard deviation σ can be defined as follows:
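The per-group definitions are elided in this text; a hedged sketch, assuming equally many positive and negative pairs per group (as in RFW/NFW) so that Acc_k = (TPR_k + 1 − FPR_k) / 2, with σ the standard deviation of Acc over groups:

```python
import numpy as np

def group_metrics(group_pos_sims, group_neg_sims, tau):
    """Per-group TPR, FPR and accuracy, plus the cross-group std of accuracy.

    Assumes equally many positive and negative pairs per group, so that
    Acc_k = (TPR_k + 1 - FPR_k) / 2 (an assumption; the patent's exact
    definitions are not reproduced in the text).
    """
    accs = {}
    for k in group_pos_sims:
        tpr = float(np.mean(np.asarray(group_pos_sims[k]) > tau))
        fpr = float(np.mean(np.asarray(group_neg_sims[k]) > tau))
        accs[k] = (tpr + 1.0 - fpr) / 2.0
    sigma = float(np.std(list(accs.values())))  # population std over groups
    return accs, sigma

pos = {"A": [0.9, 0.8, 0.3], "B": [0.9, 0.8, 0.7]}   # positive-pair similarities
neg = {"A": [0.1, 0.2, 0.6], "B": [0.1, 0.0, 0.2]}   # negative-pair similarities
accs, sigma = group_metrics(pos, neg, tau=0.5)
print(round(accs["A"], 3), round(accs["B"], 3), round(sigma, 3))  # 0.667 1.0 0.167
```

A smaller σ indicates more consistent accuracy across groups, which is the fairness quantity reported in Tables 1 and 2.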
the face recognition fairness experiment database is introduced as follows:
the RFW (Racial Faces in the Wild) database is a classical race fairness face recognition test library containing a total of 4 races, african, asian, caucasian, and Indian. Wherein each race contains 3000 positive sample pairs and 3000 negative sample pairs. NFW (National Faces in the World) contains 24 countries, argentina (Argentina), australia (Australia), brazil (Brazil), canada (Canada), china (China), czech (Czech), denmark (Denmark), france (France), germany (Germany), indian (India), ireland (Ireland), italy (Italy), japan (Japan), korea (Korea), mexico (Mexico), netherlands (Netherlands), philippines (Philippines), poland (Polland), russa (Russia), spain (Spain), sweden (Sweden), turkey (Turkey), UK (UK), USA (USA). Each country contains 3000 positive sample pairs and 3000 negative sample pairs. The method provided by the invention can also be applied to other databases, such as fairness face recognition tests of gender (IJB-C), age (AgeDB) and the like.
Table 1: RFW data set various kinds of group identification precision and standard deviation thereof
As shown in Table 1, the fairness face recognition method based on instance consistency proposed by the present invention is superior to all comparison methods in average accuracy (a 1.30% increase over ArcFace, 2.42% over CosFace, and 0.11% over CIFP). Furthermore, the present invention is superior to these methods in terms of standard deviation. Moreover, compared with the other methods, the proposed IC-FFR achieves the best face recognition accuracy on most races, especially Asians (a 1.75% increase over ArcFace, 2.80% over CosFace, 0.87% over RL-RBN-arc, 1.72% over RL-RBN-cos, and 0.55% over CIFP). The RFW racial-fairness test results show that the fairness face recognition method based on instance consistency can effectively reduce the recognition bias between different races.
Table 2: NFW data set identification precision and standard deviation thereof for each country
As can be seen from Table 2, the fairness face recognition method based on instance consistency improves the average recognition accuracy and reduces the standard deviation of accuracy among different countries. As shown in Table 2, the scheme of the invention reduces the standard deviation of recognition accuracy across countries (0.52% lower than ArcFace, 0.17% lower than CosFace, 0.14% lower than CIFP) while further improving the average accuracy (1.01% higher than ArcFace, 0.17% higher than CosFace, and 0.02% higher than CIFP). Furthermore, it can be observed that Asian countries (e.g., China and Japan) show lower accuracy than Caucasian countries (e.g., the United States and the UK); the same phenomenon can be observed on the RFW dataset. This suggests that the racial bias in the RFW dataset may, to some extent, be caused by bias across countries. The NFW national-fairness test results demonstrate the effectiveness of the method of the invention in improving the fairness of face recognition models across different populations.
Therefore, compared with classical face recognition methods, the proposed instance-level fairness face recognition method based on instance consistency achieves higher recognition accuracy and a lower standard deviation, and generalizes well.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical solution of the present invention may be modified or equivalently substituted without departing from its spirit and scope.

Claims (6)

1. A fairness face recognition method based on instance consistency, characterized by comprising the following steps:
Step 1: extracting features from the face image to obtain the feature vector of each sample;
Step 2: computing the similarities of positive and negative sample pairs from the face feature vectors obtained in Step 1;
Step 3: computing the false recognition rate of each sample instance from the negative-pair similarities of Step 2, and the recall rate of each sample instance from the positive-pair similarities of Step 2;
Step 4: modifying the Softmax loss function used to train the face recognition model according to the false recognition rate and recall rate of each sample from Step 3, and computing the consistency loss of each sample instance;
in Step 4, the Softmax loss function is modified by constraining the consistency of the samples' false recognition rates, so that the loss value grows when a sample's false recognition rate rises, and by constraining the consistency of the samples' recall rates, so that the loss value grows when a sample's recall rate falls; the modified loss is defined over the following quantities: cos θ_{y_i}, the cosine similarity between the sample and its true class center in the classifier; cos θ_j, the similarity between the sample and the other class centers; s, the scale factor of the Softmax loss; λ, the scale controlling the two penalty terms; α, the adaptive constraint computed from each sample's positive pairs; and β, the constraint term for each negative pair.
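As a rough sketch only, the Step-4 loss modification described above might be implemented as follows. The exact functional form of the patent's formula is not reproduced here, so the way the penalty terms enter the logits (an additive margin on the true-class logit for the recall constraint, and an additive term on the negative logits for the false-recognition-rate constraint), together with the function name and default values, are assumptions:

```python
import numpy as np

def instance_consistency_loss(cos_theta, label, s=64.0, lam=0.5, alpha=0.0, beta=None):
    # cos_theta: cosine similarities between one sample and all C class centers
    # label:     index of the sample's true class
    # s:         scale factor of the Softmax loss
    # lam:       scale controlling the two consistency penalty terms
    # alpha:     adaptive constraint from the sample's positive pairs
    #            (assumed to grow as the sample's recall rate falls)
    # beta:      per-class constraint from the sample's negative pairs
    #            (assumed to grow as the sample's false recognition rate rises)
    C = cos_theta.shape[0]
    beta = np.zeros(C) if beta is None else beta
    logits = np.empty(C)
    # recall-consistency penalty: a lower effective true-class logit -> larger loss
    logits[label] = s * (cos_theta[label] - lam * alpha)
    # FPR-consistency penalty: higher effective negative logits -> larger loss
    neg = np.arange(C) != label
    logits[neg] = s * (cos_theta[neg] + lam * beta[neg])
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()   # softmax over modified logits
    return -np.log(p[label])                    # cross-entropy on modified logits
```

Under this assumed form, raising either α (recall penalty) or any β entry (false-recognition penalty) strictly increases the loss, which matches the monotonic behaviour the claim requires.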
2. The fairness face recognition method based on instance consistency according to claim 1, wherein in Step 1 the face features are extracted as follows:
let x denote an input face image, f the corresponding face feature, and F(·) the feature extraction module of the face recognition model; then
f = F(x),
where the feature extraction module F(·) is a ResNet residual convolutional neural network.
3. The fairness face recognition method based on instance consistency according to claim 1, wherein in Step 2 the similarity is computed as
s_ij = d(x_i, x_j),
where x_i and x_j (i ≠ j) denote two unitized sample features, s_ij denotes the similarity of the two samples, and d(·,·) denotes the similarity measure function; cosine similarity is used as the measure when computing the similarity.
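A minimal NumPy sketch of the similarity measure d(·,·) of claim 3 (the function name is illustrative; for features that are already unitized, the cosine similarity reduces to a plain dot product):

```python
import numpy as np

def cosine_similarity(xi, xj):
    # d(xi, xj): cosine similarity between two feature vectors;
    # for unitized (L2-normalized) features this is just their dot product
    xi = xi / np.linalg.norm(xi)
    xj = xj / np.linalg.norm(xj)
    return float(xi @ xj)
```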
4. The fairness face recognition method based on instance consistency according to claim 3, wherein for the similarity of negative sample pairs, negative pairs are first formed between the sample feature x_i and the classifier class centers, and the negative-pair similarity is computed as
s⁻_ij = d(x_i, w_j), j ≠ y_i, j = 1, …, C,
where y_i denotes the class label of sample x_i, w_j denotes the center vector of the j-th class, C denotes the number of training-set classes, and s⁻_ij denotes the similarity of the negative sample pair.
5. The fairness face recognition method based on instance consistency according to claim 3, wherein the similarity of positive sample pairs, formed by samples of the same class, is computed as
s⁺_ij = d(x_i, x_j), y_i = y_j, i ≠ j,
where s⁺_ij denotes the similarity of the positive sample pair and y_i denotes the class label of sample x_i.
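The pair-similarity computations described in claims 4 and 5 can be sketched together as follows, assuming unitized features and cosine similarity; the function names and array layout (one feature per row) are illustrative assumptions:

```python
import numpy as np

def negative_pair_similarities(x, y, centers):
    # claim-4 sketch: cosine similarities between a sample feature x (label y)
    # and every classifier class center except its own true-class center
    W = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    sims = W @ (x / np.linalg.norm(x))
    return np.delete(sims, y)      # drop the true-class (positive) entry

def positive_pair_similarities(feats, labels, i):
    # claim-5 sketch: cosine similarities between sample i and all other
    # samples sharing its class label
    F = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    same = (labels == labels[i]) & (np.arange(len(labels)) != i)
    return F[same] @ F[i]
```

The negative-pair similarities feed the per-instance false recognition rate of Step 3, and the positive-pair similarities feed the per-instance recall rate.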
6. The fairness face recognition method based on instance consistency according to claim 1, wherein: the training set face pictures used are from an open source face database.
CN202310304443.9A 2023-03-27 2023-03-27 Fairness face recognition method based on instance consistency Active CN116386108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310304443.9A CN116386108B (en) 2023-03-27 2023-03-27 Fairness face recognition method based on instance consistency


Publications (2)

Publication Number Publication Date
CN116386108A CN116386108A (en) 2023-07-04
CN116386108B true CN116386108B (en) 2023-09-19

Family

ID=86960858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310304443.9A Active CN116386108B (en) 2023-03-27 2023-03-27 Fairness face recognition method based on instance consistency

Country Status (1)

Country Link
CN (1) CN116386108B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734145A (en) * 2018-02-27 2018-11-02 北京紫睛科技有限公司 A kind of face identification method based on degree adaptive face characterization model
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN110096965A (en) * 2019-04-09 2019-08-06 华东师范大学 A kind of face identification method based on head pose
CN112668362A (en) * 2019-10-15 2021-04-16 浙江中正智能科技有限公司 Testimony comparison model training method for dynamic optimization class agent
CN113505692A (en) * 2021-07-09 2021-10-15 西北工业大学 Human face recognition method based on partial area optimization of subject working characteristic curve
WO2021243743A1 (en) * 2020-06-04 2021-12-09 青岛理工大学 Deep convolutional neural network-based submerged oil sonar detection image recognition method
CN113850243A (en) * 2021-11-29 2021-12-28 北京的卢深视科技有限公司 Model training method, face recognition method, electronic device and storage medium
WO2022056898A1 (en) * 2020-09-21 2022-03-24 Northwestern Polytechnical University A deep neural network training method and apparatus for speaker verification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11195057B2 (en) * 2014-03-18 2021-12-07 Z Advanced Computing, Inc. System and method for extremely efficient image and pattern recognition and artificial intelligence platform




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant