CN116152884A - Face image recognition method and device, processor and electronic equipment - Google Patents


Info

Publication number
CN116152884A
Authority
CN
China
Prior art keywords
face recognition
target
face
model
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211539043.8A
Other languages
Chinese (zh)
Inventor
林晓锐
张锦元
刘唱
吴蕃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202211539043.8A priority Critical patent/CN116152884A/en
Publication of CN116152884A publication Critical patent/CN116152884A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face image recognition method and device, a processor and electronic equipment, and relates to the field of artificial intelligence. The method comprises the following steps: determining N target adversarial samples, and inputting each target adversarial sample into a preset face recognition model to obtain N face recognition results, wherein the target adversarial samples are generated by processing with a face recognition model set comprising m face recognition models; judging whether each face recognition result is the same as a preset result; in the case that a face recognition result is different from the preset result, determining the target adversarial sample corresponding to that result as a training sample; and updating the preset face recognition model with the training samples to obtain a target face recognition model, and recognizing face images through the target face recognition model. The application thereby solves the problem in the related art that, because adversarial samples have weak transferability, a face recognition model trained with such samples still easily misrecognizes face images.

Description

Face image recognition method and device, processor and electronic equipment
Technical Field
The application relates to the field of artificial intelligence, and in particular to a face image recognition method and device, a processor and electronic equipment.
Background
In the field of face recognition, adversarial samples can cause a face recognition model to misidentify the face to be recognized as another face. To ensure the accuracy of face recognition, adversarial samples are used as a training set for the face recognition model, which can effectively improve the model's robustness. According to how much the attacker knows about the attacked face recognition model, face adversarial-sample attacks can be divided into white-box attacks and black-box attacks. A white-box attack means the attacker knows internal information of the attacked face recognition model, such as its internal structure, model parameters and output results. A black-box attack means the attacker cannot obtain the internal information of the attacked face recognition model and can only obtain its external outputs; since current commercial face recognition systems expose only external outputs, black-box attacks are of greater practical significance. The key to successfully carrying out a black-box attack is to improve its transferability, so that an effective attack can be mounted against many different face recognition models.
In the related art, a more effective way to enhance attack transferability is gradient aggregation, which aggregates the gradients of multiple open-source white-box face recognition models to enrich the diversity of gradient information and thereby improve transferability. Conventional gradient aggregation is hard aggregation, whose effect depends on a large number of gradient sources. However, the number of currently available open-source white-box face recognition models is small and cannot meet the number required by hard aggregation; that is, fewer adversarial samples are available for training the face recognition model, so the trained model still runs the risk of misrecognition.
Aiming at the problem in the related art that, because adversarial samples have weak transferability, a face recognition model trained with such samples easily misrecognizes face images, no effective solution has been proposed at present.
Disclosure of Invention
The main objective of the present application is to provide a face image recognition method and device, a processor and electronic equipment, so as to solve the problem in the related art that, because adversarial samples have weak transferability, a face recognition model trained with such samples easily misrecognizes face images.
In order to achieve the above object, according to one aspect of the present application, there is provided a face image recognition method. The method comprises the following steps: determining N target adversarial samples, and inputting each target adversarial sample into a preset face recognition model to obtain N face recognition results, wherein the target adversarial samples are generated by processing with a face recognition model set comprising m face recognition models, N is greater than or equal to 1, N is smaller than m, and m is greater than 1; judging whether each face recognition result is the same as a preset result; in the case that a face recognition result is different from the preset result, determining the target adversarial sample corresponding to that face recognition result as a training sample; and updating the preset face recognition model with the training samples to obtain a target face recognition model, and recognizing face images through the target face recognition model.
Optionally, determining the N target adversarial samples includes: acquiring a face image and the face recognition model set; extracting feature points from the face image, determining a target area according to the feature points, and generating an initial adversarial sample according to the face image and the target area, wherein the target area is a feature area used by a preset feature extraction model to identify the face; performing N iterations over the face recognition model set to obtain N aggregation gradients, wherein one aggregation gradient is obtained per iteration; and determining the N target adversarial samples from each aggregation gradient and the initial adversarial sample.
Optionally, extracting the feature points from the face image includes: identifying feature points of the face image through the preset feature extraction model, and adjusting the face image to a preset size to obtain a processed face image; and extracting the feature points from the processed face image through the preset feature extraction model.
Optionally, determining the target area according to the feature points, and generating the initial adversarial sample according to the face image and the target area, includes: determining the target area according to a target-area generation template and the feature points; and carrying out a tensor product calculation over the target area on the attacker image and the attacked image in the face image to obtain the initial adversarial sample, wherein the attacker image is the image that launches the attack when the initial adversarial sample is generated, and the attacked image is the image that receives the attack when the initial adversarial sample is generated.
Optionally, performing N iterations over the face recognition model set obtains N aggregation gradients, wherein each iteration randomly selects one of the m face recognition models as a query model and uses the m-1 models other than the query model as a support set, and the aggregation gradient of each iteration is calculated from the total query-model gradient and the total support-set gradient.
Optionally, the total query-model gradient and the total support-set gradient are determined by: sequentially taking each of the m-1 face recognition models in the support set as a support model; sequentially determining the support gradient of the adversarial sample under each support model to obtain m-1 support gradients; calculating a query gradient from each support gradient and the query model to obtain m-1 query gradients; and determining the sum of the m-1 support gradients as the total support-set gradient and the sum of the m-1 query gradients as the total query-model gradient.
Optionally, determining the support gradient comprises: inputting the adversarial sample of each iteration and the attacked image into the support model to obtain a first result; inputting the first result into a cosine loss function to obtain a second result; and calculating the gradient of the second result to obtain the support gradient.
Optionally, determining the query gradient includes: clipping the adversarial sample of each iteration into a target numerical range through a clipping function; inputting the clipped adversarial sample of each iteration and the attacked image into the query model to obtain a third result; inputting the third result into the cosine loss function to obtain a fourth result; and calculating the gradient of the fourth result to obtain the query gradient.
Optionally, determining the target adversarial sample comprises: determining a perturbation step size, and calculating a first tensor product of the aggregation gradient of each iteration and the target area; calculating the target difference between the adversarial sample of each iteration and the first tensor product; and clipping the target difference into the target numerical range through the clipping function to obtain the target adversarial sample.
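The update step described above can be sketched as a masked gradient step followed by clipping. This is a minimal illustration, not the patent's implementation: the 2 × 2 "images", the step size, and the exact update form x' = clip(x - step · (g ⊗ M)) are assumptions consistent with the description.

```python
def clip(x, lo=0.0, hi=1.0):
    """Clipping function: confine every pixel to the target numerical range."""
    return [[min(max(v, lo), hi) for v in row] for row in x]

def update_adversarial_sample(x_adv, agg_grad, mask, step):
    """One update: subtract step * (aggregation gradient masked to the target
    area) from the current adversarial sample, then clip into range."""
    rows, cols = len(x_adv), len(x_adv[0])
    diff = [[x_adv[i][j] - step * agg_grad[i][j] * mask[i][j]
             for j in range(cols)] for i in range(rows)]
    return clip(diff)

# Toy 2x2 "image": the perturbation is only applied where mask == 1.
x = [[0.5, 0.5], [0.5, 0.5]]
g = [[10.0, 1.0], [1.0, 10.0]]
m = [[1, 0], [0, 1]]
x_next = update_adversarial_sample(x, g, m, step=0.1)
```

Note how the clipping function keeps the masked positions inside the valid pixel range even when the gradient step would push them below it.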
In order to achieve the above object, according to another aspect of the present application, there is provided a face image recognition device. The device comprises: a first determining unit for determining N target adversarial samples and inputting each target adversarial sample into a preset face recognition model to obtain N face recognition results, wherein the target adversarial samples are generated by processing with a face recognition model set comprising m face recognition models, N is greater than or equal to 1, N is smaller than m, and m is greater than 1; a judging unit for judging whether each face recognition result is the same as a preset result; a second determining unit for determining, in the case that a face recognition result is different from the preset result, the target adversarial sample corresponding to that face recognition result as a training sample; and an updating unit for updating the preset face recognition model with the training samples to obtain a target face recognition model, and recognizing face images through the target face recognition model.
Through the application, the following steps are adopted: determining N target adversarial samples and inputting each target adversarial sample into a preset face recognition model to obtain N face recognition results, wherein the target adversarial samples are generated by processing with a face recognition model set comprising m face recognition models, N is greater than or equal to 1, N is smaller than m, and m is greater than 1; judging whether each face recognition result is the same as a preset result; in the case that a face recognition result is different from the preset result, determining the target adversarial sample corresponding to that face recognition result as a training sample; and updating the preset face recognition model with the training samples to obtain a target face recognition model, and recognizing face images through the target face recognition model. This solves the problem in the related art that, because adversarial samples have weak transferability, a face recognition model trained with such samples easily misrecognizes face images. By processing with the face recognition model set, a plurality of target adversarial samples with strong transferability are obtained; the preset face recognition model is trained with these target adversarial samples to obtain the target face recognition model, and face images are recognized through the target face recognition model, thereby improving the defense of the face recognition model against adversarial samples and avoiding misrecognition of face images.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
fig. 1 is a flowchart of a method for recognizing a face image according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of determining a target adversarial sample provided in accordance with an embodiment of the present application;
fig. 3 is a schematic diagram of a face image recognition device according to an embodiment of the present application;
fig. 4 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In order to make the present application solution better understood by those skilled in the art, the following description will be made in detail and with reference to the accompanying drawings in the embodiments of the present application, it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the present application described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for presentation, analyzed data, etc.) related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
For convenience of description, some terms related to the embodiments of the present application are explained below:
Face adversarial sample: an adversarial sample is an image that, after an adversarial perturbation noise is added to a normal image, causes an image classifier to misclassify it. A face adversarial sample is the special case in which the image classifier is a face recognition model;
Face landmark feature points: given a face image, the position coordinates locating the key parts of the face.
The present invention will be described below with reference to preferred implementation steps. Fig. 1 is a flowchart of a face image recognition method according to an embodiment of the present application; as shown in fig. 1, the method includes the following steps:
step S101, N target countermeasure samples are determined, each target countermeasure sample is respectively input into a preset face recognition model to obtain N face recognition results, wherein the target countermeasure samples are processed by a face recognition model set, the face recognition model set comprises m face recognition models, N is greater than or equal to 1, N is smaller than m, and m is greater than 1.
Specifically, N and m are positive integers, the target countermeasure sample is a face countermeasure sample used for training a target face recognition model, the preset face recognition model is an initial model before the target face model is trained, and the target countermeasure sample is input into the preset face recognition model to train the preset face recognition model, so that the accuracy of the preset face recognition model in recognizing face images is improved. And inputting each target training sample into a preset training model to obtain a corresponding face recognition result.
Step S102, judging whether each face recognition result is the same as a preset result.
Specifically, if the face recognition result corresponding to a target adversarial sample is the same as the preset result, the target adversarial sample does not interfere with the preset face recognition model; that is, the preset face recognition model is already robust to that target adversarial sample.
Step S103, in the case that the face recognition result is different from the preset result, determining the target adversarial sample corresponding to the face recognition result as a training sample.
Specifically, if the face recognition result is different from the preset result, the target adversarial sample can interfere with the preset face recognition model, so the preset face recognition model still runs the risk of misrecognition. The target adversarial sample is therefore added to the training set, and through training the preset face recognition model gradually improves its resistance to the target adversarial sample, thereby improving the accuracy of recognizing face images.
Step S104, updating a preset face recognition model through a training sample to obtain a target face recognition model, and recognizing a face image through the target face recognition model.
Specifically, all the target adversarial samples determined as training samples are used as the training set to train the preset face recognition model, yielding a target face recognition model that is strongly resistant to each target adversarial sample and is not misled by them; face images are then recognized through the target face recognition model.
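The filter-and-retrain flow of steps S101 to S104 can be sketched as follows. This is a minimal illustration with a mock scoring model: `predict`, `retrain`, the threshold, and the sample values are hypothetical stand-ins, not the patent's actual models.

```python
def predict(model_threshold, sample):
    """Mock face recognizer: True if the sample is recognized as the expected
    identity (its score reaches the model's acceptance threshold)."""
    return sample >= model_threshold

def filter_training_samples(model_threshold, adversarial_samples, preset_result=True):
    """Steps S102/S103: keep only the adversarial samples whose recognition
    result differs from the preset result -- these still fool the model."""
    return [s for s in adversarial_samples
            if predict(model_threshold, s) != preset_result]

def retrain(model_threshold, training_samples):
    """Step S104 (sketched): "retraining" here just hardens the threshold
    until no sample in the training set fools the model any more."""
    for s in training_samples:
        model_threshold = min(model_threshold, s)
    return model_threshold

samples = [0.9, 0.4, 0.7, 0.2]                 # N target adversarial samples
train = filter_training_samples(0.5, samples)  # samples scoring below 0.5 fool the model
updated = retrain(0.5, train)
```

After retraining, every one of the N samples produces the preset result, mirroring the goal of step S104.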
According to the face image recognition method provided by the embodiment of the application, N target adversarial samples are determined and each is input into a preset face recognition model to obtain N face recognition results, wherein the target adversarial samples are generated by processing with a face recognition model set comprising m face recognition models, N is greater than or equal to 1, N is smaller than m, and m is greater than 1; whether each face recognition result is the same as a preset result is judged; in the case that a face recognition result is different from the preset result, the target adversarial sample corresponding to that result is determined as a training sample; and the preset face recognition model is updated with the training samples to obtain a target face recognition model, through which face images are recognized. This solves the problem in the related art that, because adversarial samples have weak transferability, a face recognition model trained with such samples easily misrecognizes face images. By processing with the face recognition model set, a plurality of target adversarial samples with strong transferability are obtained; training the preset face recognition model with them yields the target face recognition model, which is then used to recognize face images, thereby improving the model's defense against adversarial samples and avoiding misrecognition of face images.
Optionally, in the face image recognition method provided in the embodiment of the present application, the target adversarial samples are determined as follows. Fig. 2 is a flowchart of a method for determining a target adversarial sample according to an embodiment of the present application; as shown in fig. 2, the method includes the following steps:
step S201, acquiring a face image and a face recognition model set;
specifically, the more target countermeasure samples, the higher the robustness of the trained target face recognition model, and the higher the accuracy of recognizing the face image. The face image includes an aggressor image, i.e. a source of challenge samples for interfering with the victim image recognition process, and a victim image, the face recognition model set being a model set for generating the challenge samples from the face image.
Step S202, extracting feature points from the face image, determining a target area according to the feature points, and generating an initial adversarial sample according to the face image and the target area, wherein the target area is a feature area used by a preset feature extraction model to identify the face;
Specifically, the feature points are points containing the position information of facial features, and the target region is a region selected from the feature points as the region in which the adversarial sample is generated. The preset feature extraction model may be the landmark feature extraction model in the dlib library (an open-source machine learning toolkit), through which the feature points of the face image are identified.
Step S203, performing N iterations over the face recognition model set to obtain N aggregation gradients, wherein one aggregation gradient is obtained per iteration;
Specifically, each iteration randomly selects one model from the face recognition model set as the query model and uses the remaining models as the support set; each iteration reselects the query model, so the query model differs from generation to generation. Within each generation, the gradient of the query model and the gradients of the models in the support set are aggregated to obtain an aggregation gradient, where a gradient refers to the feature information each face recognition model uses to identify the face.
Step S204, determining N target adversarial samples according to each aggregation gradient and the initial adversarial sample.
Specifically, the initial adversarial sample consists of the features in the target area selected from the attacker image, and a target adversarial sample is the adversarial sample obtained by processing the initial adversarial sample with an aggregation gradient, that is, an adversarial sample that can cause a face recognition model to misrecognize.
According to the above method for determining the target adversarial samples, feature points are extracted from the face image, the target area is selected according to the feature points, the features in the target area serve as the initial adversarial sample, repeated iterative optimization is performed over the face recognition model set to calculate a plurality of aggregation gradients, and the initial adversarial sample is processed with each aggregation gradient to obtain the target adversarial samples.
The feature points are obtained through a preset feature extraction model. Optionally, in the face image recognition method provided by the embodiment of the application, extracting the feature points from the face image includes: identifying the feature points of the face image through the preset feature extraction model, and adjusting the face image to a preset size to obtain a processed face image; and extracting the feature points from the processed face image through the preset feature extraction model.
Specifically, the feature points of the face image are identified through the preset feature extraction model, face alignment is performed using an Active Shape Model, the face area is cropped, and the face image is resized to 512 × 512. Feature points are then extracted from the adjusted face image, the target area is determined from the extracted feature points, and the initial adversarial sample is generated.
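Because the face area is cropped and resized to 512 × 512 before the second round of feature extraction, landmark coordinates detected on the original image must be mapped into the resized frame. A minimal sketch of that coordinate mapping follows; the dlib landmark detector itself is omitted, and the crop box and point values are hypothetical:

```python
def rescale_landmarks(points, crop_box, out_size=512):
    """Map (x, y) landmark coordinates from the original image into the
    cropped-and-resized face image. crop_box = (left, top, width, height)."""
    left, top, width, height = crop_box
    sx = out_size / width   # horizontal scale factor of the resize
    sy = out_size / height  # vertical scale factor of the resize
    return [((x - left) * sx, (y - top) * sy) for (x, y) in points]

# Hypothetical: a 256x256 face crop whose top-left corner is at (100, 50).
lms = rescale_landmarks([(100, 50), (228, 178), (356, 306)], (100, 50, 256, 256))
```

The same mapping applies to any landmark set, so the target area determined from the rescaled points lines up with the 512 × 512 aligned face.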
After the feature points are successfully extracted, the initial adversarial sample is determined according to the feature points. Optionally, in the face image recognition method provided by the embodiment of the present application, determining the target area according to the feature points, and generating the initial adversarial sample according to the face image and the target area, includes: determining the target area according to the target-area generation template and the feature points; and carrying out a tensor product calculation over the target area on the attacker image and the attacked image in the face image to obtain the initial adversarial sample, wherein the attacker image is the image that launches the attack when the initial adversarial sample is generated, and the attacked image is the image that receives the attack when the initial adversarial sample is generated.
Specifically, after the feature points are acquired, an adversarial-sample area, namely the target area, is generated on the face-aligned image from the feature points produced by the landmark feature extraction model. From the target area, an initial adversarial-sample area template, i.e. the target-area template, is generated as a binary image of size 512 × 512 in which the target area has pixel value 1 and the rest 0. For example, the feature points of the eyes are selected from among the feature points; then, on the target-area template, the eye area is selected using the eye feature points, and the pixel values of the eye area are set to 1.
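A minimal sketch of building such a binary target-area template from eye landmarks, here on a tiny 8 × 8 grid instead of 512 × 512; the bounding-box construction and the landmark values are illustrative assumptions, not the patent's GenM module:

```python
def build_target_mask(size, landmarks):
    """Binary target-area template: 1 inside the bounding box of the selected
    feature points (e.g. the eyes), 0 elsewhere."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 else 0
             for x in range(size)] for y in range(size)]

eye_landmarks = [(2, 3), (5, 3), (3, 4)]   # hypothetical eye feature points
mask = build_target_mask(8, eye_landmarks)
```

The mask marks exactly the region where adversarial perturbation is allowed; everything outside it stays untouched.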
For example, the target area is determined by the following formula:
M = GenM(M_0, lms_s)
wherein M is the target area, GenM is the target area generation module, M_0 is the target area template, and lms_s is the set of feature points. The initial adversarial sample is determined by the following formula:
x_adv^0 = M ⊗ x_s + (1 - M) ⊗ x_t
wherein x_adv^0 is the initial adversarial sample, x_s is the attacker image in the face image, x_t is the attacked image in the face image, and ⊗ denotes the tensor (element-wise) product.
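The tensor-product composition of the attacker and attacked images can be sketched element-wise. Here ⊗ is taken to be the element-wise product, and the 2 × 2 "images" are illustrative values only:

```python
def initial_adversarial_sample(mask, x_s, x_t):
    """x_adv^0 = M (*) x_s + (1 - M) (*) x_t: take the attacker image inside
    the target area and the attacked image everywhere else."""
    n = len(mask)
    return [[mask[i][j] * x_s[i][j] + (1 - mask[i][j]) * x_t[i][j]
             for j in range(n)] for i in range(n)]

m  = [[1, 0], [0, 1]]            # target-area template
xs = [[0.9, 0.9], [0.9, 0.9]]    # attacker image
xt = [[0.1, 0.1], [0.1, 0.1]]    # attacked image
x0 = initial_adversarial_sample(m, xs, xt)
```

Wherever the mask is 1 the result carries the attacker image's pixels; wherever it is 0 the attacked image passes through unchanged.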
After the initial adversarial sample is generated, the aggregation gradients need to be determined. Optionally, in the face image recognition method provided by the embodiment of the application, performing N iterations over the face recognition model set obtains N aggregation gradients; each iteration randomly selects one of the m face recognition models as the query model, uses the m-1 models other than the query model as the support set, and calculates the aggregation gradient of the iteration from the total query-model gradient and the total support-set gradient.
Specifically, the plurality of aggregation gradients are determined by the following formula:
g_n = g_qry^n + g_spt^n
wherein g_n is the aggregation gradient of the n-th iteration, 0 ≤ n ≤ N-1, and N is the number of iterations and the number of target adversarial samples; each iteration randomly selects one of the m face recognition models as the query model and uses the m-1 models other than the query model as the support set, m being the number of models in the face recognition model set; g_qry^n is the total query-model gradient and g_spt^n is the total support-set gradient of the n-th iteration.
It should be noted that, in order to solve the problem that the gradient sources available during gradient aggregation are few and the transferability is therefore limited, a meta-optimization method is proposed with reference to the basic idea of meta-learning. First, the m (m > 1) face recognition models are regarded as one set, i.e., the face recognition model set. In each round of training, the set is randomly divided into two subsets, a support set and a query set, where the query set contains only one face recognition model, namely the query model. Each round of training aggregates the gradient information obtained from the support set and the query set to obtain a new aggregated gradient, which is directly used to generate the target countermeasure sample.
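The meta-optimization loop above can be sketched as follows. This is a minimal sketch under stated assumptions: each model is represented only through `input_grad(model, x_adv, x_t)`, a stand-in for the cosine-loss input gradient described later in the text, and the per-iteration sample update is simplified:

```python
import random
import numpy as np

def meta_optimize(models, x_adv0, x_t, n_iters, alpha, input_grad, mask):
    """Each iteration randomly splits the model set into one query model
    and an (m-1)-model support set, sums the support gradients and the
    query gradients, and adds the two totals into the aggregated g_n."""
    x_adv = x_adv0.copy()
    agg = []
    for _ in range(n_iters):
        query = random.choice(models)
        support = [m for m in models if m is not query]
        g_sup_list = [input_grad(m, x_adv, x_t) for m in support]
        g_sup_total = sum(g_sup_list)
        # one query gradient per support gradient (m-1 in all); the query
        # model is evaluated on the sample stepped by that support gradient
        g_que_total = sum(
            input_grad(query,
                       np.clip(x_adv - alpha * np.sign(g), 0.0, 1.0),
                       x_t)
            for g in g_sup_list)
        g_n = g_sup_total + g_que_total
        agg.append(g_n)
        # generate the next target countermeasure sample from g_n,
        # perturbing only inside the target area mask
        x_adv = np.clip(x_adv - alpha * (g_n * mask), 0.0, 1.0)
    return agg, x_adv
```

Running it for N iterations yields the N aggregated gradients and the final sample; the concrete gradient and clipping formulas are given below.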
Optionally, in the face image recognition method provided in the embodiment of the present application, the query model total gradient and the support set total gradient are determined by: sequentially determining, from the m-1 face recognition models in the support set, one face recognition model as the support model; sequentially determining the support gradient of the countermeasure sample under each support model to obtain m-1 support gradients; calculating a query gradient from each support gradient and the query model to obtain m-1 query gradients; and determining the sum of the m-1 support gradients as the support set total gradient and the sum of the m-1 query gradients as the query model total gradient.
Specifically, the support models are the models sequentially selected from the support set during each iteration, and the support gradient corresponding to each support model is determined in turn to obtain m-1 support gradients. In each iteration, a query gradient is calculated from each support gradient and the query model to obtain m-1 query gradients. The sum of the m-1 support gradients is the support set total gradient of the iteration, and the sum of the m-1 query gradients is the query model total gradient of the iteration; the N iterations thus yield N support set total gradients and N query model total gradients.
Optionally, in the method for identifying a face image provided in the embodiment of the present application, determining the support gradient includes: inputting the countermeasure sample and the attacked image of each iteration into a support model to obtain a first result; and inputting the first result into a cosine loss function to obtain a second result, and calculating the gradient of the second result to obtain a support gradient.
Specifically, the support gradient is determined by the following formula:

g_sup = ∇_x L(f_sup(x_adv^n), f_sup(x_t))

wherein g_sup is the support gradient, x_adv^n is the countermeasure sample of each iteration, ∇_x denotes the gradient calculation of the countermeasure sample under the support model, L is the cosine loss function, which calculates the cosine similarity between two vectors, and f_sup is the support model.
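For illustration, the support gradient can be computed as follows with a stand-in linear embedding model f(x) = W @ x (an assumption made so that the gradient can be written analytically; a real support model would be a face recognition network, and the gradient would come from automatic differentiation):

```python
import numpy as np

def cosine_loss_and_grad(x_adv, x_t, W):
    """Support gradient: gradient of the cosine loss between the support
    model's embeddings of the countermeasure sample and the attacked
    image, taken with respect to the countermeasure sample input."""
    a, b = W @ x_adv, W @ x_t            # first result: the two embeddings
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    cos = a @ b / (na * nb)              # cosine similarity of the embeddings
    loss = 1.0 - cos                     # second result: cosine loss
    # d(loss)/d(a) for the cosine loss, then chain rule back to the input
    dL_da = -(b / (na * nb) - cos * a / na**2)
    g_sup = W.T @ dL_da
    return loss, g_sup
```

The analytic gradient can be confirmed against a finite-difference approximation of the same loss.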
Optionally, in the face image recognition method provided in the embodiment of the present application, determining the query gradient includes: intercepting the challenge sample of each iteration into the target numerical range through an interception function; inputting the intercepted challenge sample and the attacked image of each iteration into the query model to obtain a third result; and inputting the third result into the cosine loss function to obtain a fourth result, and calculating the gradient of the fourth result to obtain the query gradient.
Specifically, the query gradient is determined by the following formula:

g_que = ∇_x L(f_que(x_meta), f_que(x_t))

wherein g_que is the query gradient, f_que is the query model, and x_meta is the challenge sample after processing. The processed challenge sample is calculated by the following formulas:

x_mid = Clip_{x_s, ε}(x_adv^n − α · sign(g_sup))

x_meta = Clip_{[0,1]}(x_mid)

wherein x_mid is the first intermediate variable, Clip_{[0,1]} is the interception function that intercepts values to between 0 and 1, Clip_{x_s, ε} is the interception function that intercepts values to between x_s − ε and x_s + ε, ε is the disturbance boundary, α is the disturbance adding step length, and sign is the sign function.
The sign function is a function that extracts the sign of its input, and the Clip function replaces values less than or equal to the lower limit with the lower limit and values greater than or equal to the upper limit with the upper limit; the upper and lower limit values are user-defined. The query gradient is calculated by the above formula.
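The interception step that produces the processed challenge sample can be sketched as follows; centering the ε-ball on the attack image is an assumption, since the patent's formula images only show that two clipping bounds are used:

```python
import numpy as np

def clip_fn(x, lower, upper):
    """Clip: values at or below the lower limit become the lower limit,
    values at or above the upper limit become the upper limit."""
    return np.minimum(np.maximum(x, lower), upper)

def process_sample(x_adv, x_base, g_sup, alpha, eps):
    """Produce x_meta for the query model: step against sign(g_sup) by
    the disturbance adding step length alpha, clip into the eps-ball
    around x_base (assumed centre), then into the pixel range [0, 1]."""
    x_mid = x_adv - alpha * np.sign(g_sup)            # first intermediate variable
    x_mid = clip_fn(x_mid, x_base - eps, x_base + eps)  # disturbance boundary eps
    return clip_fn(x_mid, 0.0, 1.0)                   # Clip_{[0,1]}
```

Both clipping operations are elementwise, so the result always lies inside both the ε-ball and the valid pixel range.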
Optionally, in the method for identifying a face image provided in the embodiment of the present application, determining the target countermeasure sample includes: determining the disturbance adding step length and calculating, for each iteration, a first tensor product of the aggregation gradient and the target area; calculating the target difference value between the challenge sample of each iteration and the first tensor product; and intercepting the target difference value into the target numerical range through the interception function to obtain the target countermeasure sample.
Specifically, the target challenge sample is determined by the following formulas:

x_tmp^n = x_adv^n − α · (g_n ⊗ M)

x_adv^(n+1) = Clip_{[0,1]}(x_tmp^n)

wherein x_adv^(n+1) is the target challenge sample of each iteration and x_tmp^n is the second intermediate variable.
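The per-iteration update described above can be sketched as follows, a minimal sketch in which the tensor product is an elementwise product and the interception function is NumPy's `np.clip`:

```python
import numpy as np

def update_target_sample(x_adv, g_n, mask, alpha):
    """One target countermeasure sample update: form the first tensor
    product g_n * M, subtract alpha times it from the current sample to
    get the target difference value, then intercept into [0, 1]."""
    diff = x_adv - alpha * (g_n * mask)  # perturbation confined to the target area
    return np.clip(diff, 0.0, 1.0)       # second intermediate variable, clipped
```

Because the aggregated gradient is multiplied by the binary target area M, pixels outside the target area are left unchanged.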
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that illustrated herein.
The embodiment of the application also provides a device for recognizing the face image, and it should be noted that the device for recognizing the face image in the embodiment of the application can be used for executing the method for recognizing the face image provided in the embodiment of the application. The following describes a face image recognition device provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of a face image recognition device according to an embodiment of the present application. As shown in fig. 3, the apparatus includes:
a first determining unit 10, configured to determine N target countermeasure samples, and input each target countermeasure sample into a preset face recognition model to obtain N face recognition results, where the target countermeasure samples are obtained by processing a face recognition model set, and the face recognition model set includes m face recognition models, N is greater than or equal to 1 and N is less than m, and m is greater than 1;
A judging unit 20, configured to judge whether each face recognition result is the same as a preset result;
a second determining unit 30, configured to determine, as a training sample, a target challenge sample corresponding to the face recognition result when the face recognition result is different from the preset result;
the updating unit 40 is configured to update the preset face recognition model through the training sample, obtain the target face recognition model, and identify the face image through the target face recognition model.
According to the face image recognition device provided by the embodiment of the present application, the first determining unit 10 determines N target countermeasure samples and inputs each target countermeasure sample into the preset face recognition model to obtain N face recognition results, where the target countermeasure samples are obtained by processing the face recognition model set, the face recognition model set includes m face recognition models, N is greater than or equal to 1, N is smaller than m, and m is greater than 1; the judging unit 20 judges whether each face recognition result is identical to the preset result; the second determining unit 30 determines the target countermeasure sample corresponding to a face recognition result as a training sample when that face recognition result differs from the preset result; and the updating unit 40 updates the preset face recognition model with the training samples to obtain the target face recognition model and recognizes face images through the target face recognition model. This solves the problem in the related art that, because countermeasure samples have weak transferability, face recognition models trained with them remain prone to misrecognizing face images. Multiple target countermeasure samples with strong transferability are obtained through processing by the face recognition model set, the preset face recognition model is trained with these target countermeasure samples to obtain the target face recognition model, and face images are recognized through the target face recognition model, thereby improving the defense of the face recognition model against countermeasure samples and avoiding errors in recognizing face images.
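The selection logic performed by the judging unit and the second determining unit can be sketched as follows (the recognizer is a stand-in callable; names are illustrative):

```python
def select_training_samples(target_samples, recognize, preset_result):
    """Keep the target countermeasure samples whose face recognition
    result differs from the preset result; these become the training
    samples used to update the preset face recognition model."""
    training = []
    for sample in target_samples:
        if recognize(sample) != preset_result:  # misrecognized sample
            training.append(sample)
    return training
```

Samples that the preset model still recognizes correctly carry no new information for defense, so only the misrecognized ones are retained for retraining.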
Optionally, in the face image recognition apparatus provided in the embodiment of the present application, the first determining unit 10 includes: the acquisition module is used for acquiring the face image and the face recognition model set; the extraction module is used for extracting characteristic points from the face image, determining a target area according to the characteristic points, and generating an initial countermeasure sample according to the face image and the target area, wherein the target area is a characteristic area for identifying the face through a preset characteristic extraction model; the processing module is used for carrying out N times of iterative processing on the face recognition model set to obtain N aggregation gradients, wherein one aggregation gradient is obtained through each time of iterative processing; a determining module for determining N target challenge samples from each aggregation gradient and the initial challenge samples.
Optionally, in the facial image recognition device provided in the embodiment of the present application, the extraction module includes: the recognition sub-module is used for recognizing feature points of the face image through a preset feature extraction model, and adjusting the face image to a preset size to obtain a processed face image; and the extraction sub-module is used for extracting feature points from the processed face image through a preset feature extraction model.
Optionally, in the facial image recognition device provided in the embodiment of the present application, the extraction module includes: the first determining submodule is used for determining a target area according to the target area generating template and the characteristic points; and the second determination submodule is used for carrying out tensor product calculation on the attack image and the target area of the attacked image in the face image to obtain an initial challenge sample, wherein the attack image is an image used for resisting the attack when the initial challenge sample is generated, and the attacked image is an image used for receiving the challenge when the initial challenge sample is generated.
Optionally, in the facial image recognition device provided in the embodiment of the present application, the processing module includes: a third determining submodule, configured to perform N iterations of processing on the face recognition model set to obtain N aggregation gradients, wherein each iteration randomly selects one model from the m face recognition models as the query model, the m-1 models other than the query model are used as the support set, and the aggregation gradient of each iteration is calculated through the query model total gradient and the support set total gradient.
Optionally, in the facial image recognition device provided in the embodiment of the present application, the device further includes: a third determining unit, configured to sequentially determine, from the m-1 face recognition models in the support set, one face recognition model as a support model; a fourth determining unit, configured to sequentially determine the support gradient of the challenge sample under each support model to obtain m-1 support gradients; a computing unit, configured to compute a query gradient according to each support gradient and the query model to obtain m-1 query gradients; and a fifth determining unit, configured to determine the sum of the m-1 support gradients as the support set total gradient and the sum of the m-1 query gradients as the query model total gradient.
Optionally, in the face image recognition apparatus provided in the embodiment of the present application, the fourth determining unit includes: the first input module is used for inputting the challenge sample and the attacked image of each iteration into the support model to obtain a first result; and the second input module is used for inputting the first result into the cosine loss function to obtain a second result, and calculating the gradient of the second result to obtain the support gradient.
Optionally, in the face image recognition device provided in the embodiment of the present application, the fifth determining unit includes: the intercepting module is used for intercepting the countermeasure sample of each iteration into a target numerical range through an intercepting function; the third input module is used for inputting the intercepted countermeasure sample and the intercepted attacked image of each iteration into the query model to obtain a third result; and the fourth input module is used for inputting the third result into the cosine loss function to obtain a fourth result, and calculating the gradient of the fourth result to obtain the query gradient.
Optionally, in the facial image recognition device provided in the embodiment of the present application, the determining module includes: a fourth determining submodule, configured to determine the disturbance adding step length and calculate, for each iteration, a first tensor product of the aggregation gradient and the target area; a calculation submodule, configured to calculate the target difference value between the challenge sample of each iteration and the first tensor product; and an interception submodule, configured to intercept the target difference value into the target numerical range through the interception function to obtain the target countermeasure sample.
The face image recognition apparatus includes a processor and a memory, the first determination unit 10, the judgment unit 20, the second determination unit 30, the update unit 40, and the like are stored as program units in the memory, and the processor executes the program units stored in the memory to realize the corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory. One or more kernels may be provided, and the defense of the face recognition model against countermeasure samples is improved by adjusting the kernel parameters, thereby avoiding errors in recognizing face images.
The memory may include volatile memory, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM), among other forms in computer-readable media; the memory includes at least one memory chip.
The embodiment of the invention provides a computer readable storage medium, on which a program is stored, which when executed by a processor, implements a method for recognizing a face image.
The embodiment of the invention provides a processor, which is used for running a program, wherein the program runs to execute a face image recognition method.
As shown in fig. 4, an embodiment of the present invention provides an electronic device, where a device 401 includes a processor, a memory, and a program stored on the memory and executable on the processor, and when the processor executes the program, the following steps are implemented: a face image recognition method. The device herein may be a server, PC, PAD, cell phone, etc.
The present application also provides a computer program product adapted to perform, when executed on a data processing device, a program initialized with the method steps of: a face image recognition method.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (12)

1. A method for recognizing a face image, comprising:
N target countermeasure samples are determined, and each target countermeasure sample is respectively input into a preset face recognition model to obtain N face recognition results, wherein the target countermeasure samples are obtained by processing a face recognition model set, the face recognition model set comprises m face recognition models, N is greater than or equal to 1, N is smaller than m, and m is greater than 1;
Judging whether each face recognition result is the same as a preset result;
under the condition that the face recognition result is different from the preset result, determining a target countermeasure sample corresponding to the face recognition result as a training sample;
and updating the preset face recognition model through the training sample to obtain a target face recognition model, and recognizing a face image through the target face recognition model.
2. The method of claim 1, wherein determining N target challenge samples comprises:
acquiring the face image and the face recognition model set;
extracting feature points from the face image, determining a target area according to the feature points, and generating an initial countermeasure sample according to the face image and the target area, wherein the target area is a feature area for identifying a face through a preset feature extraction model;
performing N times of iterative processing on the face recognition model set to obtain N aggregation gradients, wherein one aggregation gradient is obtained through each time of iterative processing;
determining the N target challenge samples from each of the aggregation gradients and the initial challenge samples.
3. The method of claim 2, wherein extracting feature points for the face image comprises:
Identifying the feature points of the face image through the preset feature extraction model, and adjusting the face image to a preset size to obtain a processed face image;
and extracting the feature points from the processed face image through the preset feature extraction model.
4. The method of claim 2, wherein determining a target area from the feature points and generating an initial challenge sample from the face image and the target area comprises:
determining a target area according to a target area generating template and the characteristic points;
and carrying out tensor product calculation on the target areas of the attack image and the attacked image in the face image to obtain the initial challenge sample, wherein the attack image is an image used for resisting the attack when the initial challenge sample is generated, and the attacked image is an image used for receiving the challenge when the initial challenge sample is generated.
5. The method according to claim 2, wherein N iterations of processing are performed on the face recognition model set to obtain N aggregation gradients, each iteration randomly selects one model from the m face recognition models as a query model, the m-1 models other than the query model are used as a support set, and the aggregation gradient of each iteration is calculated through the query model total gradient and the support set total gradient.
6. The method of claim 5, wherein the query model total gradient and the support set total gradient are determined by:
sequentially determining one face recognition model in the support set from m-1 face recognition models in the support set as a support model;
sequentially determining the support gradient of the challenge sample under each support model to obtain m-1 support gradients;
calculating query gradients according to each support gradient and the query model to obtain m-1 query gradients;
determining a sum of the m-1 support gradients as the support set total gradient and determining a sum of the m-1 query gradients as the query model total gradient.
7. The method of claim 6, wherein determining the support gradient comprises:
inputting the challenge sample and the attacked image of each iteration into the support model to obtain a first result;
and inputting the first result into a cosine loss function to obtain a second result, and calculating the gradient of the second result to obtain the support gradient.
8. The method of claim 6, wherein determining the query gradient comprises:
intercepting the countermeasure sample of each iteration into a target numerical range through an intercepting function;
Inputting the intercepted countermeasure sample and the intercepted attacked image of each iteration into the query model to obtain a third result;
and inputting the third result into a cosine loss function to obtain a fourth result, and calculating the gradient of the fourth result to obtain the query gradient.
9. The method of claim 8, wherein determining the target challenge sample comprises:
determining a disturbance adding step length, and calculating a first tensor product of the aggregation gradient and the target area of each iteration;
calculating a target difference value of the challenge sample and the first tensor product for each iteration;
and intercepting the target difference value into the target numerical range through an intercepting function to obtain the target countermeasure sample.
10. A face image recognition apparatus, comprising:
the first determining unit is used for determining N target countermeasure samples, and respectively inputting each target countermeasure sample into a preset face recognition model to obtain N face recognition results, wherein the target countermeasure samples are processed by a face recognition model set, the face recognition model set comprises m face recognition models, N is greater than or equal to 1, N is smaller than m, and m is greater than 1;
The judging unit is used for judging whether each face recognition result is identical to a preset result;
the second determining unit is used for determining a target countermeasure sample corresponding to the face recognition result as a training sample under the condition that the face recognition result is different from the preset result;
and the updating unit is used for updating the preset face recognition model through the training sample to obtain a target face recognition model, and recognizing a face image through the target face recognition model.
11. A processor, characterized in that the processor is configured to run a program, wherein the program runs to perform the method of recognizing a face image according to any one of claims 1 to 9.
12. An electronic device comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of face image recognition of any of claims 1-9.
CN202211539043.8A 2022-12-02 2022-12-02 Face image recognition method and device, processor and electronic equipment Pending CN116152884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211539043.8A CN116152884A (en) 2022-12-02 2022-12-02 Face image recognition method and device, processor and electronic equipment

Publications (1)

Publication Number Publication Date
CN116152884A true CN116152884A (en) 2023-05-23

Family

ID=86349724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211539043.8A Pending CN116152884A (en) 2022-12-02 2022-12-02 Face image recognition method and device, processor and electronic equipment

Country Status (1)

Country Link
CN (1) CN116152884A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116665282A (en) * 2023-07-26 2023-08-29 苏州浪潮智能科技有限公司 Face recognition model training method, face recognition method and device


Similar Documents

Publication Publication Date Title
CN112633311A (en) Efficient black-box antagonistic attacks using input data structures
KR20190126046A (en) Risk identification methods, apparatus and electronic devices related to transactions to be processed
CN113449783B (en) Countermeasure sample generation method, system, computer device and storage medium
US20200125836A1 (en) Training Method for Descreening System, Descreening Method, Device, Apparatus and Medium
CN115115905B (en) High-mobility image countermeasure sample generation method based on generation model
CN112926654A (en) Pre-labeling model training and certificate pre-labeling method, device, equipment and medium
CN113298152B (en) Model training method, device, terminal equipment and computer readable storage medium
CN112818995B (en) Image classification method, device, electronic equipment and storage medium
CN112966685B (en) Attack network training method and device for scene text recognition and related equipment
CN113344016A (en) Deep migration learning method and device, electronic equipment and storage medium
CN114677565A (en) Training method of feature extraction network and image processing method and device
CN116152884A (en) Face image recognition method and device, processor and electronic equipment
CN113919497A (en) Attack and defense method based on feature manipulation for continuous learning ability system
CN111046957B (en) Model embezzlement detection method, model training method and device
WO2021042544A1 (en) Facial verification method and apparatus based on mesh removal model, and computer device and storage medium
Yang et al. Random subspace supervised descent method for regression problems in computer vision
CN113435264A (en) Face recognition attack resisting method and device based on black box substitution model searching
CN111382837A (en) Countermeasure sample generation method based on depth product quantization
CN116152542A (en) Training method, device, equipment and storage medium for image classification model
CN113610904B (en) 3D local point cloud countermeasure sample generation method, system, computer and medium
CN111241571A (en) Data sharing method, model and storage medium
CN115358283A (en) Electromagnetic signal twin deep learning identification method, device, equipment and storage medium
CN114742170B (en) Countermeasure sample generation method, model training method, image recognition method and device
CN117078789B (en) Image processing method, device, equipment and medium
CN117454187B (en) Integrated model training method based on frequency domain limiting target attack

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination