CN114898137A - Face recognition-oriented black-box adversarial example attack method, device, equipment and medium
- Publication number: CN114898137A (application CN202210253999.5A)
- Authority: CN (China)
- Prior art keywords: face, image, feature, attack, sample
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V10/764, G06V10/765: image or video recognition using machine-learning classification, including rule-based partitioning of the feature space
- G06N3/02, G06N3/045, G06N3/08: neural networks, combinations of networks, and learning methods
- G06V10/774: generating sets of training patterns (e.g. bagging or boosting)
- G06V10/82: image or video recognition using neural networks
- G06V40/168, G06V40/169: human faces; feature extraction and holistic face representations
- G06V40/172: human faces; classification or identification
Abstract
The invention discloses a face recognition-oriented black-box adversarial example attack method comprising: S1, preparing and preprocessing a data set; S2, converting the parameters to be optimized into the network parameters of a generator by means of a DIP (Deep Image Prior) network, and introducing multiple feature extraction networks and directional features into the training process so as to improve attack precision and transferability; S3, training with an initially randomly selected latent code z, obtaining the final attack picture through the picture generation model, mixing the attack pictures with clean pictures, and inputting the images into a face recognition model in the form of picture pairs to obtain face verification and recognition results. In the black-box attack scenario, the DIP network expands the search space of the attack optimization algorithm, addressing the low attack precision and low transferability of existing black-box attack methods.
Description
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a face recognition-oriented black-box adversarial example attack method, device, equipment and medium.
Background
With the popularization of information technology, face recognition technology built mainly on deep neural networks has been widely deployed, and the vulnerability and insecurity exposed by deep learning have become a hidden danger. Misleading the judgment of a face recognition system by mounting an adversarial attack on an input face sample has become an important means of revealing the vulnerability of deep learning networks, and it also provides important prior knowledge for building defenses. Face features hold particular advantages over other biometric features: they are more intuitive and convenient for identity recognition, they can be collected without any physical contact, and they can be collected covertly in public-safety scenarios without the subject's cooperation. Because of these favorable characteristics, face recognition technology is widely applied; used in traffic supervision, for example, it can comprehensively, promptly and effectively curb traffic violations such as drunk driving, overloading and running red lights. Compared with traditional manual inspection, automatic face recognition improves efficiency, safety, cost and quality, so the security problems raised by adversarial example attacks on face recognition are crucial, and how to improve the efficiency of such attacks has attracted growing attention from academia and industry.
Currently, mainstream adversarial attack methods fall roughly into two types: white-box attacks and black-box attacks. A white-box attack assumes full knowledge of the architecture and parameters of the model to be attacked. The attack methods derived from this setting are mainly based on gradient optimization, iteratively optimizing via back-propagation to generate adversarial examples, and they achieve excellent results. Their drawback is that architectures and parameters are rarely available in real production environments, which limits their applicability. By contrast, black-box attacks apply to a wide range of scenarios, but the internal architecture and parameters of the model are opaque. When query budgets are tight, the mainstream approach is the substitute-model method: an approximate white-box model is chosen to stand in for the unknown model, and the transferability of the adversarial examples is continually strengthened so that the attack remains effective across different face recognition systems.
The feature vector, as the unique identity representation of a face image, is widely used in current deep-learning black-box attack schemes, and how to extract it accurately has become a central topic of black-box attack research. Most feature vectors, however, are obtained in a search space spanned by only a few parameters, and this limited search space restricts the attack capability of the adversarial examples.
Disclosure of Invention
To improve the attack precision and transferability of existing adversarial examples against face recognition models, the invention provides a face recognition-oriented black-box adversarial example attack method, device, equipment and medium, built on an attack scheme based on DIP (Deep Image Prior). The key idea is to exploit the strong image modeling capacity of the DIP network, which greatly expands the search space of the optimization algorithm and satisfies the conditions for finding the optimal solution. Meanwhile, a directional feature-distance constraint is introduced into the loss function to address the feature-direction selection problem of untargeted attacks, improving both the attack capability and the transferability of the adversarial examples; the introduction of the directional feature makes the feature selection of the parameter optimization process tend toward the identity features of a directional (guide) image, reducing unnecessary search to a certain extent.
According to a first aspect of the embodiments of the present application, a face recognition-oriented black-box adversarial example attack method comprises (a sketch of the overall flow follows this list):
acquiring face image data with a label;
preprocessing the acquired face image data to obtain preprocessed face image data;
inputting the preprocessed face image data into a DIP (Deep Image Prior) network, and iterating the DIP network parameters in combination with a set constraint function until a preset iteration stop condition is reached, to obtain an optimized DIP network;
obtaining an attack picture from the optimized DIP network in combination with a set latent code z;
and mixing the attack pictures with clean pictures and inputting them, in the form of picture pairs, into multiple face recognition models to obtain face verification and recognition results.
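As a high-level sketch of this claimed flow (the helpers load_labeled_faces, preprocess, optimize_dip and verify_pairs are illustrative stand-ins for the steps detailed below, passed in as callables rather than a fixed API):

```python
# High-level pipeline sketch of the first-aspect method; every helper is a
# hypothetical stand-in for a step fleshed out in the detailed description.
def black_box_attack_pipeline(load_labeled_faces, preprocess, optimize_dip, verify_pairs):
    faces = [preprocess(img) for img in load_labeled_faces()]  # size conversion + alignment
    attack_pictures = []
    for img in faces:
        generator, z = optimize_dip(img)         # iterate DIP parameters under the constraint function
        attack_pictures.append(generator(z))     # attack picture from the latent code z
    return verify_pairs(attack_pictures, faces)  # picture pairs into the recognition models
```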
In some embodiments, preprocessing the acquired face image data includes converting the face image data to a size that meets the input and output requirements of the training model, and/or extracting the face with a face alignment algorithm.
Further, the integrated expression after feature extraction by the multiple face recognition models is:

$$embeddings_{input} = \sum_{i=1}^{k} \lambda_i \, F_i\big(P(Image_{input})\big) \qquad (1)$$

where $Image_{input}$ is the input original image; $P(\cdot)$ is the preprocessing function applied to the picture; $F_i(\cdot)$ is the $i$-th feature extractor; $\lambda_i$ is the weight factor corresponding to the $i$-th feature vector; and $embeddings_{input}$ is the integrated feature expression of the input picture $Image_{input}$ after feature extraction by $k$ face recognition networks.
in some embodiments, the constraint function comprises: characterizing a difference between the confrontation image and the input clean image as a first loss function; the difference of the integrated feature vector of the confrontation image and the integrated feature vector of the input clean image is characterized as a second loss function; and characterizing the difference between the integrated feature vector of the directional face and the integrated vector of the antagonistic image as a third loss function.
Further, the first loss function is specifically:

$$L_{space} = Dist_{space}(Image_{adv}, Image_{input}) \qquad (2)$$

where $Image_{adv}$ is the adversarial image; $Image_{input}$ is the input clean image; $Dist_{space}(\cdot)$ is a distance metric on the visual pixel space; and $L_{space}$ is the distance between the adversarial image and the input clean image in the visual pixel space.
Further, the second loss function is specifically:

$$L_{feature} = Dist_{feature}(embeddings_{adv}, embeddings_{input}) \qquad (3)$$

where $embeddings_{adv}$ is the integrated expression of the adversarial image after feature extraction by the multiple face recognition networks; $embeddings_{input}$ is the integrated expression of the input clean image after the same feature extraction; $Dist_{feature}(\cdot)$ is a distance metric on the face feature space; and $L_{feature}$ is the distance between the adversarial image and the input clean image in the face feature space.
Further, the third loss function is specifically:

$$L_{target} = Dist_{feature}(embeddings_{adv}, embeddings_{target}) \qquad (4)$$

where $embeddings_{adv}$ is the integrated expression of the adversarial image after feature extraction by the multiple face recognition networks; $embeddings_{target}$ is the integrated expression of the feature guide image after the same feature extraction; $Dist_{feature}(\cdot)$ is a distance metric on the face feature space; and $L_{target}$ is the distance between the adversarial image and the feature guide image in the face feature space.
Further, mixing the attack pictures with clean pictures and inputting them into the face recognition model in the form of picture pairs to obtain face verification and recognition results comprises the following steps:

During face verification, the clean unprocessed face image data and the correspondingly processed adversarial face image data are first paired one by one; the paired face image data are then jointly input, as a test set, into a uniformly selected face recognition model, finally obtaining the classification probability of each pair of face image data. Two indicators, the accuracy ACC and the verification rate Val, are adopted to evaluate the verification performance of the face-recognition black-box adversarial example attack method. The specific process is as follows: each piece of clean unprocessed face image data is paired with adversarially processed face image data to form positive and negative test samples, where a positive test sample pairs clean unprocessed face image data with adversarially processed face image data carrying the same label, and a negative test sample pairs them with different labels. The accuracy ACC can therefore be expressed as:

$$ACC = \frac{TP + TN}{TP + TN + FP + FN} \qquad (6)$$
in the formula (6), the TP indicates that the feature classification network judges a positive test sample as a positive test sample according to the classification probability, the TN indicates that the feature classification network judges a negative test sample as a negative test sample according to the classification probability, the FP indicates that the feature classification network judges the negative test sample as the positive test sample according to the classification probability, and the FN indicates that the feature classification network judges the positive test sample as the negative test sample according to the classification probability;
the validation rate Val indicator may be expressed as:
wherein,judging the positive test sample to be the proportion of the positive test sample in all real positive test samples by the characteristic classification network according to the classification probability;and the feature classification network judges the negative test samples as the proportion of the positive test samples in all the real negative test samples according to the classification probability.
The verification attack steps are as follows:
Step 1: randomly select 3000 positive pairs and 3000 negative pairs, for a total of 6000 sample pairs. A positive pair consists of two different face images with the same identity; a negative pair consists of face images with different identities;
Step 2: replace one face image in each positive pair with the corresponding adversarial example;
Step 3: select a face recognition model, and input the 6000 sample pairs into the model for verification;
Step 4: obtain ACC and Val.
Since the invention is an attack method, the post-attack test result ACC should be as small as possible; the minimum value ACC can reach under the steps above is 50%, and the closer ACC is to that value, the better the attack effect. The Val indicator is analyzed with FAR held at the same value; Val can reach a minimum of 0%, so the closer Val is to 0%, the better the attack effect.
Further, the preset iteration stop condition includes that training reaches a preset number of iterations, or that the total loss function reaches a preset threshold, where the total loss function is:

$$L_{total} = \alpha_{space} L_{space} - \alpha_{feature} L_{feature} + \alpha_{target} L_{target} \qquad (7)$$

where $\alpha_{space}$, $\alpha_{feature}$ and $\alpha_{target}$ are the weight factors of the respective losses, and $L_{total}$ is the total loss function.
According to a second aspect of the embodiments of the present application, a face recognition-oriented black-box adversarial example attack apparatus comprises:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the face recognition-oriented black-box adversarial example attack method according to the first aspect.
According to a third aspect of the embodiments herein, a computer device comprises the face recognition-oriented black-box adversarial example attack apparatus of the second aspect.
According to a fourth aspect of the embodiments herein, a computer-readable storage medium stores a program executable by a processor, and the program, when executed by the processor, implements the face recognition-oriented black-box adversarial example attack method according to any one of the first aspects.
The invention has the beneficial effects that:
1. By exploiting the strong modeling capability of DIP, the optimization search, originally confined to the pixel-domain space of the image, is transferred to the parameter space of the DIP self-supervised network. In essence, the image is expressed through a latent code z and the DIP network, which expands the search space considerably and benefits the solution of the optimal solution.
2. The introduction of the third loss function helps select the search path and gives the whole search process a more specific direction, making it approach the directional feature space and saving unnecessary search attempts.
Drawings
Fig. 1 is a schematic view of an implementation environment of the face recognition-oriented black-box adversarial example attack method provided in an embodiment of the present application;
Fig. 2 is an algorithm flowchart of the face recognition-oriented black-box adversarial example attack method provided in an embodiment of the present application;
Fig. 3 is a schematic structural diagram of the face recognition-oriented black-box adversarial example attack apparatus provided in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a computer device provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of the FaceNet positive and negative sample-pair input format provided in an embodiment of the present application;
Fig. 6 shows the effect of face recognition-oriented black-box adversarial examples provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them; terms used to distinguish embodiments are intended only to illustrate different stages in algorithm training and carry no limiting meaning. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The face recognition-oriented black-box adversarial example attack method of the invention can be applied to the application environment shown in FIG. 1. The application environment includes a terminal 120 and a server 140 connected through a network, where the numbers of terminals 120 and servers 140 are not limited. The terminal 120 includes a mobile phone, a tablet and the like, and has the functions of image acquisition and of mounting or verifying the face recognition-oriented black-box adversarial example attack, so the method provided by the invention is not limited by the application environment.
As shown in FIG. 2, according to the first aspect of the embodiments of the present application, the face recognition-oriented black-box adversarial example attack method includes the following steps:
S1, preparing and preprocessing a data set. A labeled face data set is selected; no labels are involved in the picture-generation process, while labeled data are used in the face verification and recognition process. The preprocessing comprises: 1. size conversion: to fit the input and output requirements of the training model, the size of the input picture must be strictly controlled so that it meets the model input; common sizes are 124 × 124 and 160 × 160. 2. Face alignment: to extract the face more accurately, a suitable face alignment algorithm, such as MTCNN, is adopted to extract the face and filter out the non-face background.
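As a minimal sketch of this preprocessing step (assuming the MTCNN implementation from the facenet-pytorch package; the helper name preprocess and the 160 × 160 target size are illustrative choices, not fixed by the invention):

```python
# Preprocessing sketch for step S1: face detection, alignment and resizing.
# Assumes the facenet-pytorch package; `preprocess` is an illustrative helper.
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(image_size=160, margin=0, post_process=False)  # detect + align + crop

def preprocess(path: str):
    img = Image.open(path).convert("RGB")
    face = mtcnn(img)                  # aligned face tensor (3, 160, 160), or None
    if face is None:
        raise ValueError(f"no face detected in {path}")
    return face.unsqueeze(0) / 255.0   # batch of one, pixels scaled to [0, 1]
```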
S2, the parameters to be optimized are converted into generator network parameters by means of the DIP network, and multiple feature extraction networks and directional features are introduced into the training process, so that attack precision and transferability are improved.
The final effect of image training is judged from two aspects. First, the adversarial image should show no obvious difference from the original clean input image in the visual pixel domain; this difference is characterized as the first loss function, whose value must keep decreasing with the iterations: the smaller the value, the better the visual effect and the stronger the realism of the image. Second, the adversarial image must show an obvious difference from the original clean image in its feature-domain expression, so as to distinguish the face identity of the adversarial image from that of the input clean image; the difference between their integrated feature vectors is therefore characterized as the second loss function, whose value must keep increasing with the iterations: the larger the value, the more drastic the identity change and the stronger the attack effect.
The multiple extraction network comprises several face recognition networks, each of whose feature extractors is used to extract face features, serving as representations of the same face under different models. For transferability, the feature vectors extracted by the different face recognition models must be expressed in an integrated form that finally participates in the loss function. For attack precision, other face images whose identity differs greatly from the face in the original image are introduced during the attack; their feature representations likewise require the integrated expression over the different feature extraction networks, so as to guide the face identity features of the original image to change in the expected direction. The difference between the integrated feature vector of the directional face and the integrated vector of the adversarial image is therefore characterized as the third loss function, whose value must keep decreasing with the iterations: the smaller the value, the closer the identity features of the adversarial image are to the directional features, improving the attack effect.
And S3, training with the initially randomly selected latent code z: the final attack picture is obtained through the picture generation model, the attack pictures are mixed with clean pictures, and the images are input into the face recognition model in the form of picture pairs to obtain the face verification and recognition results.
Further, the integrated expression after feature extraction by the multiple face recognition models is:

$$embeddings_{input} = \sum_{i=1}^{k} \lambda_i \, F_i\big(P(Image_{input})\big) \qquad (1)$$

where $Image_{input}$ is the input original image; $P(\cdot)$ is the preprocessing function applied to the picture, which generally realizes sizing and face alignment and can adopt the MTCNN algorithm; $F_i(\cdot)$ is the $i$-th feature extractor, where the selectable feature extractors generally include FaceNet, ArcFace, CosFace and the like, and suitable extractors can be combined according to the task scenario; $\lambda_i$ is the weight factor corresponding to the $i$-th feature vector; and $embeddings_{input}$ is the integrated feature expression of the input picture $Image_{input}$ after feature extraction by $k$ face recognition networks.
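As a sketch of this integrated expression under formula (1) (reading the integration as a weighted sum; the L2 normalization of each embedding, the common embedding dimension, and the `extractors` list of pretrained FaceNet/ArcFace/CosFace-style wrappers are all assumptions):

```python
# Integrated feature expression of formula (1): a weighted combination of the
# embeddings produced by k face recognition networks for one preprocessed image.
import torch
import torch.nn.functional as F

def integrated_embedding(image: torch.Tensor, extractors, weights):
    parts = []
    for f_i, lam_i in zip(extractors, weights):
        emb = f_i(image)                          # i-th feature vector
        parts.append(lam_i * F.normalize(emb, dim=-1))
    # Assumes all extractors share one embedding dimension so the stack is valid.
    return torch.stack(parts).sum(dim=0)          # integrated expression
```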
Further, the first loss function is specifically:

$$L_{space} = Dist_{space}(Image_{adv}, Image_{input}) \qquad (2)$$

where $Image_{adv}$ is the adversarial image; $Image_{input}$ is the input clean image; $Dist_{space}(\cdot)$ is a distance metric on the visual pixel space, where a smaller value indicates a smaller difference between the adversarial sample and the input clean image pixels and thus a higher-quality adversarial image; and $L_{space}$ is the distance between the adversarial image and the input clean image in the visual pixel space.
Further, the second loss function is specifically:

$$L_{feature} = Dist_{feature}(embeddings_{adv}, embeddings_{input}) \qquad (3)$$

where $embeddings_{adv}$ is the integrated expression of the adversarial image after feature extraction by the multiple face recognition networks; $embeddings_{input}$ is the integrated expression of the input clean image after the same feature extraction; $Dist_{feature}(\cdot)$ is a distance metric on the face feature space, where a larger value indicates a larger feature distance, and hence identity difference, between the adversarial sample and the input clean picture, which is one of the principal embodiments of attack precision; and $L_{feature}$ is the distance between the adversarial image and the input clean image in the face feature space.
Further, the third loss function is specifically:

$$L_{target} = Dist_{feature}(embeddings_{adv}, embeddings_{target}) \qquad (4)$$

where $embeddings_{adv}$ is the integrated expression of the adversarial image after feature extraction by the multiple face recognition networks; $embeddings_{target}$ is the integrated expression of the feature guide image after the same feature extraction; $Dist_{feature}(\cdot)$ is a distance metric on the face feature space, whose value here should be as small as possible: to guide the adversarial sample to develop in the direction the attacker expects, image guidance is adopted so that the identity features of the adversarial sample approach the directional image features, further improving the attack precision; and $L_{target}$ is the distance between the adversarial image and the feature guide image in the face feature space.
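A sketch of the three loss terms and their combination in formula (7), following the choices stated in the embodiment below (DSSIM for the pixel-space distance, cosine similarity for the feature-space distances, weights 0.1/0.05/0.03); using 1 minus cosine similarity as the distance and the pytorch-msssim package for SSIM are assumptions:

```python
# Loss terms of formulas (2)-(4) and the total loss of formula (7).
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed SSIM implementation

def loss_space(img_adv, img_input):
    # DSSIM = (1 - SSIM) / 2: small when the adversarial image resembles the input.
    return (1.0 - ssim(img_adv, img_input, data_range=1.0)) / 2.0

def cosine_distance(a, b):
    return 1.0 - F.cosine_similarity(a, b, dim=-1).mean()

def total_loss(img_adv, img_input, emb_adv, emb_input, emb_target,
               a_space=0.1, a_feature=0.05, a_target=0.03):
    l_space = loss_space(img_adv, img_input)          # formula (2), kept small
    l_feature = cosine_distance(emb_adv, emb_input)   # formula (3), driven large
    l_target = cosine_distance(emb_adv, emb_target)   # formula (4), kept small
    # Formula (7): the minus sign rewards a growing identity gap from the input.
    return a_space * l_space - a_feature * l_feature + a_target * l_target
```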
Further, in step S3, mixing the attack pictures with clean pictures and inputting them into the face recognition model in the form of picture pairs to obtain the face verification and recognition results includes the following steps:

During face verification, the clean unprocessed face image data and the correspondingly processed adversarial face image data are first paired one by one, then jointly input, as a test set, into a uniformly selected face recognition model, finally obtaining the classification probability of each pair of face image data. Two indicators, the accuracy ACC and the verification rate Val, are adopted to evaluate the verification performance of the face-recognition black-box adversarial example attack method. The specific process is as follows: each piece of clean unprocessed face image data is paired with adversarially processed face image data to form positive and negative test samples, where a positive test sample pairs clean unprocessed face image data with adversarially processed face image data carrying the same label, and a negative test sample pairs them with different labels; the accuracy ACC indicator can then be expressed as:

$$ACC = \frac{TP + TN}{TP + TN + FP + FN} \qquad (6)$$
in the formula (6), the TP indicates that the feature classification network judges a positive test sample as a positive test sample according to the classification probability, the TN indicates that the feature classification network judges a negative test sample as a negative test sample according to the classification probability, the FP indicates that the feature classification network judges the negative test sample as the positive test sample according to the classification probability, and the FN indicates that the feature classification network judges the positive test sample as the negative test sample according to the classification probability;
the validation rate Val indicator may be expressed as:
wherein,judging the positive test sample to be the proportion of the positive test sample in all real positive test samples by the characteristic classification network according to the classification probability;and the feature classification network judges the negative test samples as the proportion of the positive test samples in all the real negative test samples according to the classification probability.
The verification attack steps are as follows:
Step 1: randomly select 3000 positive pairs and 3000 negative pairs, for a total of 6000 sample pairs. A positive pair consists of two different face images with the same identity; a negative pair consists of face images with different identities;
Step 2: replace one face image in each positive pair with the corresponding adversarial example;
Step 3: adopt the FaceNet face recognition model, and input the 6000 sample pairs into the model for verification;
Step 4: obtain ACC and Val.
Since the present invention is an attack method, the post-attack test result ACC should be as small as possible; the minimum value ACC can reach under the steps above is 50%, and the closer ACC is to that value, the better the attack effect; in this embodiment, ACC is about 86%. The Val indicator is analyzed with FAR held at the same value; Val can reach a minimum of 0%, and in this embodiment Val is close to 28%.
Further, the iteration stop condition preset in step S3 includes that training reaches a preset number of iterations, or that the total loss function reaches a preset threshold, where the total loss function is:

$$L_{total} = \alpha_{space} L_{space} - \alpha_{feature} L_{feature} + \alpha_{target} L_{target} \qquad (7)$$

where $\alpha_{space}$, $\alpha_{feature}$ and $\alpha_{target}$ are the weight factors of the respective losses, and $L_{total}$ is the total loss function.
As shown in FIG. 3, the face recognition-oriented black-box adversarial example attack apparatus includes an acquisition and preprocessing module 201, an attack picture generation module 202, and a verification and recognition module 203. The acquisition and preprocessing module 201 is configured to prepare and preprocess a data set: a labeled face data set is selected, no labels are involved in the picture-generation process, and labeled data are used in the face verification and recognition process. The preprocessing comprises: 1. size conversion, which strictly controls the size of the input picture to fit the input and output requirements of the training model and ensures that it meets the model input; 2. face alignment, which adopts a suitable face alignment algorithm to extract the face more accurately and filter out the non-face background.
The attack picture generation module 202 converts the parameters to be optimized into generator network parameters by means of the DIP network, and introduces the multiple feature extraction networks and the directional features into the training process so as to improve attack precision and transferability.
The final effect of image training is judged from the same two aspects as in step S2: the first loss function characterizes the pixel-domain difference between the adversarial image and the input clean image and must keep decreasing with the iterations, where a smaller value means a better visual effect and stronger realism; the second loss function characterizes the difference between the integrated feature vectors of the adversarial image and of the input clean image and must keep increasing with the iterations, where a larger value means a more drastic identity change and a stronger attack effect.
The multiple extraction network likewise comprises several face recognition networks whose extracted feature vectors are integrated before participating in the loss function, and other face images whose identity differs greatly from the original are introduced to guide the identity features toward the expected direction; the difference between the integrated feature vector of the directional face and the integrated vector of the adversarial image is characterized as the third loss function, which must keep decreasing with the iterations, where a smaller value means the adversarial identity features are closer to the directional features and the attack effect is better.
The verification and recognition module 203 is configured to obtain, under the face recognition model, the face verification and recognition results of the input image pairs.
According to a third aspect of embodiments of the present application, as shown in fig. 4, a computer apparatus includes:
at least one processor 301;
at least one memory 302 for storing at least one program;
the processor 301 is used to provide computing and control capabilities to support the operation of the entire server. The memory 302 may include non-volatile storage media and internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program may be executed by a processor, and when the at least one program is executed by the at least one processor, the at least one processor may implement the method for resisting sample attack for a face recognition oriented black box according to the first aspect, where the computer device includes a mobile phone, a tablet computer, a personal digital assistant or a wearable device, or a server, and the present embodiment does not specifically limit the computer.
According to a fourth aspect of the embodiments of the present application, a computer-readable storage medium stores a processor-executable program which, when executed by a processor, implements any of the face recognition-oriented black-box adversarial example attack methods described in the first aspect.
To illustrate the face recognition-oriented black-box adversarial example attack method provided by the invention, the input clean image data in this embodiment are all drawn from the LFW data set, and the image data to be attacked comprise 2551 images of 1342 persons. The MTCNN face detection and alignment network is used to detect and align the face regions of the LFW images, and according to the coordinates of five facial feature points (the centers of the two eyes, the nose tip, and the two mouth corners) the resolution of the high-definition face images is normalized to 160 × 160 by affine transformation. The 160 × 160 size matches the input size of the face recognition network in the subsequent verification and recognition module, and the MTCNN detection and alignment aims to better extract the identity feature vector of each face image.
In the present embodiment, in the total loss function $L_{total} = \alpha_{space} L_{space} - \alpha_{feature} L_{feature} + \alpha_{target} L_{target}$, the pixel-space distance in the first loss function $L_{space}$ is measured with the DSSIM structural dissimilarity, and the feature-space distances in the second loss function $L_{feature}$ and the third loss function $L_{target}$ are both described by cosine similarity. The weight factors are initially set to $\alpha_{space} = 0.1$, $\alpha_{feature} = 0.05$, $\alpha_{target} = 0.03$. The DIP self-supervised network architecture follows the article "Deep Image Prior" by Dmitry Ulyanov et al.
This embodiment trains the model in the PyTorch deep learning framework, using an Adam optimizer with an initial learning rate of 0.1 and a weight decay of 1e-5; the set iteration stop condition is reached after 3000 training iterations, at which point the total loss is about -0.005.
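Putting the pieces together, a minimal sketch of the DIP-based attack loop under these settings; the `DIPGenerator` class is a simplified stand-in for the Deep Image Prior network of Ulyanov et al., and `integrated_embedding` / `total_loss` refer to the sketches above, so all of this is an assumption-laden illustration rather than the patented implementation itself:

```python
# DIP attack loop (steps S2-S3): the latent code z is drawn once and kept fixed,
# while the generator parameters are optimized against the total loss.
import torch
import torch.nn as nn

class DIPGenerator(nn.Module):
    """Toy stand-in for the DIP encoder-decoder network of 'Deep Image Prior'."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # pixels in [0, 1]
        )
    def forward(self, z):
        return self.net(z)

def dip_attack(img_input, img_guide, extractors, weights, steps=3000):
    z = torch.randn(1, 32, 160, 160)           # randomly selected latent code z
    generator = DIPGenerator()                  # parameters to be optimized
    opt = torch.optim.Adam(generator.parameters(), lr=0.1, weight_decay=1e-5)

    emb_input = integrated_embedding(img_input, extractors, weights).detach()
    emb_target = integrated_embedding(img_guide, extractors, weights).detach()

    for _ in range(steps):
        img_adv = generator(z)                  # candidate attack picture
        emb_adv = integrated_embedding(img_adv, extractors, weights)
        loss = total_loss(img_adv, img_input, emb_adv, emb_input, emb_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()                # final attack picture
```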
in this embodiment, a backbone network used in the verification and identification process is inclusion ResNet v1, the entire architecture adopts a FaceNet model, and the network model parameters are obtained by training through a VGGFace2 data set.
In this embodiment, 6000 sample pairs are input to FaceNet: 3000 positive and 3000 negative sample pairs are selected by random pairing to generate an LFW_validation pair list.
In this embodiment, the experimental operating system is Ubuntu 18.04, the server GPU is a 1080 Ti, the programming software is PyCharm 2020, and the deep learning framework is PyTorch 1.6.
The LFW data set contains 13233 images of 5749 subjects. The faces in LFW are first detected with MTCNN and aligned to 160 × 160 resolution to form the clean face image data. To simulate the adversarial attack scenario, the attacked images are substituted one by one according to the image file names, and the input clean face image data and the attacked face image data are paired to form positive and negative test samples for face verification. The LFW test set is used for the face verification test, with the accuracy ACC and the verification rate Val as evaluation indicators.
To verify the performance of the face recognition-oriented black-box adversarial example attack, the method of the invention is compared with the method disclosed in the article "Fawkes: Protecting Privacy against Unauthorized Deep Learning Models". The results of attacking the FaceNet face recognition model on the LFW data set are shown in Table 1.
In Table 1, the ACC of the original clean (unattacked) positive and negative sample pairs is about 99.5%, and Val is 97.4% at FAR = 0.00067, indicating that the face recognition rate of FaceNet is very high and the recognition effect is good. After the Fawkes attack, ACC drops to 88.2% with FAR unchanged, and Val drops to 34.9%, showing that the Fawkes attack is effective. Under the DIP method, ACC further drops to 86.8% with FAR unchanged, and Val further drops to 28.3%; the attack effect of the DIP method is thus further improved, confirming that the expansion of the search space is reasonable and effective.
It can be seen that the performance of the face recognition-oriented black-box adversarial example attack method on the FaceNet face recognition model surpasses that of Fawkes on FaceNet; the invention improves the effect of black-box adversarial example attacks on face recognition.
Some adversarial sample images produced by the DIP method are shown in FIG. 6. For Fawkes, the visual perception of the adversarial face image differs only slightly from the original clean image, and the sensory stimulation is not obvious. The adversarial face image generated by the DIP method is visually slightly inferior to the corresponding Fawkes image, but the sensory stimulation still falls within an acceptable range.
Table 1. Effects of the attack methods on FaceNet

| Method | Accuracy | Validation rate |
|---|---|---|
| Unattacked | 0.99517 ± 0.00361 | 0.97467 ± 0.01454 @ FAR = 0.00067 |
| Fawkes | 0.88217 ± 0.01406 | 0.34967 ± 0.05738 @ FAR = 0.00067 |
| DIP (ours) | 0.86883 ± 0.03407 | 0.28367 ± 0.10972 @ FAR = 0.00067 |
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above examples represent only a few embodiments of the present invention; they are described in considerable detail, but are not to be construed as limiting the scope of the invention. Various modifications and alterations will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in its protection scope.
Claims (10)
1. A face recognition-oriented black-box adversarial example attack method, characterized by comprising the following steps:
acquiring face image data with a label;
preprocessing the acquired face image data to obtain preprocessed face image data;
inputting the preprocessed face image data into a DIP network, and iterating DIP network parameters by combining a set constraint function until a preset iteration stop condition is reached to obtain an optimized DIP network;
obtaining an attack picture from the optimized DIP network in combination with a set latent code z;
and mixing the attack pictures with clean pictures and inputting them, in the form of picture pairs, into multiple face recognition models to obtain face verification and recognition results.
2. The face recognition-oriented black-box adversarial example attack method according to claim 1, wherein preprocessing the acquired face image data comprises: converting the face image data to a size meeting the input and output requirements of the training model, and/or extracting the face with a face alignment algorithm.
3. The face recognition-oriented black-box adversarial example attack method according to claim 1, wherein the constraint function comprises: a first loss function characterizing the difference between the adversarial image and the input clean image; a second loss function characterizing the difference between the integrated feature vector of the adversarial image and the integrated feature vector of the input clean image; and a third loss function characterizing the difference between the integrated feature vector of the directional face and the integrated feature vector of the adversarial image.
4. The face recognition-oriented black-box adversarial example attack method according to claim 1, wherein the integrated expression after feature extraction by the multiple face recognition models is:

$$embeddings_{input} = \sum_{i=1}^{k} \lambda_i \, F_i\big(P(Image_{input})\big)$$

where $Image_{input}$ is the input original image; $P(\cdot)$ is the preprocessing function applied to the picture; $F_i(\cdot)$ is the $i$-th feature extractor; $\lambda_i$ is the weight factor corresponding to the $i$-th feature vector; and $embeddings_{input}$ is the integrated feature expression of the input picture $Image_{input}$ after feature extraction by $k$ face recognition networks.
5. The face recognition-oriented black-box adversarial example attack method according to claim 3, wherein the first loss function is specifically:

$$L_{space} = Dist_{space}(Image_{adv}, Image_{input})$$

where $Image_{adv}$ is the adversarial image; $Image_{input}$ is the input clean image; $Dist_{space}(\cdot)$ is a distance metric on the visual pixel space; and $L_{space}$ is the distance between the adversarial image and the input clean image in the visual pixel space;

the second loss function is specifically:

$$L_{feature} = Dist_{feature}(embeddings_{adv}, embeddings_{input})$$

where $embeddings_{adv}$ is the integrated expression of the adversarial image after feature extraction by the multiple face recognition networks; $embeddings_{input}$ is the integrated expression of the input clean image after the same feature extraction; $Dist_{feature}(\cdot)$ is a distance metric on the face feature space; and $L_{feature}$ is the distance between the adversarial image and the input clean image in the face feature space;

the third loss function is specifically:

$$L_{target} = Dist_{feature}(embeddings_{adv}, embeddings_{target})$$

where $embeddings_{target}$ is the integrated expression of the feature guide image after the same feature extraction, and $L_{target}$ is the distance between the adversarial image and the feature guide image in the face feature space.
6. The face recognition-oriented black-box adversarial example attack method according to claim 1, wherein mixing the attack pictures with clean pictures and inputting them into the multiple face recognition models in the form of picture pairs to obtain face verification and recognition results comprises:

during face verification, pairing the clean unprocessed face image data one by one with the correspondingly processed adversarial face image data, jointly inputting the pairs as a test set into a uniformly selected face recognition model, and obtaining the classification probability of each pair of face image data;

evaluating the verification performance of the face-recognition black-box adversarial example attack method with two indicators, the accuracy ACC and the verification rate Val, as follows: pairing each piece of clean unprocessed face image data with adversarially processed face image data to form positive and negative test samples, where a positive test sample pairs clean unprocessed face image data with adversarially processed face image data carrying the same label and a negative test sample pairs them with different labels, the accuracy ACC indicator being expressed as:

$$ACC = \frac{TP + TN}{TP + TN + FP + FN}$$

where TP indicates that the feature classification network judges a positive test sample as positive according to the classification probability; TN indicates that it judges a negative test sample as negative; FP indicates that it judges a negative test sample as positive; and FN indicates that it judges a positive test sample as negative;

the verification rate Val indicator being expressed as:

$$Val = TAR\big|_{FAR = \mathrm{const}}, \qquad TAR = \frac{TP}{TP + FN}, \qquad FAR = \frac{FP}{FP + TN}$$

where $TAR$ is the proportion of positive test samples judged positive by the feature classification network, according to the classification probability, among all true positive test samples, and $FAR$ is the proportion of negative test samples judged positive among all true negative test samples.
7. The face recognition-oriented black-box adversarial example attack method according to claim 1 or 5, wherein the preset iteration stop condition is that training reaches a preset number of iterations, and/or that a total loss function reaches a preset threshold, the total loss function being:

$$L_{total} = \alpha_{space} L_{space} - \alpha_{feature} L_{feature} + \alpha_{target} L_{target}$$

where $L_{space}$ is the distance between the adversarial image and the input clean image in the visual pixel space; $L_{feature}$ is the distance between the adversarial image and the input clean image in the face feature space; $L_{target}$ is the distance between the adversarial image and the feature guide image in the face feature space; $\alpha_{space}$, $\alpha_{feature}$ and $\alpha_{target}$ are the weight factors of the respective losses; and $L_{total}$ is the total loss function.
8. A face recognition-oriented black-box adversarial example attack apparatus, characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the face recognition-oriented black-box adversarial example attack method of any one of claims 1 to 7.
9. A computer device, characterized by comprising the face recognition-oriented black-box adversarial example attack apparatus of claim 8.
10. A computer-readable storage medium in which a processor-executable program is stored, wherein the processor-executable program, when executed by a processor, implements the face recognition-oriented black-box adversarial example attack method according to any one of claims 1 to 7.
Priority Application

- CN202210253999.5A, filed 2022-03-15 (priority date 2022-03-15): Face recognition-oriented black-box adversarial example attack method, device, equipment and medium.

Publication

- CN114898137A, published 2022-08-12; status: Pending.

Cited By

- CN115345280A: Face recognition attack detection system, method, electronic device and storage medium; priority date 2022-08-16, published 2022-11-15.
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination