CN110991299B - Adversarial sample generation method for a face recognition system in the physical domain


Info

Publication number
CN110991299B
CN110991299B (application CN201911179565.XA)
Authority
CN
China
Prior art keywords
sample, face, mask, generator, disturbance
Legal status
Active
Application number
CN201911179565.XA
Other languages
Chinese (zh)
Other versions
CN110991299A (en)
Inventor
胡永健
蔡楚鑫
王宇飞
刘琲贝
葛治中
李皓亮
Current Assignee
Sino Singapore International Joint Research Institute
Original Assignee
Sino Singapore International Joint Research Institute
Priority date
Filing date
Publication date
Application filed by Sino Singapore International Joint Research Institute
Priority to CN201911179565.XA
Publication of CN110991299A
Application granted
Publication of CN110991299B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention discloses an adversarial sample generation method for a face recognition system in the physical domain. A generator produces an eyeglass-shaped adversarial perturbation block that can be reproduced in the physical domain to mislead the face recognition system. The influence of varying illumination and of printer color deviation is taken into account: data enhancement is performed by simulating illumination changes, and several loss functions are combined to raise the attack success rate in the physical domain. Furthermore, different face recognition networks can be plugged into the overall training framework, so digital-domain adversarial samples targeting different face recognition methods can be generated conveniently and rapidly, and the adversarial perturbation can be reproduced physically. The invention effectively attacks a face recognition system in the physical domain, relieves the shortage of adversarial samples during training, and can rapidly generate large numbers of adversarial samples for training a network to improve its reliability; at the same time, the adversarial samples are physically reproducible and robust to illumination changes.

Description

Adversarial sample generation method for a face recognition system in the physical domain
Technical Field
The invention relates to the technical field of computer vision and biometric recognition, in particular to an adversarial sample generation method for a face recognition system in the physical domain.
Background
In recent years face recognition technology has developed vigorously. In particular, with the progress of deep learning, face recognition systems built on deep neural networks achieve good recognition performance given sufficient training data and computing power. However, deep learning is highly vulnerable to adversarial attacks: a fine perturbation of the input, imperceptible to the human eye, can make a deep neural network output any desired classification with high confidence, which exposes the security risks of deep learning systems. Moreover, adversarial attacks can be carried out in the physical domain, i.e. a deep neural network can be deceived by fabricating a physically reproducible adversarial perturbation, which further reveals the network's vulnerability. It is therefore important to study how to generate high-quality adversarial samples quickly and in large quantities so as to defend against such attacks.
In the prior art, some methods find an adversarial sample in the input space by optimization over and traversal of the network's manifold representation; some construct adversarial samples by adding perturbations to the source image in equal steps along the direction of the network's maximum gradient change; some apply the fast gradient method repeatedly with a small step size. All of these methods build each adversarial sample by adding a perturbation to one original sample; they are slow and computationally complex, and cannot meet the demand for generating adversarial samples rapidly at scale. To generate adversarial samples for training at scale, other work produces adversarial perturbations quickly and in large quantities with generative networks such as GANs (Generative Adversarial Networks) or adversarial autoencoders, or uses an adversarial generative network to produce samples that make a face recognition system misidentify a face and attacks in the physical domain. However, when a GAN is used to generate eyeglass-shaped adversarial perturbations, the resulting samples are unstable, and building a database of eyeglass pictures in advance to train the GAN is time-consuming and laborious. In addition, such methods ignore the influence of varying illumination, so the attack success rate in the physical world, where illumination is changeable, suffers greatly; the attacks are hard to transfer to new environments, and the practical value of the generated adversarial samples is much reduced.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention provides an adversarial sample generation method for a face recognition system in the physical domain. The method can rapidly generate large numbers of high-quality adversarial samples against a face recognition system; the generated eyeglass-shaped adversarial perturbation can be physically reproduced and is fairly robust to real illumination changes, so the face recognition system can be attacked successfully in the physical domain. The difficulty of obtaining adversarial samples for network training is thereby greatly reduced, and the physical reproducibility and illumination robustness give the generated samples practical significance.
The invention generates, through a generator, an eyeglass-shaped adversarial perturbation block that can be reproduced in the physical domain and can mislead the face recognition system into recognizing one person as another person or as a specific other person. The influence of varying illumination and printer color deviation is taken into account: data enhancement is performed by simulating illumination changes, and several loss functions are combined to raise the attack success rate in the physical domain. Furthermore, face recognition networks based on either traditional methods or deep learning can be plugged into the overall training framework, so adversarial samples targeting different face recognition methods can be generated conveniently. The invention effectively attacks a face recognition system in the physical domain and can draw further attention to the security of such systems; it relieves the shortage of adversarial samples during training and can rapidly generate large numbers of adversarial samples for training a network to improve its reliability, while the samples remain physically reproducible and robust to illumination changes.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a confrontation sample generation method aiming at a face recognition system in a physical domain, which comprises a data preprocessing step, a model training step and a model application step;
the data preprocessing step comprises:
determining the face region image input resolution of the attacked face recognition system;
acquiring original samples X_O; cropping the face region according to each sample's face bounding box and resizing it to the face region image input resolution with an interpolation algorithm to obtain face input samples X_F, with the model training data samples denoted XT_F and the model application data samples denoted XA_F; and acquiring the position coordinates of the key points of both eyes with a face detection algorithm;
establishing an adversarial-perturbation mask M from the eye key-point coordinates, with the model training data masks denoted MT_F and the model application data masks denoted MA_F.
The model training step comprises:
constructing a generator whose input is a Gaussian white noise sequence and whose output is a rectangular adversarial perturbation matrix;
generating face adversarial samples from the generator output and the adversarial-perturbation mask;
performing data enhancement with an image processing method that simulates physical-domain illumination change, obtaining the face adversarial samples XT_A finally used for training;
inputting the face adversarial samples XT_A into the face recognition network F to be attacked, thereby constructing the overall training network;
constructing the generator's adversarial loss function L_A, the training network's printing score loss function L_P, and the training network's total loss function L;
setting the model optimization algorithm;
training and optimizing the generator parameters until the training network parameters are stable, and saving the generator's model and weights when training is finished.
the model application step comprises:
acquiring the original face resolution from the data preprocessing step;
loading the trained generator model and weights, and constructing the application network from the generator, the adversarial-perturbation mask input, the Gaussian white noise sequence input and the face region image input;
inputting a Gaussian white noise sequence into the trained generator, which outputs a rectangular adversarial perturbation; combining it with the adversarial-perturbation mask and the face region image, as in the model training step, to obtain the masked adversarial perturbation and the face region adversarial sample;
obtaining the digital-domain adversarial sample: resizing the face region adversarial sample to the original resolution and overlaying it on the original sample's face region according to the face bounding box;
obtaining the physical-domain adversarial sample: printing the masked adversarial perturbation.
As a preferred technical solution, the resolution is adjusted to the face region image input resolution by an interpolation algorithm, which may be any one of the Lanczos, nearest-neighbor, linear, or cubic interpolation algorithms.
As a preferred technical scheme, the adversarial-perturbation mask M constructed from the eye key-point coordinates comprises a spectacle-frame mask and a spectacle-bridge mask;
the spectacle-frame mask is constructed as follows: a rectangular inner frame is generated from the eye key-point coordinates, with distances L_1 and L_2 to the left- and right-eye key points; the inner frame is expanded outward by a distance L_3 to obtain a rectangular outer frame, and the region between the outer and inner frames is taken as the spectacle-frame mask;
the horizontal midpoints of the two eyes' spectacle-frame masks are connected by a straight line of width L_4 to obtain the spectacle-bridge mask;
L_1, L_2, L_3 and L_4 are computed as functions of H and W, the height and width of the face region image (the four formulas are rendered as images in the original publication; in the 112 × 112 embodiment below they evaluate to 3-pixel eye margins and 5-pixel frame and bridge widths).
As a preferred technical scheme, the generator adopts the generator structure of a deep convolutional generative adversarial network (DCGAN). A 100-dimensional Gaussian white noise sequence is input and passes through a fully connected layer of N neurons and a batch normalization layer, where N is determined by the face region image input resolution H × W as

N = (H/16) × (W/16) × 128

The N-dimensional feature vector is then reshaped into a feature map of resolution (H/16) × (W/16) with 128 channels.
As a preferred technical solution, the face adversarial samples are generated from the generator output and the adversarial-perturbation mask as follows:
the model training data adversarial-perturbation mask MT_F is applied both to the rectangular adversarial perturbation produced by the generator and to the model training data samples XT_F: the perturbation elements inside the mask are multiplied by 255 and the elements outside the mask are set to zero, giving the masked adversarial perturbation; meanwhile the pixel values of XT_F inside the mask are set to zero and the remaining pixels are retained. The two results are added, rounded, and clipped to the range [0, 255] to obtain the face adversarial samples.
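The compositing recipe above is simple enough to sketch directly in NumPy; function and argument names are illustrative, and the perturbation is assumed to lie in [-1, 1] as the generator section specifies.

```python
import numpy as np

def compose_adversarial_face(face, perturbation, mask):
    """Overlay a generator perturbation onto a face image, per the recipe above.

    face: uint8 HxWx3 image; perturbation: float HxWx3 array in [-1, 1];
    mask: HxW binary eyeglass mask.
    """
    m = mask[..., None].astype(np.float32)           # broadcast over channels
    masked_pert = perturbation * 255.0 * m           # scale in-mask, zero outside
    hollowed = face.astype(np.float32) * (1.0 - m)   # zero face pixels in-mask
    adv = np.rint(hollowed + masked_pert)            # add and round
    return np.clip(adv, 0, 255).astype(np.uint8)     # clip to [0, 255]
```

Note that negative in-mask perturbation values simply clip to 0 after compositing, since the face pixels under the mask have already been zeroed.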
As a preferred technical solution, data enhancement simulating physical-domain illumination change proceeds as follows:
a contrast-and-brightness transform or a gamma transform is applied at random to each generated face adversarial sample to simulate physical-domain illumination change, yielding the adversarial samples XT_A finally used for training.
The contrast-and-brightness transform applies a linear map to every pixel of the face adversarial sample:
v′ = v × a + b
where v and v′ are the original and transformed pixel values, a determines the contrast and b the brightness.
The gamma transform applies a nonlinear map to every pixel:
v′ = v^γ
where v and v′ are the original and transformed pixel values and γ is drawn at random from [0.50, 1.50] with three-decimal precision.
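The two transforms above can be sketched as a single NumPy augmentation step. Only the gamma range [0.50, 1.50] is pinned down by the text; the contrast and brightness ranges below are assumptions for illustration, and pixels are normalized to [0, 1] so the gamma transform behaves sensibly.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_illumination(img):
    """Randomly apply a contrast/brightness transform or a gamma transform.

    img: uint8 image. Contrast/brightness ranges are illustrative assumptions;
    gamma follows the text: [0.50, 1.50] with three-decimal precision.
    """
    x = img.astype(np.float32) / 255.0
    if rng.random() < 0.5:
        a = rng.uniform(0.8, 1.2)                  # contrast (assumed range)
        b = rng.uniform(-0.1, 0.1)                 # brightness (assumed range)
        y = x * a + b
    else:
        gamma = round(rng.uniform(0.50, 1.50), 3)  # three-decimal gamma
        y = x ** gamma
    return np.clip(y * 255.0, 0, 255).astype(np.uint8)
```

In training, this step sits between the mask compositing and the face recognition network, so the generator learns perturbations that survive illumination variation.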
Preferably, the generator's adversarial loss function L_A is constructed as:
L_A = λ_1 L_F + λ_2 L_S + λ_3 L_O
where L_F is the loss function of the face recognition network F under attack, L_S is a probability score loss associated with the last-layer output of F, L_O is the loss of another face recognition method or a combination thereof, and λ_1, λ_2, λ_3 are the respective weights of the three losses.
The training network's printing score loss function L_P is constructed as:
L_P = Σ_{p_G ∈ p} min_{c_P ∈ C_A} |p_G − c_P|
where p_G is the color value of a pixel in the masked adversarial perturbation p and c_P is a color value in the printer's set of printable colors C_A (the formula is rendered as an image in the original publication; the form above is its standard non-printability-score reading).
The total loss function L of the training network is set as:
L = L_A + λL_P
where λ is the weight of the printing score loss function L_P.
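The loss terms above can be sketched in NumPy. This assumes Euclidean distance in RGB space for |p_G − c_P| and takes L_F, L_S, L_O as scalar values already computed by the recognition network; function names and default weights are illustrative.

```python
import numpy as np

def print_score_loss(perturbation, printable_colors):
    """Non-printability score L_P: for each perturbation pixel, the distance
    to the nearest printer-reproducible color, summed over pixels.

    perturbation: HxWx3 float array in [0, 1];
    printable_colors: Kx3 array of printable RGB triples in [0, 1].
    """
    px = perturbation.reshape(-1, 1, 3)          # (H*W, 1, 3)
    cols = printable_colors.reshape(1, -1, 3)    # (1, K, 3)
    d = np.linalg.norm(px - cols, axis=-1)       # pixel-to-color distances
    return d.min(axis=1).sum()                   # nearest printable color, summed

def total_loss(l_f, l_s, l_o, l_p, lam=(1.0, 1.0, 1.0), lam_p=1.0):
    """L = lambda_1*L_F + lambda_2*L_S + lambda_3*L_O + lambda*L_P."""
    l1, l2, l3 = lam
    return l1 * l_f + l2 * l_s + l3 * l_o + lam_p * l_p
```

A perturbation made entirely of printable colors has L_P = 0, so minimizing L pushes the generator toward colors the printer can actually reproduce.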
As a preferred technical solution, the model optimization algorithm adopts any one of the Adam, SGD, AdaGrad or RMSprop algorithms.
As a preferred technical scheme, training and optimizing the generator parameters proceeds as follows:
the face recognition network F's parameters are frozen and the generator's parameters are unfrozen; the model training data samples XT_F are trained in batches. Each batch acquires n face input samples, the n corresponding masks from the model training data adversarial-perturbation masks MT_F, and n Gaussian white noise sequences as generator input. The generator produces n rectangular adversarial perturbations, which pass through the face adversarial sample generation and illumination-simulation data enhancement steps to yield n face adversarial samples for training. These are fed into the face recognition network F, the sample label values are set according to the attack objective, and the generator parameters are finally adjusted with the goal of minimizing the total loss function L.
As a preferred technical solution, the sample label value is set according to the attack objective as follows: for an evasion (dodging) attack, the sample label is set at random to an incorrect label; for an impersonation attack, the label is set to the impersonation target's label.
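The label-setting rule above can be sketched as a small helper. The one-hot encoding and function name are illustrative assumptions; the patent only specifies which identity the label should point to.

```python
import numpy as np

rng = np.random.default_rng(1)

def attack_label(true_id, num_ids, target_id=None):
    """One-hot label used when training the generator.

    Impersonation: the target identity's label. Evasion/dodging: any label
    other than the true identity, chosen at random.
    """
    if target_id is not None:                       # impersonation attack
        idx = target_id
    else:                                           # evasion (dodging) attack
        others = [i for i in range(num_ids) if i != true_id]
        idx = int(rng.choice(others))
    label = np.zeros(num_ids, dtype=np.float32)
    label[idx] = 1.0
    return label
```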
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) Through the generator, the invention conveniently produces an eyeglass-shaped adversarial perturbation block that can be reproduced in the physical domain and misleads the face recognition system into recognizing one person as another person or as a specific other person. It supplies large numbers of high-quality adversarial training samples for face recognition systems, helps improve their robustness, better counters physical-domain adversarial attacks on them, and improves their security.
(2) The invention generates the eyeglass-shaped mask from the positions of the eye key points, and the added adversarial perturbation is confined to the eyeglass shape by the mask, so the resulting shape is adaptable and stable for the perturbation and can actually be printed in the physical domain for real physical attacks.
(3) When generating adversarial samples, the method takes into account the influence of varying illumination and printer color deviation on the attack, and enhances the data with random illumination via gamma transforms and contrast and brightness changes. This raises the attack success rate in the physical domain and makes the generated adversarial training samples better match real physical-domain conditions.
(4) The invention sets the generator's adversarial loss to a weighted sum of L_F, L_S and L_O, where L_F is the loss function of the face recognition network F under attack, L_S is a probability score loss associated with the last-layer classification output of F, and L_O is the loss of a common face recognition method or a combination thereof; this improves the success rate and confidence of the adversarial attack.
(5) Any deep network or traditional face recognition method can serve as the attack target and be plugged into the overall training framework; the adversarial perturbation generator is then trained against that method, so large numbers of adversarial samples can be generated for a specific face recognition method, giving the invention good generality.
Drawings
Fig. 1 is a schematic overall flow chart of the adversarial sample generation method for a face recognition system in the physical domain according to this embodiment;
FIG. 2 is a schematic flow chart of the data preprocessing step of this embodiment;
FIG. 3 is a schematic flow chart of the model training step of this embodiment;
FIG. 4 is a schematic flow chart of the model application step of this embodiment;
FIG. 5 is a schematic structural diagram of the generator of this embodiment;
FIG. 6 is a schematic diagram of the overall structure of the training network of this embodiment;
FIG. 7 is a schematic diagram of the overall structure of adversarial sample generation of this embodiment;
FIG. 8 shows adversarial samples generated by this embodiment and the corresponding recognition results.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Examples
This embodiment details the generation of adversarial samples for an impersonation attack, taking the PubFig face recognition database as an example. The PubFig database consists of 58797 pictures of 200 different person IDs, each ID containing 300 pictures on average. Using 8 IDs from PubFig and two laboratory person IDs, a VGGFace10 network based on the VGG16 structure is trained as the attacked face recognition system; the 10 IDs are numbered 00 to 09, the system's input is a face region image of resolution 112 × 112 × 3, and its output is the ID corresponding to the face. The pictures of each PubFig ID are randomly split into training, validation and test sets in the ratio 7:2:1.
The two laboratory IDs' image samples come from the real samples of ID No. 2 and ID No. 61 in the SSIJRI face spoofing detection database; each ID comprises 30 videos with different backgrounds captured by 5 kinds of devices, at a frame rate of 30 Hz and a duration of 15 seconds. In this embodiment one frame is taken every 40 frames, finally yielding about 350 images per ID, which are then split by capture device into training, validation and test sets in the ratio 3:1:1. VGGFace10 is trained on the training-set data and achieves an accuracy of 97.94% on the test set.
The training and validation data of ID No. 08 serve as training data for the adversarial perturbation generator, which is trained to attack each of the other 9 IDs in turn. The trained generator then builds adversarial samples using all data of ID No. 08 as application data, and testing is performed with the adversarial samples generated from the test-set data.
The experiments were performed on a Windows 10 system using Python 3.6.7, Keras 2.2.4, TensorFlow 1.12.0 (as the Keras backend), CUDA 9.0.0 and cuDNN 7.1.4.
As shown in fig. 1, the embodiment discloses a method for generating a confrontation sample for a face recognition system in a physical domain, which includes a data preprocessing step, a model training step and a model application step.
As shown in fig. 2, the data preprocessing specifically includes the following steps:
the first step, determining the input resolution of the attacked face recognition system: H × W × C, where H, W and C are the height, width and number of color channels of the face region image; in this embodiment H = 112, W = 112, C = 3;
secondly, acquiring each sample's face region and eye key points;
all samples in the training or application data are taken as original samples X_O, and a face detection algorithm based on an SSD (Single Shot MultiBox Detector) network acquires the face position R_F in each original sample X_O;
for each sample, the face bounding box is represented by a list [x_min, x_max, y_min, y_max], where (x_min, y_min) and (x_max, y_max) are the coordinates of the box's upper-left and lower-right corners, respectively;
the face region is cropped according to each sample's bounding box and resized to 112 × 112 × 3 with an interpolation algorithm, giving the network's face input samples X_F (called XT_F for training data and XA_F for application data); a face key-point detection algorithm then locates the eye key points of X_F, and this embodiment can use a key-point detector based on the 2DFAN network;
in this embodiment, the interpolation algorithm may be Lanczos, nearest-neighbor, linear, or cubic interpolation, resizing the cropped face region to 112 × 112 × 3 to obtain the face input samples X_F.
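The crop-and-resize step can be sketched with NumPy alone; nearest-neighbor interpolation is shown here as a self-contained stand-in for the Lanczos/linear/cubic options listed above (in practice a library call such as OpenCV's resize would provide all of them). The function name is illustrative.

```python
import numpy as np

def crop_and_resize(img, box, out_hw=(112, 112)):
    """Crop a face bounding box and resize it with nearest-neighbor sampling.

    img: HxWx3 uint8 image; box: (x_min, y_min, x_max, y_max);
    out_hw: target (height, width), 112x112 in this embodiment.
    """
    x0, y0, x1, y1 = box
    face = img[y0:y1, x0:x1]
    h, w = face.shape[:2]
    oh, ow = out_hw
    rows = np.arange(oh) * h // oh     # nearest source row per output row
    cols = np.arange(ow) * w // ow     # nearest source column per output column
    return face[rows][:, cols]
```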
Third, construct the eyeglass-shaped adversarial-perturbation mask.
In this embodiment the eyeglass-shaped mask consists of a spectacle frame and a spectacle bridge. The distances L_1, L_2 between frame and eyes and the widths L_3, L_4 of the frame and bridge are determined from the face region resolution H × W × C (the formulas are rendered as an image in the original publication); the purpose is to keep the size of the glasses and their distance from the eyes reasonable.
In this embodiment, a rectangular inner frame is generated from the positions of the eye key points so that its distances to the inner-corner, outer-corner, upper-eyelid and lower-eyelid key points are all 3 pixels; the inner frame is expanded outward by 5 pixels to obtain a rectangular outer frame, and the region between them is taken as the spectacle-frame mask. The horizontal midpoints of the two frame masks are connected by straight lines of width 5 pixels to obtain the spectacle-bridge mask. Frame and bridge together form the eyeglass-shaped adversarial-perturbation mask, denoted M (MT_F for training data and MA_F for application data). Pixels at mask positions have value 1 and all other positions value 0. Generating the mask from the eye key points ensures the stability and generality of the eyeglass shape and fixes the position where the glasses are placed.
as shown in fig. 3, the specific steps of model training are as follows:
the method comprises the following steps of firstly, constructing a generator for generating an anti-disturbance;
as shown in fig. 5, the generator of this embodiment converts input D-dimensional Gaussian white noise into a rectangular anti-disturbance matrix with output resolution H × W × C and values in [-1, 1]; the dimension D is chosen reasonably according to the generator's output resolution, and any D-dimensional Gaussian white noise sequence may be used. An optional generator structure follows the generator of DCGAN (deep convolutional generative adversarial network): the input 100-dimensional Gaussian white noise sequence passes through a fully connected layer of N neurons and a batch normalization layer, where the number of neurons of the first fully connected layer is determined from the output resolution as

N = (H/16) × (W/16) × 128

and the N-dimensional feature vector is then reshaped into a feature map with resolution (H/16) × (W/16) and 128 channels;
the generator of this embodiment has as input a 100-dimensional Gaussian white noise sequence. Since the input resolution of the attacked face recognition system is 112 × 112 × 3, the number of neurons of the first fully connected layer is N = 6272; a reshape layer then turns the 6272-dimensional feature vector into a feature map with resolution 7 × 7 and 128 channels. This is followed by four transposed-convolution (deconvolution) layers with 3 × 3 kernels and 64, 64, 32 and 3 output channels respectively, the first three each followed by a batch normalization layer and using a ReLU activation, and the last using a Tanh activation. The final output is an anti-disturbance matrix with resolution 112 × 112 × 3 and values in [-1, 1];
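This generator can be sketched in PyTorch; a sketch under the assumption of 3 × 3 transposed convolutions with stride 2, which reproduce the 7 → 14 → 28 → 56 → 112 upsampling (class and parameter names are illustrative):

```python
import torch
import torch.nn as nn

class PerturbGenerator(nn.Module):
    # DCGAN-style sketch: 100-d white-noise vector -> 112x112x3
    # perturbation in [-1, 1]. Channel widths (64/64/32/3) follow the
    # embodiment's description; exact hyperparameters are assumptions.
    def __init__(self, z_dim=100, h=112, w=112):
        super().__init__()
        self.h0, self.w0 = h // 16, w // 16           # 7 x 7 for 112 x 112
        n = self.h0 * self.w0 * 128                   # N = 6272 neurons
        self.fc = nn.Sequential(nn.Linear(z_dim, n),
                                nn.BatchNorm1d(n), nn.ReLU())

        def up(cin, cout, last=False):
            # 3x3 transposed conv, stride 2 -> exact 2x spatial upsampling
            layers = [nn.ConvTranspose2d(cin, cout, 3, stride=2,
                                         padding=1, output_padding=1)]
            layers += [nn.Tanh()] if last else [nn.BatchNorm2d(cout), nn.ReLU()]
            return layers

        self.deconv = nn.Sequential(*up(128, 64), *up(64, 64),
                                    *up(64, 32), *up(32, 3, last=True))

    def forward(self, z):
        x = self.fc(z).view(-1, 128, self.h0, self.w0)
        return self.deconv(x)                         # (B, 3, 112, 112)
```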
secondly, generating a face confrontation sample by using the generator and the glasses mask;
the eyeglass-shape mask MT_F is applied respectively to the anti-disturbance matrix generated by the generator and to the face input sample XT_F: the elements of the disturbance matrix inside the mask are multiplied by 255 and the elements outside the mask are set to zero, yielding the eyeglass-shaped anti-disturbance; at the same time, the pixel values of the face input sample XT_F inside the mask are zeroed while the remaining pixel values are retained. The two results are added, rounded, and truncated to the range [0, 255] to obtain the face-region confrontation sample;
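The masking and composition step can be sketched as follows (function and variable names are illustrative; `delta` is the generator output in [-1, 1] and `mask` the eyeglass mask of 0/1 values):

```python
import numpy as np

def compose_adv_face(face, delta, mask):
    # Inside the mask: perturbation scaled from [-1, 1] to [-255, 255];
    # outside the mask: zero. Face pixels are zeroed inside the mask,
    # the two parts are summed, rounded, and truncated to [0, 255].
    glasses = np.where(mask[..., None] == 1, delta * 255.0, 0.0)
    base = np.where(mask[..., None] == 1, 0.0, face.astype(np.float64))
    adv = np.rint(base + glasses)
    return np.clip(adv, 0, 255).astype(np.uint8)
```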
thirdly, simulating illumination change to enhance data;
a contrast-and-brightness transformation or a gamma transformation is randomly chosen with probability 0.5 to simulate illumination changes in the physical domain for data enhancement, obtaining the confrontation samples XT_A used for training;
The contrast and brightness transformation method carries out linear transformation on each pixel of the confrontation sample:
v′=v×a+b
where v and v′ denote the original and transformed pixel values respectively, a determines the contrast and b the brightness; a is randomly selected as a decimal with a precision of 3 in [0.50, 1.50], b is randomly selected as an integer in [−20, 20], and the pixel values are finally truncated to [0, 255];
the gamma conversion method simulating illumination change carries out nonlinear conversion on each pixel of the confrontation sample:
v′=v γ
where v and v′ denote the original and transformed pixel values respectively, and γ is randomly selected as a decimal with a precision of 3 in [0.50, 1.50];
in the embodiment, the data enhancement is carried out by using an image processing method for simulating illumination change in a physical domain, so that the robustness of the confrontation sample on the illumination change is improved, the success rate of the confrontation sample in the physical domain is increased, and the generated confrontation sample has practical significance;
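The two augmentations can be sketched as follows, a sketch under the embodiment's parameter ranges. Note that the gamma transform here is applied on pixels normalised to [0, 1], a common convention; the text states v′ = v^γ directly, which on 0–255 values would overflow:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_illumination(img):
    # With probability 0.5 apply contrast/brightness v' = v*a + b,
    # otherwise gamma v' = v**gamma on pixels normalised to [0, 1].
    x = img.astype(np.float64)
    if rng.random() < 0.5:
        a = round(rng.uniform(0.50, 1.50), 3)   # contrast
        b = int(rng.integers(-20, 21))          # brightness
        x = x * a + b
    else:
        gamma = round(rng.uniform(0.50, 1.50), 3)
        x = (x / 255.0) ** gamma * 255.0        # gamma on [0,1] scale
    return np.clip(np.rint(x), 0, 255).astype(np.uint8)
```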
fourthly, constructing an integral training network;
the confrontation samples XT_A used for training are input into the face recognition system VGGFace10, which serves as the attack object, to construct the overall training network. As shown in FIG. 6, in the overall training network the generator produces an anti-disturbance matrix, which together with the mask and the original face-region samples forms training confrontation samples; after enhancement with the simulated-illumination-change data augmentation, the confrontation samples finally used for training are obtained and input into the face recognition system VGGFace10, completing the construction of the overall training network. The network parameters are initialized with the He method. In this embodiment any deep network or traditional face-spoofing-detection method can serve as the attack object and be plugged into the overall training framework, so the method is universal;
fifthly, constructing a loss function of the network countermeasure training;
the confrontation loss function of the generator is set to:
L_A = L_F + 0.1 × L_S + 0.01 × L_O
in this embodiment, L_F is the cross-entropy loss function used in training the attack object VGGFace10:
L_F = -(1/n) Σ_{i=1..n} Σ_{j=1..m} y_{i,j} log(ŷ_{i,j})

where n denotes the number of samples in a batch sent to the network simultaneously for training, and y_i and ŷ_i denote the true and predicted values of the i-th sample respectively; both are m-dimensional vectors, m is the number of face classes recognized by the network, and y_i is one-hot encoded. In this embodiment, n = 64 and m = 10;
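The batch cross-entropy over one-hot labels can be computed as (an illustrative sketch; predictions are assumed to be softmax probabilities):

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    # L_F = -(1/n) * sum_i sum_j y_ij * log(p_ij) for one-hot y_true
    # and predicted probabilities y_pred, both of shape (n, m).
    p = np.clip(y_pred, eps, 1.0)   # avoid log(0)
    return float(-np.mean(np.sum(y_true * np.log(p), axis=1)))
```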
L_S is a probability score loss function related to the output of the last layer of the network F, whose main purpose is to raise the confidence of the counterattack. For an evasion attack, L_S lowers the probability score of the correct class and is defined as:

L_S = (1/n) Σ_{i=1..n} F_x(M(x_i, G(z_i)))

For an impersonation attack, L_S raises the probability score of the impersonated class and is defined as:

L_S = -(1/n) Σ_{i=1..n} F_t(M(x_i, G(z_i)))
where x is an undisturbed sample, z is a Gaussian white noise sequence, G(·) is the output of the generator, and M(x, G(z)) is the face confrontation sample obtained by adding the masked generator output to the original face-region sample and enhancing it with the simulated illumination. F_i denotes the score (confidence) with which the network F classifies its input as class i, and F_x and F_t denote the scores of the correct class of the input sample x and of the target class, respectively. L_O is a face recognition method loss or a combination thereof, used to further raise the success rate and confidence of the counterattack; in this embodiment, L_O is chosen as a cosine-similarity loss function computed on the output of the penultimate layer of the network F. For an evasion attack it is defined as:
L_O = (1/n) Σ_{i=1..n} (e_{x,i} · e_{a,i}) / (‖e_{x,i}‖ ‖e_{a,i}‖)
for impersonation attacks, the definition is:
L_O = -(1/n) Σ_{i=1..n} (e_{x,i} · e_{a,i}) / (‖e_{x,i}‖ ‖e_{a,i}‖)
where x and a denote an original sample and the corresponding anchor sample, the anchor sample belonging to the same class as the original sample; e denotes the output vector of the penultimate layer of the network F, and e_x, e_a denote the output vector of the confrontation sample derived from original sample x and of the corresponding anchor sample, respectively; n is the number of samples in a batch sent to the network simultaneously for training, and m is the number of classes recognized by the system;
the L_O of the present embodiment may also employ an L-softmax loss function, an A-softmax loss function, a CosFace loss function, an ArcFace loss function, or a combination of these functions;
this embodiment sets the confrontation loss function of the generator to a combination of L_F, L_S and L_O in order to raise the success rate of the counterattack and its confidence as much as possible, where L_F is the loss function of the face recognition network F to be attacked, aimed at reproducing the training process of F so as to counter it;
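Numerically, the combined generator loss with this embodiment's weights can be sketched as follows; the signs of L_S and L_O are assumptions consistent with the descriptions above (for impersonation, minimising L_A should raise the target-class score and the cosine similarity to the target anchor embedding; for evasion the signs flip):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def generator_loss(l_f, target_scores, emb_adv, emb_anchor, impersonate=True):
    # L_A = L_F + 0.1 * L_S + 0.01 * L_O with this embodiment's weights.
    l_s = float(np.mean(target_scores))                 # last-layer scores
    l_o = float(np.mean([cosine(a, b)                   # penultimate-layer
                         for a, b in zip(emb_adv, emb_anchor)]))
    if impersonate:
        l_s, l_o = -l_s, -l_o                           # assumed sign flip
    return l_f + 0.1 * l_s + 0.01 * l_o
```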
sixthly, constructing the print score loss function of the network, which measures the difference between the anti-disturbance color values and the color values that the printer can print;
in order to improve the success probability of physical-domain attacks, the influence of the printer's color deviation on the confrontation sample is considered, and the print score loss function is defined as:

L_P = Σ_{p_G ∈ p} Π_{c_P ∈ C_A} | p_G − c_P |

where p_G is the color value of a pixel in the eyeglass-shaped disturbance p, and c_P is a color value in the set C_A of colors printable by the printer.
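A sketch of this print score, assuming the product-of-distances form of the non-printability score from which such losses are commonly derived (whether the patent's image formula uses a product or, e.g., a minimum is not recoverable here):

```python
import numpy as np

def print_score_loss(pert_pixels, printable):
    # For each perturbation pixel (rows of `pert_pixels`, RGB in [0,1]),
    # take the product over the printable palette of its distances to
    # each printable colour; a term vanishes when the pixel matches a
    # printable colour exactly, so the loss rewards printable colours.
    d = np.linalg.norm(pert_pixels[:, None, :] - printable[None, :, :],
                       axis=-1)                  # (n_pixels, n_colours)
    return float(np.sum(np.prod(d, axis=1)))
```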
Seventhly, constructing a total loss function of the network;
the total loss function of the network is set to:
L = L_A + λ × L_P
where λ is the proportional weight of the print score loss function L_P;
step eight, setting a model optimization algorithm;
in this embodiment, parameter optimization uses the Adam algorithm with learning rate 5 × 10⁻⁵, first-order smoothing parameter β₁ = 0.50, second-order smoothing parameter β₂ = 0.999 and ε = 10⁻⁸. The model optimization algorithm of this embodiment may also use optimization algorithms such as SGD, AdaGrad and RMSprop;
ninthly, training and optimizing generator parameters;
the parameters of the face recognition network VGGFace10 are frozen and the generator parameters are unfrozen. The face input samples XT_F are trained in batches: 64 face input samples are taken at a time, the masks corresponding to the samples are obtained from the eyeglass-shape masks MT_F, and 64 Gaussian white noise sequence samples are taken simultaneously as generator input. The generator produces 64 rectangular anti-disturbances, from which 64 confrontation samples for training are obtained through the face-confrontation-sample generation process and the simulated-illumination-change data enhancement process. These are sent into the face recognition network VGGFace10, the sample label values are set according to the attack target, and the generator parameters are adjusted by minimizing the total loss function L;
the sample label values in this embodiment are set as follows: for an evasion attack, the sample label value is randomly set to an incorrect sample label; for an impersonation attack, the label value is set to the label of the impersonation target;
step ten: repeat the operation of step nine until the network parameters are stable; the model training is then complete, and the generator model and its weights are saved.
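One optimisation step of this training procedure can be sketched with toy stand-ins for the generator and the frozen recogniser (all model definitions here are illustrative; only the freezing, batch size, target labels and Adam hyperparameters follow the text):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(100, 3 * 112 * 112), nn.Tanh())   # toy generator
F = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 10)) # toy recogniser
for p in F.parameters():
    p.requires_grad_(False)                       # freeze the attacked network

opt = torch.optim.Adam(G.parameters(), lr=5e-5, betas=(0.50, 0.999), eps=1e-8)
z = torch.randn(64, 100)                          # 64 noise samples per batch
target = torch.full((64,), 2, dtype=torch.long)   # impersonation target ID
adv = G(z).view(64, 3, 112, 112)                  # stands in for the masked,
logits = F(adv)                                   # illumination-enhanced faces
loss = nn.functional.cross_entropy(logits, target)
opt.zero_grad()
loss.backward()
opt.step()                                        # one generator update
```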
As shown in fig. 4, the specific steps of the model application are as follows:
firstly, obtaining the original resolution of a human face according to the data preprocessing step;
a face-region image XA_F of 112 × 112 × 3 resolution, the face position frame R_F and the key points of both eyes are acquired for each original sample according to the second step of the data preprocessing step, and the eyeglass-shape mask MA_F is obtained according to the data preprocessing step. The original resolution h_O × w_O of the face-region image is obtained from the face position frame R_F: specifically, for each sample the face position frame is [x_min, x_max, y_min, y_max], so its original resolution is h_O × w_O with h_O = y_max − y_min and w_O = x_max − x_min;
Secondly, constructing an application network structure;
the model and weights of the trained generator are loaded, and the generator, together with the eyeglass-shape mask input, the Gaussian white noise sequence input and the face-region image input, forms the application network. As shown in fig. 7, the Gaussian white noise sequence passes through the generator to produce an anti-disturbance matrix, which is combined with the mask to form the eyeglass-shaped anti-disturbance and with the original face-region sample to form the face-region confrontation sample. The network then splits into two branches: first, the face-region confrontation sample is resized to the original resolution and pasted over the face region of the original sample to obtain the confrontation sample of the original sample; second, the eyeglass-shaped anti-disturbance is printed with a printer to obtain a reproduced counterattack eyeglasses sample in the physical domain, which the person corresponding to the ID wears to obtain the physical-domain confrontation sample;
thirdly, acquiring a glasses shape countermeasure disturbance sample and a face area countermeasure sample;
the 100-dimensional Gaussian white noise sequence is sent into the trained generator; the output rectangular anti-disturbance is combined with the eyeglass-shape mask and the face-region image, and the eyeglass-shaped anti-disturbance and the face-region confrontation sample of 112 × 112 × 3 resolution are obtained according to the second step of the model training step;
fourthly, acquiring a digital domain confrontation sample;
the resolution of the face-region confrontation sample is adjusted to h_O × w_O using Lanczos interpolation, and the result is then pasted over the face region of the original sample according to the face position frame to obtain the confrontation sample of the original sample;
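The resize-and-paste step can be sketched with Pillow's Lanczos resampling (the function name and box convention are illustrative):

```python
import numpy as np
from PIL import Image

def paste_back(original, adv_face, box):
    # box = (x_min, x_max, y_min, y_max): the face position frame. The
    # 112x112 adversarial face region is resized to the original
    # h_O x w_O with Lanczos interpolation and pasted over the face
    # region of the original sample.
    x_min, x_max, y_min, y_max = box
    resized = Image.fromarray(adv_face).resize(
        (x_max - x_min, y_max - y_min), Image.LANCZOS)
    out = original.copy()
    out[y_min:y_max, x_min:x_max] = np.asarray(resized)
    return out
```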
fifthly, acquiring a physical domain confrontation sample;
the eyeglass-shaped anti-disturbance is printed with a printer to obtain a reproduced counterattack eyeglasses sample in the physical domain; the person corresponding to the ID wears the counterattack eyeglasses sample to obtain the physical-domain confrontation sample;
sixthly, repeating the third step, the fourth step and the fifth step to obtain digital domain confrontation samples and physical domain confrontation attack glasses samples of all application data;
in this embodiment, subject No. 08 wears the printed counterattack glasses in the physical domain and attacks VGGFace10 in the real world; some of the attack samples successfully impersonate Nos. 00-07 and No. 09. FIG. 8 shows the results of No. 08 impersonating No. 02 and No. 09: the rectangles in the sample images are face position frames, above each rectangle is the recognition result ID of VGGFace10, and below it the confidence of the recognition result. Specifically, column 1 is the digital-domain original sample of No. 08, which VGGFace10 correctly recognizes as No. 08. Column 2 shows digital-domain confrontation samples of No. 08 with the eyeglass-shaped anti-disturbance added; VGGFace10 misrecognizes them as No. 02 and No. 09 respectively, so the attacks succeed. Column 3 shows the physical-domain original sample of No. 08 and a sample of No. 08 wearing blank glasses; the latter serves as a blank control, and VGGFace10 correctly recognizes both as No. 08. Column 4 shows physical-domain counterattack samples of No. 08 wearing the printed physical-domain counterattack glasses; VGGFace10 misrecognizes them as No. 02 and No. 09 respectively, so the attacks succeed. Column 5 shows the printed physical-domain counterattack glasses.
This embodiment generates confrontation samples impersonating Nos. 00-07 and No. 09 for all image samples of ID No. 08 in the data set, and then uses the VGGFace10 face recognition network to identify the IDs of the confrontation samples generated from the test set. In the digital domain, the success rate is defined as the ratio of the number of samples misclassified by VGGFace10 as the target class to the total number of samples; the experimental results are shown in Table 1 below;
Table 1. Experimental results

(The table is given as an image in the original document.)
The experimental results show that the confrontation samples generated by the method are of high quality: VGGFace10 outputs the impersonation target's classification result with high probability, the impersonation attack success rate is high, and the attacks also have a realistic chance of succeeding in the physical domain, which proves the effectiveness of the method.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A method for generating a confrontation sample aiming at a face recognition system in a physical domain is characterized by comprising a data preprocessing step, a model training step and a model application step;
the data preprocessing step comprises:
determining the input resolution of the face region image of the attacked face recognition system;
obtaining original samples X_O, cutting out the face region according to each sample's face position frame, adjusting the resolution to the face-region-image input resolution with an interpolation algorithm to obtain face input samples X_F, with the model training data samples set to XT_F and the model application data samples set to XA_F; acquiring the coordinates of the key points of the two eyes of the face with a face detection algorithm;
establishing an anti-disturbance mask M according to the position coordinates of the key points of the two eyes of the face, with the model training data anti-disturbance mask set to MT_F and the model application data anti-disturbance mask set to MA_F;
The model training step comprises:
constructing a generator, wherein the input of the generator is a Gaussian white noise sequence, and the output of the generator is a rectangular anti-disturbance matrix;
generating, by the generator and the confrontational perturbation mask, a face confrontation sample;
carrying out data enhancement with an image processing method that simulates illumination changes in the physical domain, to obtain the face confrontation samples XT_A finally used for training;
inputting the face confrontation samples XT_A into the face recognition network F to be attacked, and constructing an overall training network;
constructing the confrontation loss function L_A of the generator, constructing the training-network print score loss function L_P, and constructing the training-network total loss function L;
setting a model optimization algorithm;
training and optimizing the generator parameters until the training network parameters are stable, and storing the model and the weight of the generator after the training is finished;
the model application step comprises:
acquiring the original resolution of the face subjected to the data preprocessing step;
loading a generator model and weight which are trained, and constructing an application network by a generator and anti-disturbance mask input, gaussian white noise sequence input and human face area image input;
acquiring the mask anti-disturbance and the face-region confrontation sample as in the model training step: the Gaussian white noise sequence is input into the trained generator, the trained generator outputs a rectangular anti-disturbance, the rectangular anti-disturbance is combined with the anti-disturbance mask and the face-region image, and the mask anti-disturbance and the face-region confrontation sample are obtained through the model training step;
obtaining digital domain countermeasure samples: adjusting the resolution of the face region countermeasure sample to the original resolution, and then covering the face region of the original sample according to the face position frame to obtain a digital domain countermeasure sample of the original sample;
acquiring a physical domain confrontation sample: and printing the mask countermeasure disturbance to obtain a physical domain countermeasure sample.
2. The method of claim 1, wherein the resolution is adjusted to the face region image input resolution by using an interpolation algorithm, and the interpolation algorithm is any one of a Lanczos interpolation algorithm, a nearest neighbor interpolation algorithm, a linear interpolation algorithm, or a Cubic interpolation algorithm.
3. The method for generating the confrontation sample for the human face recognition system in the physical domain according to claim 1, wherein the confrontation disturbance mask M is constructed according to the position coordinates of the key points of the two eyes of the human face, and comprises a spectacle frame mask and a spectacle cross beam mask;
the specific construction steps of the spectacle-frame mask are: generating a rectangular inner frame according to the position coordinates of the key points of the two eyes of the face, with the distances between the rectangular inner frame and the key points of the left and right eyes set to L_1 and L_2; expanding the rectangular inner frame outwards by a distance L_3 to obtain a rectangular outer frame, and taking the area between the rectangular outer frame and the rectangular inner frame as the spectacle-frame mask;
connecting the horizontal midpoints of the spectacle-frame masks of the two eyes with a straight line whose width is set to L_4, to obtain the spectacle-beam mask;
the values of L_1, L_2, L_3 and L_4 are calculated as functions of H and W, where H and W denote the height and width of the face-region image, respectively (the four calculation formulas are given as images in the original document).
4. The method for generating confrontation samples for a face recognition system in the physical domain according to claim 1, wherein the generator is constructed using the generator structure of a deep convolutional generative adversarial network: a 100-dimensional Gaussian white noise sequence is input and passes through a fully connected layer of N neurons and a batch normalization layer, where the number of neurons in the first fully connected layer is determined from the input resolution H × W of the face-region image as

N = (H/16) × (W/16) × 128

and the N-dimensional feature vector is then reshaped into a feature map with resolution (H/16) × (W/16) and 128 channels.
5. The method for generating confrontational samples for a face recognition system in physical domain according to claim 1, wherein the generation of the confrontational samples for the face by the generator and the confrontational disturbance mask comprises the following steps:
the model training data anti-disturbance mask MT_F is applied respectively to the rectangular anti-disturbance matrix generated by the generator and to the model training data sample XT_F: the elements of the rectangular anti-disturbance matrix inside the mask are multiplied by 255 and the elements outside the mask are set to zero to obtain the mask anti-disturbance; meanwhile, the pixel values of the model training data sample XT_F inside the mask are set to zero and the remaining pixel values are retained; the result is added to the mask anti-disturbance, rounded, and truncated to the range [0, 255] to obtain the face confrontation sample.
6. The method for generating confrontational sample for human face recognition system in physical domain according to claim 1, wherein the image processing method for simulating illumination change in physical domain is used for data enhancement, and the method comprises the following specific steps:
randomly adopting, for the generated face confrontation samples, a contrast-and-brightness transformation method or a gamma transformation method to simulate illumination changes in the physical domain for data enhancement, obtaining the confrontation samples XT_A finally used for training;
The contrast and brightness conversion method carries out linear conversion on each pixel of the face confrontation sample:
v′=v×a+b
where v and v′ denote the original and transformed pixel values respectively, a determines the contrast, and b determines the brightness;
the gamma transformation method carries out nonlinear transformation on each pixel of a face confrontation sample:
v′=v γ
where v and v′ denote the original and transformed pixel values respectively, and γ is randomly selected as a decimal with a precision of 3 in [0.50, 1.50].
7. The method for generating confrontation samples for a face recognition system in the physical domain according to claim 1, wherein the constructed confrontation loss function L_A of the generator is specifically:

L_A = λ_1 × L_F + λ_2 × L_S + λ_3 × L_O

where L_F is the loss function of the face recognition network F to be attacked, L_S is a probability score loss function related to the output of the last layer of the network F, L_O is a face recognition method loss or a combination thereof, and λ_1, λ_2, λ_3 denote the proportional weights of the 3 losses respectively;
the training-network print score loss function L_P is specifically:

L_P = Σ_{p_G ∈ p} Π_{c_P ∈ C_A} | p_G − c_P |

where p_G denotes the color value of a pixel in the mask anti-disturbance p, and c_P denotes a color value in the set C_A of colors printable by the printer;
the training-network total loss function L is constructed specifically as:

L = L_A + λ × L_P

where λ is the proportional weight of the print score loss function L_P.
8. The method for generating confrontational samples for human face recognition system in physical domain according to claim 1, wherein said method comprises setting a model optimization algorithm, said model optimization algorithm using any one of Adam algorithm, SGD algorithm, adaGrad algorithm or RMSprop algorithm.
9. The method for generating confrontational sample for human face recognition system in physical domain according to claim 1, wherein said training optimizes said generator parameters by the specific steps of:
the face recognition network F parameters are frozen and the generator parameters are unfrozen; the model training data samples XT_F are trained in batches: n face input samples are acquired each time, n masks corresponding to the samples are acquired from the model training data anti-disturbance mask MT_F, and n Gaussian white noise sequence samples are acquired as generator input; n rectangular anti-disturbances are generated by the generator, and n face confrontation samples for training are obtained through the face-confrontation-sample generation process and the simulated-illumination-change data enhancement process; these are sent into the face recognition network F, the sample label values are set according to the attack purpose, and finally the generator parameters are adjusted with the goal of minimizing the total loss function L.
10. The method for generating the confrontation sample in the physical domain for the human face recognition system according to claim 9, wherein the setting of the sample label value according to the attack purpose includes the following specific steps: for evasion attack, the sample tag value is randomly set to be an incorrect sample tag, and for impersonation attack, the tag value is set to be a tag of the impersonation target.
CN201911179565.XA 2019-11-27 2019-11-27 Confrontation sample generation method aiming at face recognition system in physical domain Active CN110991299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911179565.XA CN110991299B (en) 2019-11-27 2019-11-27 Confrontation sample generation method aiming at face recognition system in physical domain

Publications (2)

Publication Number Publication Date
CN110991299A CN110991299A (en) 2020-04-10
CN110991299B true CN110991299B (en) 2023-03-14

Family

ID=70087291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911179565.XA Active CN110991299B (en) 2019-11-27 2019-11-27 Confrontation sample generation method aiming at face recognition system in physical domain

Country Status (1)

Country Link
CN (1) CN110991299B (en)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582384B (en) * 2020-05-11 2023-09-22 西安邮电大学 Image countermeasure sample generation method
CN111340008B (en) * 2020-05-15 2021-02-19 支付宝(杭州)信息技术有限公司 Method and system for generation of counterpatch, training of detection model and defense of counterpatch
CN111639589B (en) * 2020-05-28 2022-04-19 西北工业大学 Video false face detection method based on counterstudy and similar color space
CN113808003B (en) * 2020-06-17 2024-02-09 北京达佳互联信息技术有限公司 Training method of image processing model, image processing method and device
CN111797732B (en) * 2020-06-22 2022-03-25 电子科技大学 Video motion identification anti-attack method insensitive to sampling
CN112001249B (en) * 2020-07-21 2022-08-26 山东师范大学 Method for canceling biological characteristics by generating sticker structure in physical world
CN111737691B (en) * 2020-07-24 2021-02-23 支付宝(杭州)信息技术有限公司 Method and device for generating confrontation sample
CN111881436B (en) * 2020-08-04 2024-08-06 公安部第三研究所 Method, device and storage medium for generating black box face anti-attack sample based on feature consistency
CN112115781B (en) * 2020-08-11 2022-08-16 西安交通大学 Unsupervised pedestrian re-identification method based on anti-attack sample and multi-view clustering
CN112000578B (en) * 2020-08-26 2022-12-13 支付宝(杭州)信息技术有限公司 Test method and device of artificial intelligence system
CN111931707A (en) * 2020-09-16 2020-11-13 平安国际智慧城市科技股份有限公司 Face image prediction method, device, equipment and medium based on countercheck patch
CN112116026A (en) * 2020-09-28 2020-12-22 西南石油大学 Countermeasure sample generation method, system, storage medium and device
CN112532601B (en) * 2020-11-20 2021-12-24 浙江大学 Terminal equipment safety analysis method based on bypass vulnerability
CN112306778B (en) * 2020-11-20 2022-05-10 浙江大学 Resource-limited terminal equipment safety monitoring method based on bypass
CN112529760A (en) * 2020-12-23 2021-03-19 山东彦云信息科技有限公司 Image privacy protection filter generation method based on anti-noise and cloud separation
CN112883874B (en) * 2021-02-22 2022-09-06 中国科学技术大学 Active defense method aiming at deep face tampering
CN112884802B (en) * 2021-02-24 2023-05-12 电子科技大学 Attack resistance method based on generation
CN112949618A (en) * 2021-05-17 2021-06-11 成都市威虎科技有限公司 Face feature code conversion method and device and electronic equipment
CN113239867B (en) * 2021-05-31 2023-08-11 西安电子科技大学 Mask area self-adaptive enhancement-based illumination change face recognition method
CN113221858B (en) * 2021-06-16 2022-12-16 中国科学院自动化研究所 Method and system for defending face recognition against attack
CN114240732B (en) * 2021-06-24 2023-04-07 中国人民解放军陆军工程大学 Anti-patch generation method for attacking face verification model
CN113487015A (en) * 2021-07-07 2021-10-08 中国人民解放军战略支援部队信息工程大学 Countermeasure sample generation method and system based on image brightness random transformation
CN113537381B (en) * 2021-07-29 2024-05-10 大连海事大学 Human rehabilitation exercise data enhancement method based on countermeasure sample
CN113780123B (en) * 2021-08-27 2023-08-08 广州大学 Method, system, computer device and storage medium for generating countermeasure sample
CN113792791B (en) * 2021-09-14 2024-05-03 百度在线网络技术(北京)有限公司 Processing method and device for vision model
CN114363509B (en) * 2021-12-07 2022-09-20 浙江大学 Triggerable countermeasure patch generation method based on sound wave triggering
CN114333029A (en) * 2021-12-31 2022-04-12 北京瑞莱智慧科技有限公司 Template image generation method, device and storage medium
CN114972170B (en) * 2022-03-31 2024-05-14 华南理工大学 Occlusion-robust object detection method based on fisheye cameras in dense scenes
CN115131581B (en) * 2022-06-28 2024-09-27 中国人民解放军国防科技大学 Method and system for generating adversarial sample images based on color space components
CN114898450B (en) * 2022-07-14 2022-10-28 中国科学院自动化研究所 Face adversarial mask sample generation method and system based on a generative model
CN115391764A (en) * 2022-10-28 2022-11-25 吉林信息安全测评中心 Information security management system based on image recognition technology
CN116071797B (en) * 2022-12-29 2023-09-26 北华航天工业学院 Sparse face comparison adversarial sample generation method based on autoencoders

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986041A (en) * 2018-06-13 2018-12-11 浙江大华技术股份有限公司 Image restoration method, apparatus, electronic device and readable storage medium
CN109977841A (en) * 2019-03-20 2019-07-05 中南大学 Face recognition method based on adversarial deep learning networks
CN110309798A (en) * 2019-07-05 2019-10-08 中新国际联合研究院 Face spoofing detection method based on domain adaptive learning and domain generalization
CN110414428A (en) * 2019-07-26 2019-11-05 厦门美图之家科技有限公司 Method for generating a face attribute information recognition model
CN110443203A (en) * 2019-08-07 2019-11-12 中新国际联合研究院 Adversarial sample generation method for face spoofing detection systems based on generative adversarial networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10599951B2 (en) * 2018-03-28 2020-03-24 Kla-Tencor Corp. Training a neural network for defect detection in low resolution images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Performance analysis and comparison of five popular fake face video detection networks; Hu Yongjian et al.; Journal of Applied Sciences; 2019-09-30; Vol. 37, No. 5; pp. 590-608 *
Single Image Snow Removal via Composition; Zhi Li et al.; IEEE Access; 2019-03-07; Vol. 7; pp. 25016-25025 *

Also Published As

Publication number Publication date
CN110991299A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110991299B (en) Adversarial sample generation method for face recognition systems in the physical domain
CN110443203B (en) Adversarial sample generation method for face spoofing detection systems based on generative adversarial networks
CN111340214B (en) Method and device for training an adversarial attack model
CN111709409B (en) Face liveness detection method, apparatus, device and medium
CN108520202B (en) Adversarially robust image feature extraction method based on variational spherical projection
CN108229381B (en) Face image generation method and device, storage medium and computer equipment
CN110543846B (en) Multi-pose face image frontalization method based on generative adversarial networks
CN109858368B (en) Rosenbrock-PSO-based face recognition attack defense method
CN111783748B (en) Face recognition method and device, electronic equipment and storage medium
CN109214327B (en) Adversarial method against face recognition based on PSO
CN106295694B (en) Face recognition method for iterative re-constrained group sparse representation classification
CN108537743A (en) Face image enhancement method based on generative adversarial networks
CN109902667A (en) Face liveness detection method based on optical flow guided feature blocks and convolutional GRU
CN107909008A (en) Video target tracking method based on a multi-channel convolutional neural network and particle filter
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
CN112287973A (en) Digital image adversarial sample defense method based on truncated singular values and pixel interpolation
Ryu et al. Adversarial attacks by attaching noise markers on the face against deep face recognition
CN115331079A (en) Adversarial attack method against multi-modal remote sensing image classification networks
CN113435264A (en) Face recognition adversarial attack method and device based on black-box substitute model search
CN113420289B (en) Hidden poisoning attack defense method and device for deep learning models
CN113221388B (en) Method for generating adversarial samples for black-box deep models constrained by visual perception perturbation
CN113642003A (en) Security detection method for face recognition systems based on highly robust adversarial sample generation
CN111582202A (en) Intelligent course system
CN113902044B (en) Image target extraction method based on lightweight YOLOV3
CN115510986A (en) Adversarial sample generation method based on AdvGAN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant