CN110705652B - Adversarial example, and generation method, medium, apparatus and computing device thereof - Google Patents

Adversarial example, and generation method, medium, apparatus and computing device thereof

Info

Publication number
CN110705652B
CN110705652B (application CN201910988796.9A)
Authority
CN
China
Prior art keywords
image
fused
attacked
sample
attacked object
Prior art date
Legal status: Active
Application number
CN201910988796.9A
Other languages: Chinese (zh)
Other versions: CN110705652A
Inventor
萧子豪
Current Assignee
Beijing Real AI Technology Co Ltd
Original Assignee
Beijing Real AI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Real AI Technology Co Ltd filed Critical Beijing Real AI Technology Co Ltd
Priority to CN201910988796.9A priority Critical patent/CN110705652B/en
Publication of CN110705652A publication Critical patent/CN110705652A/en
Application granted granted Critical
Publication of CN110705652B publication Critical patent/CN110705652B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the present invention provides an adversarial example generation method, comprising: acquiring an image of an attacking object and an image of an attacked object, respectively; selecting a feature image to be fused according to the image of the attacked object; performing image fusion based on the image of the attacking object and the feature image; and obtaining an adversarial example based on the fused image. In addition, according to the disclosed technical scheme, the image-fusion-based adversarial example can be used as the initialization of an existing optimization-based generation method, markedly improving that method's attack success rate. Embodiments of the invention also provide an adversarial example generation apparatus, a medium, and a computing device.

Description

Adversarial example, and generation method, medium, apparatus and computing device thereof
Technical Field
Embodiments of the present invention relate to the technical field of computer vision, and in particular to an adversarial example and a generation method, medium, apparatus and computing device therefor.
Background
This section is intended to provide a background or context for the embodiments of the invention recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Adversarial example generation methods are a class of methods that attack machine learning models at test time. Existing methods can be divided into optimization-based and non-optimization-based methods.
Among the optimization-based methods, different adversarial example generation methods are distinguished mainly by their optimization algorithm and by the form of the perturbation. For common white-box and black-box attacks, the perturbation usually takes the form of noise linearly superimposed on a normal sample; the methods differ only in the optimization procedure used. An optimization-based method therefore needs to obtain, or run, a model identical or similar to the victim model; such a model is not easy to obtain, and running it places high demands on computing resources.
Existing non-optimization-based methods do not need to obtain or run a model identical or similar to the victim model, but they usually apply perturbations only as simple geometric patterns or image transformations, and so have difficulty attacking machine learning models that involve complex semantic information (such as face comparison and object detection).
Disclosure of Invention
In a first aspect of embodiments of the present invention, there is provided an adversarial example generation method, comprising:
acquiring an image of an attacking object and an image of an attacked object, respectively;
selecting a feature image to be fused according to the image of the attacked object;
performing image fusion based on the image of the attacking object and the feature image;
and obtaining an adversarial example based on the fused image.
In one example of this embodiment, the feature image is able to affect the output of a target model, the target model including an object detection model and an image recognition model.
In an embodiment, selecting the feature image to be fused according to the image of the attacked object comprises:
aligning the image of the attacked object with the image of the attacking object;
and selecting the feature image to be fused from the aligned image of the attacked object.
In an embodiment, the image of the attacked object is subjected to a perspective transformation so as to align it with the image of the attacking object.
In an embodiment, aligning the image of the attacked object with the image of the attacking object comprises:
acquiring key points on the image of the attacking object and on the image of the attacked object, respectively;
estimating a perspective transformation matrix based on the key points;
and performing a perspective transformation on the image of the attacked object based on the perspective transformation matrix, so as to align it with the image of the attacking object.
In an embodiment, a similarity transformation matrix is used as the perspective transformation matrix, and the image of the attacked object is transformed with it.
In an embodiment, a mask matrix is used to select the feature image to be fused from the aligned image of the attacked object.
In an embodiment, the size of the mask matrix matches that of the aligned image of the attacked object.
In an embodiment, an image fusion method is used to fuse the image of the attacking object with the feature image.
In an embodiment, obtaining an adversarial example based on the fused image comprises:
taking the fused image as the adversarial example; and/or
performing multiple iterations based on the fused image to generate the adversarial example; and/or
generating the adversarial example from the fused image using an optimization-based adversarial example generation method.
In an embodiment, performing multiple iterations based on the fused image to generate the adversarial example comprises:
acquiring images of at least one attacked object;
and repeating the following steps until all acquired images of attacked objects have been traversed:
aligning the image of the attacked object with the fused image;
selecting a feature image to be fused from the aligned image of the attacked object;
and performing image fusion based on the fused image and the feature image.
In an embodiment, a mask matrix is used to select a region to be optimized from the fused image, and then an optimization-based generation method is used to generate the adversarial example from the fused image.
In an embodiment, after the adversarial example is obtained, the method further comprises:
fabricating the adversarial example as a physical entity.
In a second aspect of embodiments of the present invention, there is provided an adversarial example generation apparatus, comprising:
an image acquisition module configured to acquire an image of an attacking object and an image of an attacked object, respectively;
a feature selection module configured to select a feature image to be fused according to the image of the attacked object;
an image fusion module configured to perform image fusion based on the image of the attacking object and the feature image;
and an adversarial example generation module configured to obtain an adversarial example based on the fused image.
In one example of this embodiment, the feature image is able to affect the output of a target model, the target model including an object detection model and an image recognition model.
In an embodiment, the feature selection module comprises:
an image alignment unit configured to align the image of the attacked object with the image of the attacking object;
and a feature selection unit configured to select the feature image to be fused from the aligned image of the attacked object.
In an embodiment, the image alignment unit is further configured to perform a perspective transformation on the image of the attacked object, so as to align it with the image of the attacking object.
In an embodiment, the image alignment unit comprises:
a key point acquisition subunit configured to acquire key points on the image of the attacking object and on the image of the attacked object, respectively;
an estimation subunit configured to estimate a perspective transformation matrix based on the key points;
and an image alignment subunit configured to perform a perspective transformation on the image of the attacked object based on the perspective transformation matrix, so as to align it with the image of the attacking object.
In an embodiment, a similarity transformation matrix is used as the perspective transformation matrix, and the image of the attacked object is transformed with it.
In an embodiment, a mask matrix is used to select the feature image to be fused from the aligned image of the attacked object.
In an embodiment, the size of the mask matrix matches that of the aligned image of the attacked object.
In an embodiment, an image fusion method is used to fuse the image of the attacking object with the feature image.
In an embodiment, the adversarial example generation module comprises:
a fusion-type adversarial example generation unit configured to take the fused image as the adversarial example; and/or
a multi-image-fusion adversarial example generation unit configured to perform multiple iterations based on the fused image to generate the adversarial example; and/or
an enhanced adversarial example generation unit configured to generate the adversarial example from the fused image using an optimization-based generation method.
In one example of this embodiment, the multi-image-fusion adversarial example generation unit is further configured to:
acquire images of at least one attacked object;
and repeat the following steps until all acquired images of attacked objects have been traversed, taking the finally fused image as the adversarial example:
aligning the image of the attacked object with the fused image;
selecting a feature image to be fused from the aligned image of the attacked object;
and performing image fusion based on the fused image and the feature image.
In an embodiment, the enhanced adversarial example generation unit is further configured to select a region to be optimized from the fused image using a mask matrix, and then generate the adversarial example from the fused image using an optimization-based generation method.
In an embodiment, the apparatus further comprises:
a physical sample fabrication module configured to fabricate the adversarial example as a physical entity.
In a third aspect of embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program for executing the adversarial example generation method according to any embodiment of the first aspect.
In a fourth aspect of embodiments of the present invention, there is provided a computing device comprising:
a processor;
and a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the adversarial example generation method according to any embodiment of the first aspect.
In a fifth aspect of embodiments of the present invention, there is provided an adversarial example, wherein the adversarial example is generated by the adversarial example generation method according to any embodiment of the first aspect;
or by the adversarial example generation apparatus according to any embodiment of the second aspect;
or by the computing device according to any embodiment of the fourth aspect.
Embodiments of the present invention provide an adversarial example and a generation method, apparatus, medium and computing device therefor. For machine learning models that perform classification, regression or representation learning in computer vision application scenarios, image fusion is used to fuse the important features of the attacked object's image onto the image of the attacking object, yielding an adversarial example. This adversarial example can be used to probe the vulnerability of machine learning models that involve complex semantic information (machine learning classification, regression or representation-learning models executing computer vision tasks), without obtaining or running a model identical or similar to the victim model, thereby saving computing resources. In addition, the invention uses the image-fusion-based adversarial example as the initialization of an optimization-based generation method, improving its initial state and markedly raising the attack success rate of existing optimization-based methods.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a schematic flowchart of an adversarial example generation method according to an embodiment of the present invention;
FIGS. 2a and 2b, read in sequence, together form a schematic flowchart of the adversarial example generation method of some embodiments, illustrated with a human face as the target object;
FIG. 3 is a schematic flowchart of an adversarial example generation method according to an embodiment of the invention;
FIG. 4 illustrates the results of the experimental validation of the method shown in FIG. 3;
FIG. 5 is a schematic flowchart of a multi-image-fusion adversarial example generation method according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of applying the fusion-type adversarial example to a physical-world attack according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an adversarial example generation apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a computing device according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to embodiments of the invention, an adversarial example and a generation method, medium, apparatus and computing device therefor are provided.
In this context, it is to be understood that the terms attack target, attacking object and attacker all refer to the same concept,
while victim target, attacked object and victim likewise refer to the same concept.
Moreover, any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
The adversarial example generation method provided by the invention can directly fuse the important features of the attacked object's image to obtain an adversarial example, without obtaining the target image recognition model.
The method can generate adversarial examples for various kinds of target objects (such as human faces, animals and plants, vehicles, and the like); in one embodiment of the invention, a human face is taken as the example.
An adversarial example generation method according to an exemplary embodiment of the present invention is described below with reference to FIG. 1. It should be noted that the above application scenarios are shown merely to ease the understanding of the spirit and principles of the present invention, and embodiments of the invention are not limited in this respect; rather, they may be applied to any applicable scenario.
In this embodiment, the method comprises:
Step S110, acquiring an image of an attacking object and an image of an attacked object, respectively.
Illustratively, the images of the attacking object and of the attacked object are first acquired. The images may be face images, animal images, vehicle images, or images of other objects, and they may be captured in real time by an image acquisition device (such as a camera) or taken from an image database.
Step S120, selecting a feature image to be fused according to the image of the attacked object.
the principle of the technical solution of the present embodiment is to utilize the property of a machine learning model, which is a good feature extractor, and to recognize or detect one image, mainly based on the features of the image. Therefore, through feature transplantation and fusion, the key features of the image of the attacking object and the image of the attacked object are generated as countermeasure samples, so that misjudgment can be caused to the machine learning model, and the purpose of vulnerability detection is achieved.
It should be noted that the object models (e.g., the object detection model, the image recognition model, the face recognition model, etc.) presented in the technical solution provided in the present embodiment relate to machine learning models that perform specific (e.g., classification, regression, and representation learning) tasks based on computer vision, that is, the application scenarios of the present embodiment should not be limited to the examples and/or models presented in the embodiments of the specification, and all the machine learning models that perform specific (e.g., classification, regression, and representation learning) tasks related to computer vision belong to the coverage of the technical solution provided in the present embodiment.
The feature image extraction method may be selected based on a specific recognition or detection object, for example, if the target object is a human face, the feature image may be extracted based on a current mainstream human face key point detection model, or if the target image is an automobile, the feature image may be extracted based on a feature extractor in an existing vehicle recognition model, which is not limited in this embodiment.
It should be noted that the feature image may be extracted automatically not only in the manner provided in this embodiment or in the following embodiments, but also in an interactive manner, for example, a user may select (intercept) a feature image to be fused from an image of an attacked object by using image editing software (e.g., Photoshop).
In addition, considering that the acquired image of the attacking object and the image of the attacked object may differ in size, so that the regions to be fused are misaligned, the two images must first be aligned. Specifically, in an embodiment, step S120 comprises:
aligning the image of the attacked object with the image of the attacking object.
In an embodiment, the image of the attacked object is subjected to a perspective transformation to align it with the image of the attacking object. Specifically, the two images may be aligned through the following steps:
acquiring key points on the image of the attacking object and on the image of the attacked object, respectively.
For face images, in this step a face image X_att of the attacking object and a face image X_vic of the attacked object are selected, and any trained face key-point detection model may then be used to detect the key-point positions on them. Denote by P_att ∈ R^(K×2) the coordinates of the key points of X_att on the image X_att, and by P_vic ∈ R^(K×2) the coordinates of the key points of X_vic on the image X_vic, where K is the number of key points output by the face key-point detection model. The trained model may be the dlib open-source package, the MTCNN model, or any other mainstream model or open-source program; this embodiment is not limited in this respect.
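By way of illustration only (this sketch is not part of the patent text; the landmark-model file name, helper name and image-loading details are assumptions), the key-point acquisition step might look as follows in Python using the dlib package mentioned above:

```python
import dlib
import numpy as np

# dlib's frontal face detector and a pre-trained 68-point landmark model
# (the model file must be obtained separately; K = 68 for this predictor).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_keypoints(image: np.ndarray) -> np.ndarray:
    """Return a (K, 2) array of face key-point coordinates for the first detected face."""
    faces = detector(image, 1)          # upsample once to help with small faces
    shape = predictor(image, faces[0])  # landmarks of the first detected face
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

# Images are assumed to be RGB uint8 arrays, e.g. loaded with dlib.load_rgb_image:
# P_att = detect_keypoints(X_att)
# P_vic = detect_keypoints(X_vic)
```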
Estimating based on the key points to obtain a perspective transformation matrix;
In the present embodiment, after the key points on the image of the attacking object and on the image of the attacked object are acquired, the perspective transformation matrix M_vic mapping the key points of the attacked object's face image onto the key points of the attacking object's face image is calculated (in one embodiment, M_vic is a 3 × 3 matrix). Specifically, the perspective transformation matrix may be estimated by a least-squares fitting method; it is understood that other available estimation methods may be adopted in other embodiments, which is not limited here.
A perspective transformation is then performed on the image of the attacked object based on the perspective transformation matrix, so as to align it with the image of the attacking object. It is understood that in other embodiments, other transformation matrices (for example a similarity transformation matrix, which may be obtained in the manner provided above and is not described again here) may be used to transform and align the corresponding images; this embodiment is not limited in this respect.
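A minimal OpenCV sketch of the estimation and alignment just described; calling cv2.findHomography with method=0 corresponds to a least-squares fit over all key-point correspondences (the helper and variable names are assumptions):

```python
import cv2
import numpy as np

def align_victim(X_vic: np.ndarray, P_vic: np.ndarray, P_att: np.ndarray,
                 out_shape: tuple) -> np.ndarray:
    """Warp the victim image so that its key points land on the attacker's key points."""
    # Least-squares estimate of the 3x3 perspective matrix M_vic mapping
    # victim key points onto attacker key points (method=0: plain least squares).
    M_vic, _ = cv2.findHomography(P_vic.astype(np.float32),
                                  P_att.astype(np.float32), method=0)
    h, w = out_shape[:2]
    return cv2.warpPerspective(X_vic, M_vic, (w, h))

# X_vic_a = align_victim(X_vic, P_vic, P_att, X_att.shape)
# For the similarity-transform variant mentioned above, a 4-DoF matrix from
# cv2.estimateAffinePartial2D followed by cv2.warpAffine could be used instead.
```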
For example, in this embodiment, the face image X_vic of the attacked object is aligned with the face image X_att of the attacking object to obtain the aligned image X_vic-a.
In this embodiment, the image alignment may be performed automatically, in the manner described above, from the image of the attacked object and the image of the attacking object, or it may be performed in response to a user operation. Specifically, the user may place the corresponding images in image editing software and use the tools it provides (image editing tools based on the principle of perspective transformation, etc.) to align the face image X_vic of the attacked object with the face image X_att of the attacking object and obtain the aligned image X_vic-a.
The feature image to be fused is then selected from the aligned image of the attacked object.
Specifically, in an embodiment, a mask matrix W_vic may be used to select the feature image to be fused from the aligned image of the attacked object. The mask matrix is set to size (h, w), where h and w are the height and width of the image X_vic-a; each element of W_vic takes a value in {0, 1}: elements equal to 1 mark the pixel positions to be fused, and elements equal to 0 mark the pixel positions not to be fused. The value of each element may be determined from the key points mentioned above or from the feature-image region detected via the key features, with pixel positions belonging to the key feature region marked as positions to be fused.
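One illustrative way to build such a mask is to fill the convex hull of a chosen subset of the aligned key points; which indices constitute the "key feature region" is purely an assumption here:

```python
import cv2
import numpy as np

def build_mask(h: int, w: int, region_pts: np.ndarray) -> np.ndarray:
    """Return an (h, w) mask W_vic with values in {0, 1}; 1 marks pixels to be fused."""
    mask = np.zeros((h, w), dtype=np.uint8)
    hull = cv2.convexHull(region_pts.astype(np.int32))
    cv2.fillConvexPoly(mask, hull, 1)  # fill the key-feature region with 1
    return mask.astype(np.float64)

# e.g. an eye/nose region built from the attacker-aligned key points
# (the index range 17:48 is an arbitrary illustration, not the patent's choice):
# W_vic = build_mask(X_vic_a.shape[0], X_vic_a.shape[1], P_att[17:48])
```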
In addition, although the embodiment above follows the described order of steps, in another embodiment the feature image may be extracted first and then aligned with the image of the region to be fused; the specific steps are similar to those described above and are not repeated here.
Step S130, performing image fusion based on the image of the attacking object and the feature image.
In an embodiment, image fusion may be performed by one of direct replacement, linear interpolation, or Poisson image editing. Taking linear interpolation as the example, the adversarial example generated by image fusion is computed by the following formula:

X_att-adv = (1 − W_vic) ⊙ X_att + W_vic ⊙ [(1 − η) X_att + η X_vic-a],

where η ∈ [0, 1] is a hyperparameter, supplied in advance, that weighs the proportion of the feature image against the image of the attacking object, and ⊙ denotes element-wise multiplication. In this formula, at element positions where W_vic is 0, only the value of the corresponding pixel of X_att is used; at positions where W_vic is 1, the values of the corresponding pixels of X_att and X_vic-a are linearly interpolated with weight η. When η = 1, linear interpolation degenerates to direct replacement. It is understood that other available image fusion methods may be adopted in other embodiments, and further operations, such as color histogram matching, may be added to improve the generated adversarial image; this embodiment is not limited in this respect.
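The linear-interpolation formula above translates directly into a few lines of NumPy; this is a sketch (the optional color histogram matching step is omitted):

```python
import numpy as np

def fuse(X_att: np.ndarray, X_vic_a: np.ndarray,
         W_vic: np.ndarray, eta: float) -> np.ndarray:
    """X_att-adv = (1 - W_vic) * X_att + W_vic * [(1 - eta) * X_att + eta * X_vic_a]."""
    W = W_vic[..., None] if X_att.ndim == 3 else W_vic  # broadcast mask over channels
    X_att_f = X_att.astype(np.float64)
    X_vic_f = X_vic_a.astype(np.float64)
    fused = (1.0 - W) * X_att_f + W * ((1.0 - eta) * X_att_f + eta * X_vic_f)
    return np.clip(fused, 0, 255).astype(np.uint8)

# X_att_adv = fuse(X_att, X_vic_a, W_vic, eta=0.8)  # eta = 1 reduces to direct replacement
```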
In step S140, an adversarial example is obtained based on the fused image.
In one example of this embodiment, the fused image itself may be used as the adversarial example.
Compared with the generation methods mentioned in the background, which can fuse only simple geometric patterns, this embodiment can fuse far more complex patterns, such as the local face images mentioned in the embodiments above.
To verify the performance of the adversarial examples generated in this embodiment, the inventors conducted an experiment in which two ordinary persons were selected as attacker and victim, respectively. Steps 1 to 6 in FIGS. 2a and 2b illustrate the generation of the adversarial examples in the experiment, and the effectiveness of the attack was verified on the face recognition model MobileFaceNet. Before the attack, the cosine similarity between the attacker image and the victim image was 0.24; after the attack, the cosine similarity between the fused adversarial example and the victim image was 0.36. The higher cosine similarity after the attack shows that the method misleads the face comparison model into treating the fusion-type adversarial example and the victim image as two similar face images, verifying the effectiveness of the invention.
In addition, the inventors designed and carried out an experiment to verify the performance of the above embodiment on targets other than faces, selecting an image of a bus as the victim target and an image of a truck as the attack target. FIG. 3 shows the steps of the experiment and FIG. 4 its results. In FIG. 4, the left image shows that the fused adversarial example makes the truck difficult to detect, while the right image shows that a normal truck image is easily detected. Where the normal truck image is correctly identified as the truck category with a confidence of 0.9, the fusion-type adversarial example is either incorrectly identified as the bus category with probability 0.98 or identified as the truck category with only a low probability of 0.43. The fusion-type adversarial example can thus effectively impair object detection.
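For reference, the cosine similarity reported in these experiments is computed between the embedding vectors the face recognition model produces for two images; a minimal sketch, in which the embed function standing in for MobileFaceNet's embedding extractor is an assumption:

```python
import numpy as np

def cosine_similarity(e1: np.ndarray, e2: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))

# sim_before = cosine_similarity(embed(X_att), embed(X_vic))      # e.g. 0.24
# sim_after  = cosine_similarity(embed(X_att_adv), embed(X_vic))  # e.g. 0.36
```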
In one embodiment, multiple iterations are performed based on the fused image to generate the adversarial example.
Specifically, images of at least one attacked object are acquired,
and the following steps are repeated until all acquired images of attacked objects have been traversed:
aligning the image of the attacked object with the fused image;
selecting a feature image to be fused from the aligned image of the attacked object;
and performing image fusion based on the fused image and the feature image.
Taking face images as a concrete example, as shown in FIG. 5: first, an attacker face image X_att and M victim face images X_vic^(1), …, X_vic^(M) are selected; then the attacker face image is edited iteratively to obtain the multi-image-fusion adversarial example X_att-adv^(M), the whole process iterating M times. In the m-th iteration, the method steps of the above embodiments are applied to the in-iteration adversarial example X_att-adv^(m−1) and the victim image X_vic^(m) to generate the in-iteration adversarial example X_att-adv^(m), by the formula:

X_att-adv^(m) = (1 − W_vic^(m)) ⊙ X_att-adv^(m−1) + W_vic^(m) ⊙ [(1 − η) X_att-adv^(m−1) + η X_vic-a^(m)],

with the initialization X_att-adv^(0) = X_att. Specifically, the method of the above embodiment is first applied to the attacker face image X_att and the first victim face image X_vic^(1) to obtain the first in-iteration adversarial example X_att-adv^(1), i.e., X_att-adv of the above embodiment. In each further iteration, the next victim face image X_vic^(m) is aligned with the current in-iteration adversarial example (in the second iteration, with X_att-adv^(1)) to obtain the feature image to be fused, and this feature image is then fused with the in-iteration adversarial example, in the manner described in the above embodiment, to update it. Because the adversarial example is updated continually across the iterations, different and diverse key features of the victim can be fused, which improves the success rate against the target model (a face comparison model) in the digital world.
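A compact sketch of this multi-image iteration, reusing the illustrative helpers from the earlier sketches (detect_keypoints, align_victim, build_mask, fuse); the choice of key-point subset for the mask remains an assumption:

```python
import numpy as np

def multi_image_fusion(X_att: np.ndarray, victims: list, eta: float) -> np.ndarray:
    """Fuse key features of M victim images into the attacker image, one per iteration."""
    X_adv = X_att                    # initialization: X_att-adv^(0) = X_att
    P_att = detect_keypoints(X_att)  # fusion leaves the face geometry unchanged,
                                     # so the attacker key points are computed once
    for X_vic in victims:            # m = 1 .. M
        P_vic = detect_keypoints(X_vic)
        X_vic_a = align_victim(X_vic, P_vic, P_att, X_att.shape)  # align to current example
        W_vic = build_mask(X_att.shape[0], X_att.shape[1], P_att[17:48])
        X_adv = fuse(X_adv, X_vic_a, W_vic, eta)                  # update in-iteration example
    return X_adv                     # X_att-adv^(M)
```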
In one embodiment, an adversarial example is generated from the fused image by an optimization-based generation method.
Specifically, in this embodiment, the adversarial examples generated by fusion in the above embodiments may be used as the initialization, enhancing an existing optimization-based generation method and further improving its attack success rate, as shown in FIG. 4. The scheme of this embodiment raises the success rate against the target model (a face comparison model) in the digital world; a specific implementation is as follows.
For example, first, a face image X_att of the attacking object and a face image X_vic of the attacked object are selected; then the method steps of the above example are used to generate a fusion-type adversarial example X_att-adv; finally, the region to be optimized is selected from X_att-adv. Specifically, a mask matrix W_opt represents the selected region. W_opt is set to size (h, w), where h and w are the height and width of the image X_att-adv; each element of W_opt takes a value in {0, 1}, with elements equal to 1 marking the pixel positions to be optimized and elements equal to 0 marking those not to be optimized.
In step 8, the image-fusion-based adversarial example is taken as the initialization, and an existing optimization-based generation technique is used to produce an enhanced fusion-type adversarial example. In one example of this embodiment, the momentum-based adversarial example generation method illustrates how the enhanced fusion-type adversarial example is generated. Its objective function is assumed to be:

X_att-adv-aug = argmax_X L(X),
s.t. ||X − X_att-adv||_∞ ≤ ε,
X_att-adv-aug ⊙ (1 − W_opt) = X_att-adv ⊙ (1 − W_opt),

where L(X) is any differentiable objective function describing the attack effect (e.g., the cosine similarity of face images), ||·||_∞ is the infinity norm, ε is the maximum allowed perturbation, and ⊙ is the element-wise product. To obtain the enhanced fusion-type adversarial example, the momentum and the in-iteration adversarial example are updated by:

g ← μ · g + ∇_X L(X) / ||∇_X L(X)||_1,
X ← proj(X + W_opt ⊙ α · sign(g)),

where g denotes the momentum, μ the decay rate of the momentum, ∇_X L(X) the gradient of the objective function with respect to the model input, ||·||_1 the L1 norm, X the variable representing the in-iteration adversarial example, proj the projection of the variable back into the constraint set, α the gradient step size, and sign the sign function.
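A PyTorch sketch of this momentum update under the stated constraints; the differentiable loss L (e.g., cosine similarity under a surrogate face model), the hyperparameter values, and the assumption that pixel values lie in [0, 1] are all illustrative:

```python
import torch

def momentum_attack(X_adv0: torch.Tensor, W_opt: torch.Tensor, L,
                    eps: float = 16 / 255, alpha: float = 2 / 255,
                    mu: float = 1.0, steps: int = 40) -> torch.Tensor:
    """Maximize L(X) s.t. ||X - X_adv0||_inf <= eps, updating only inside W_opt."""
    X = X_adv0.clone()
    g = torch.zeros_like(X)  # momentum accumulator
    for _ in range(steps):
        X = X.detach().requires_grad_(True)
        grad = torch.autograd.grad(L(X), X)[0]  # gradient of the objective w.r.t. the input
        g = mu * g + grad / grad.abs().sum()    # momentum update with L1-normalized gradient
        X = X + W_opt * alpha * torch.sign(g)   # ascend only in the region to be optimized
        # proj: back into the eps-ball around X_adv0 and the valid pixel range;
        # pixels outside W_opt are never moved, so the equality constraint holds.
        X = torch.clamp(X, X_adv0 - eps, X_adv0 + eps)
        X = torch.clamp(X, 0.0, 1.0)
    return X.detach()
```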
To verify the performance of the enhanced fusion-type adversarial examples, the inventors designed and conducted an experiment in which two ordinary persons were selected as attacker and victim, respectively. Steps 1 to 8 in FIGS. 2a and 2b illustrate the generation of the enhanced fusion-type adversarial examples in the experiment, and the effectiveness of the attack was verified on the face recognition model MobileFaceNet. The cosine similarity between the fusion-type adversarial example and the victim image was 0.36; between the enhanced fusion-type adversarial example and the victim image, 0.51; and between the victim image and an adversarial example obtained with the same region to be optimized but the existing optimization-based method alone, 0.47. The enhanced fusion-type adversarial example thus achieves higher cosine similarity than both the plain fusion-type adversarial example and the example produced by the existing method, misleading the face comparison model into treating it and the victim image as two similar face images and verifying the effectiveness of the invention.
In addition, it will be understood that the adversarial examples generated by the above embodiments can be fabricated as physical entities, so as to attack the target model in the real physical world.
As shown in FIG. 6, the adversarial example generated in the above embodiment may be printed out to attack a face comparison model in the physical world.
To verify the performance of the enhanced fusion-type adversarial examples in the physical world, the inventors designed and conducted an experiment in which two ordinary persons were selected as attacker and victim, respectively. Steps 1 to 9 in FIGS. 2a and 2b illustrate the generation of the physical-world enhanced fusion-type adversarial examples, and the effectiveness of the attack was verified on the face recognition model MobileFaceNet. Before the attack, the cosine similarity between the attacker and victim images was 0.24, while the cosine similarity between the physical-world enhanced fusion-type adversarial example and the victim image was 0.45. Since this is higher than the similarity achieved by the original attacker image, the face comparison model is misled into treating the physical-world enhanced fusion-type adversarial example and the victim image as two similar face images, verifying the effectiveness of the invention.
The apparatus provided by the invention is described below with reference to the drawings. FIG. 7 is a schematic structural diagram of an adversarial example generation apparatus according to an embodiment of the present invention, the apparatus comprising:
an image acquisition module 710 configured to acquire an image of an attacking object and an image of an attacked object, respectively;
a feature selection module 720 configured to select a feature image to be fused according to the image of the attacked object;
an image fusion module 730 configured to perform image fusion based on the image of the attacking object and the feature image;
and an adversarial example generation module 740 configured to obtain an adversarial example based on the fused image.
In one example of this embodiment, the feature image is able to affect the output of a target model, the target model including an object detection model and an image recognition model.
In an embodiment, the feature selection module 720 comprises:
an image alignment unit configured to align the image of the attacked object with the image of the attacking object;
and a feature selection unit configured to select the feature image to be fused from the aligned image of the attacked object.
In an embodiment, the image alignment unit is further configured to perform a perspective transformation on the image of the attacked object, so as to align it with the image of the attacking object.
In an embodiment, the image alignment unit comprises:
a key point acquisition subunit configured to acquire key points on the image of the attacking object and on the image of the attacked object, respectively;
an estimation subunit configured to estimate a perspective transformation matrix based on the key points;
and an image alignment subunit configured to perform a perspective transformation on the image of the attacked object based on the perspective transformation matrix, so as to align it with the image of the attacking object.
In an embodiment, a similarity transformation matrix is used as the perspective transformation matrix, and the image of the attacked object is transformed with it.
In an embodiment, a mask matrix is used to select the feature image to be fused from the aligned image of the attacked object.
In an embodiment, the size of the mask matrix matches that of the aligned image of the attacked object.
In an embodiment, image fusion is performed by one of direct replacement, linear interpolation, or Poisson image editing.
In an embodiment, the adversarial example generation module 740 comprises:
a fusion-type adversarial example generation unit configured to take the fused image as the adversarial example; and/or
a multi-image-fusion adversarial example generation unit configured to perform multiple iterations based on the fused image to generate the adversarial example; and/or
an enhanced adversarial example generation unit configured to generate the adversarial example from the fused image using an optimization-based generation method.
In one example of this embodiment, the multi-image-fusion adversarial example generation unit is further configured to:
acquire images of at least one attacked object;
and repeat the following steps until all acquired images of attacked objects have been traversed, taking the finally fused image as the adversarial example:
aligning the image of the attacked object with the fused image;
selecting a feature image to be fused from the aligned image of the attacked object;
and performing image fusion based on the fused image and the feature image.
In an embodiment, a mask matrix is used to select a region to be optimized from the fused image, and then an optimization-based generation method is used to generate the adversarial example from the fused image.
In an embodiment, the apparatus further comprises:
a physical sample fabrication module configured to fabricate the adversarial example as a physical entity.
FIG. 8 illustrates a block diagram of an exemplary computing device 80 suitable for implementing embodiments of the present invention; the computing device 80 may be a computer system or a server. The computing device 80 shown in FIG. 8 is only an example and should not limit the scope of use or the functionality of embodiments of the invention.
As shown in fig. 8, components of computing device 80 may include, but are not limited to: one or more processors or processing units 801, a system memory 802, and a bus 803 that couples various system components including the system memory 802 and the processing unit 801.
Computing device 80 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computing device 80 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 802 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 8021 and/or cache memory 8022. Computing device 80 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 8023 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, and typically called a "hard disk drive"). Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM or other optical media) may be provided. In these cases, each drive may be connected to the bus 803 by one or more data media interfaces. The system memory 802 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
Program/utility 8025, having a set (at least one) of program modules 8024, can be stored, for example, in system memory 802, and such program modules 8024 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. Program modules 8024 generally perform the functions and/or methodologies of embodiments of the present invention as described herein.
Computing device 80 may also communicate with one or more external devices 804 (e.g., keyboard, pointing device, display, etc.). Such communication may be through input/output (I/O) interfaces 805. Moreover, computing device 80 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 806. As shown in FIG. 8, the network adapter 806 communicates with other modules of the computing device 80, such as the processing unit 801, over the bus 803. It should be appreciated that although not shown in FIG. 8, other hardware and/or software modules may be used in conjunction with computing device 80.
The processing unit 801 executes various functional applications and data processing by running programs stored in the system memory 802, for example: acquiring an image of an attacking object and an image of an attacked object, respectively; selecting a feature image to be fused according to the image of the attacked object; performing image fusion based on the image of the attacking object and the feature image; and obtaining an adversarial example based on the fused image. The specific implementation of each step is not repeated here. It should be noted that although several units/modules or sub-units/sub-modules of the adversarial example generation apparatus are mentioned in the detailed description above, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more of the units/modules described above may be embodied in a single unit/module; conversely, the features and functions of one unit/module described above may be further divided into, and embodied by, multiple units/modules.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the adversarial example generation method of the above embodiments are performed.
Next, a computer-readable storage medium according to an exemplary embodiment of the present invention is described with reference to FIG. 9, which shows an optical disc 90 on which a computer program (i.e., a program product) is stored. When executed by a processor, the program implements the steps described in the above method embodiments, for example: acquiring an image of an attacking object and an image of an attacked object, respectively; selecting a feature image to be fused according to the image of the attacked object; performing image fusion based on the image of the attacking object and the feature image; and obtaining an adversarial example based on the fused image. The specific implementation of each step is not repeated here.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than to limit them, and the protection scope of the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions of some of their technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the invention and shall all be covered by its protection scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Through the above description, embodiments of the present invention provide the following schemes, without being limited thereto:
1. An adversarial example generation method, comprising:
acquiring an image of an attacking object and an image of an attacked object, respectively;
selecting a feature image to be fused according to the image of the attacked object;
performing image fusion based on the image of the attacking object and the feature image;
and obtaining an adversarial example based on the fused image.
2. The method of scheme 1, wherein the feature image is able to affect the output of a target model, the target model including an object detection model and an image recognition model.
3. The method according to scheme 2, wherein selecting the feature image to be fused according to the image of the attacked object includes:
aligning an image of the attacked object with an image of the attacking object;
and selecting a characteristic image to be fused from the aligned images of the attacked object.
4. The method of claim 3, wherein the image of the attacked object is perspective transformed to align the image of the attacked object with the image of the attacking object.
5. The method of scheme 4, wherein aligning the image of the attacked object with the image of the attacking object, comprises:
respectively acquiring key points on an image of an attacking object and an image of an attacked object;
estimating based on the key points to obtain a perspective transformation matrix;
and performing perspective transformation on the image of the attacked object based on the perspective transformation matrix so as to align the image of the attacked object with the image of the attacking object.
6. The method according to scheme 5, wherein a similarity transformation matrix is used as a perspective transformation matrix, and the image of the attacked object is subjected to perspective transformation by using the similarity transformation matrix.
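As a concrete reading of schemes 5 and 6, the sketch below estimates a transformation from matched key points and warps the attacked object's image onto the attacking object's image plane. It is a minimal Python/OpenCV illustration, not the patent's implementation: the function and parameter names are assumptions, and the key points may come from any detector (for face images, facial landmarks are a natural choice).

```python
import cv2
import numpy as np

def align_attacked_to_attacker(attacked_img, attacked_pts, attacker_pts,
                               out_size, use_similarity=True):
    """Warp the attacked object's image onto the attacking object's image plane.

    attacked_pts / attacker_pts: (N, 2) float32 arrays of matched key points.
    out_size: (width, height) of the attacking object's image.
    """
    if use_similarity:
        # Scheme 6: a similarity transform (rotation, scale, translation)
        # stands in for the perspective matrix; the 2x3 estimate is embedded
        # into a 3x3 homography so a single warp call covers both cases.
        M, _ = cv2.estimateAffinePartial2D(attacked_pts, attacker_pts)
        H = np.vstack([M, [0.0, 0.0, 1.0]])
    else:
        # Scheme 5: a full perspective (homography) matrix, estimated from
        # at least four matched key points.
        H, _ = cv2.findHomography(attacked_pts, attacker_pts, cv2.RANSAC)
    return cv2.warpPerspective(attacked_img, H, out_size)
```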
7. The method of any one of schemes 3 to 6, wherein a mask matrix is used to select the feature image to be fused from the aligned image of the attacked object.
8. The method of scheme 7, wherein the size of the mask matrix coincides with that of the aligned image of the attacked object.
9. The method of scheme 8, wherein image fusion is performed on the image of the attacking object and the feature image using an image fusion method.
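Schemes 7 to 9 admit a direct reading: a mask matrix of the same height and width as the aligned image picks out the feature region, and a blending operation merges it into the attacking object's image. The sketch below uses linear (alpha) blending as one plausible fusion method; the patent leaves the exact fusion method open, so this choice and all names are assumptions.

```python
import numpy as np

def select_and_fuse(attacker_img, aligned_attacked_img, mask):
    """Fuse the masked feature region of the aligned attacked image into
    the attacking object's image.

    mask: float array in [0, 1] with the same height/width as the aligned
    image (scheme 8); 1 keeps the attacked object's pixels, 0 the attacker's.
    """
    if mask.ndim == 2:
        mask = mask[..., None]  # broadcast the mask over the color channels
    # Scheme 7: the mask selects the feature image; scheme 9: blend it in.
    return mask * aligned_attacked_img + (1.0 - mask) * attacker_img
```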
10. The method of scheme 9, wherein obtaining an adversarial sample based on the fused image comprises:
taking the fused image as the adversarial sample; and/or
performing multiple iterations based on the fused image to generate the adversarial sample; and/or
generating the adversarial sample based on the fused image using an optimization-based adversarial sample generation method.
11. The method of scheme 10, wherein performing multiple iterations based on the fused image to generate the adversarial sample comprises:
acquiring at least one image of the attacked object;
repeatedly executing the following steps until all the acquired images of the attacked object have been traversed (a code sketch follows this scheme):
aligning the image of the attacked object with the fused image;
selecting the feature image to be fused from the aligned image of the attacked object;
performing image fusion based on the fused image and the feature image.
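A minimal sketch of the loop in scheme 11, reusing the two helpers above: each acquired image of the attacked object is aligned to the current fused result, and its masked features are blended in. Reusing the attacking object's key points as the alignment target in every round is an assumption, justified only by the fact that the fused image keeps the attacking object's geometry.

```python
def multi_image_fusion(attacker_img, attacker_pts, attacked_samples, out_size):
    """attacked_samples: iterable of (image, key_points, mask) triples,
    one triple per acquired image of the attacked object."""
    fused = attacker_img.astype(np.float32)
    for img, pts, mask in attacked_samples:  # traverse all acquired images
        aligned = align_attacked_to_attacker(img, pts, attacker_pts, out_size)
        fused = select_and_fuse(fused, aligned.astype(np.float32), mask)
    return fused  # the finally fused image serves as the adversarial sample
```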
12. The method of scheme 11, wherein a mask matrix is used to select a region to be optimized from the fused image, and the adversarial sample is then generated based on the fused image using an optimization-based adversarial sample generation method.
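Scheme 12 restricts an optimization-based attack to the masked region and starts it from the fused image rather than from the clean image of the attacking object. The sketch below uses a PGD-style loop in PyTorch as one representative optimization-based generation method; the model interface, the cross-entropy objective toward the attacked object's class, and all hyperparameters are illustrative assumptions rather than the patent's prescription.

```python
import torch
import torch.nn.functional as F

def masked_refine(model, fused, target, mask, steps=50, alpha=2 / 255, eps=16 / 255):
    """PGD-style refinement of the fused image (C, H, W) in [0, 1], updating
    only pixels where mask (H, W) is nonzero; target is the attacked
    object's class index for an impersonation-style attack."""
    x0 = fused.detach()
    x = fused.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([target]))
        loss.backward()
        with torch.no_grad():
            x -= alpha * x.grad.sign() * mask  # step toward the target class
            x.copy_(torch.clamp(x, x0 - eps, x0 + eps))  # stay near the init
            x.clamp_(0.0, 1.0)  # keep a valid image
        x.grad.zero_()
    return x.detach()
```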
13. The method of any one of schemes 10 to 12, further comprising, after the adversarial sample is obtained:
fabricating the adversarial sample as a physical entity.
14. An adversarial sample generation apparatus, comprising:
an image acquisition module configured to acquire an image of an attacking object and an image of an attacked object, respectively;
a feature selection module configured to select a feature image to be fused according to the image of the attacked object;
an image fusion module configured to perform image fusion based on the image of the attacking object and the feature image;
an adversarial sample generation module configured to obtain an adversarial sample based on the fused image.
15. The apparatus of scheme 14, wherein the feature image is able to affect the output of a target model, the target model including an object detection model and an image recognition model.
16. The apparatus of scheme 15, wherein the feature selection module comprises:
an image alignment unit configured to align the image of the attacked object with the image of the attacking object;
a feature selection unit configured to select the feature image to be fused from the aligned image of the attacked object.
17. The apparatus of scheme 16, wherein the image alignment unit is further configured to perform a perspective transformation on the image of the attacked object so as to align it with the image of the attacking object.
18. The apparatus of scheme 17, wherein the image alignment unit comprises:
a key point acquisition subunit configured to acquire key points on the image of the attacking object and on the image of the attacked object, respectively;
an estimation subunit configured to estimate a perspective transformation matrix based on the key points;
an image alignment subunit configured to perform a perspective transformation on the image of the attacked object based on the perspective transformation matrix, so as to align it with the image of the attacking object.
19. The apparatus of scheme 18, wherein a similarity transformation matrix is used as the perspective transformation matrix, and the image of the attacked object is perspective-transformed using the similarity transformation matrix.
20. The apparatus of any one of schemes 16 to 19, wherein a mask matrix is used to select the feature image to be fused from the aligned image of the attacked object.
21. The apparatus of scheme 20, wherein the size of the mask matrix coincides with that of the aligned image of the attacked object.
22. The apparatus of scheme 21, wherein image fusion is performed on the image of the attacking object and the feature image using an image fusion method.
23. The apparatus of scheme 22, wherein the adversarial sample generation module comprises:
a fusion adversarial sample generation unit configured to take the fused image as the adversarial sample; and/or
a multi-image fusion adversarial sample generation unit configured to perform multiple iterations based on the fused image to generate the adversarial sample; and/or
an enhanced adversarial sample generation unit configured to generate the adversarial sample based on the fused image using an optimization-based adversarial sample generation method.
24. The apparatus of scheme 23, wherein the multi-image fusion adversarial sample generation unit is further configured to:
acquire at least one image of the attacked object;
repeatedly execute the following steps until all the acquired images of the attacked object have been traversed, taking the finally fused image as the adversarial sample:
align the image of the attacked object with the fused image;
select the feature image to be fused from the aligned image of the attacked object;
perform image fusion based on the fused image and the feature image.
25. The apparatus of scheme 23, wherein the enhanced adversarial sample generation unit is further configured to select a region to be optimized from the fused image using a mask matrix, and then generate the adversarial sample based on the fused image using an optimization-based adversarial sample generation method.
26. The apparatus of any one of schemes 23 to 25, further comprising:
a physical sample fabrication module configured to fabricate the adversarial sample as a physical entity.
27. A computer-readable storage medium storing a computer program for executing the adversarial sample generation method of any one of schemes 1 to 13.
28. A computing device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the adversarial sample generation method of any one of schemes 1 to 13.
29. An adversarial sample, wherein the adversarial sample is generated by the adversarial sample generation method of any one of schemes 1 to 13;
or by the adversarial sample generation apparatus of any one of schemes 14 to 26;
or by the computing device of scheme 28.

Claims (27)

1. An adversarial sample generation method, comprising:
acquiring an image of an attacking object and an image of an attacked object, respectively;
selecting a feature image to be fused according to the image of the attacked object, which comprises:
aligning the image of the attacked object with the image of the attacking object;
selecting the feature image to be fused from the aligned image of the attacked object;
performing image fusion based on the image of the attacking object and the feature image;
obtaining an adversarial sample based on the fused image.
2. The method of claim 1, wherein the feature image is able to affect the output of a target model, the target model including an object detection model and an image recognition model.
3. The method of claim 1, wherein the image of the attacked object is perspective-transformed so as to align it with the image of the attacking object.
4. The method of claim 3, wherein aligning the image of the attacked object with the image of the attacking object comprises:
acquiring key points on the image of the attacking object and on the image of the attacked object, respectively;
estimating a perspective transformation matrix based on the key points;
performing a perspective transformation on the image of the attacked object based on the perspective transformation matrix, so as to align it with the image of the attacking object.
5. The method of claim 4, wherein a similarity transformation matrix is used as the perspective transformation matrix, and the image of the attacked object is perspective-transformed using the similarity transformation matrix.
6. The method of any one of claims 1 to 5, wherein a mask matrix is used to select the feature image to be fused from the aligned image of the attacked object.
7. The method of claim 6, wherein the size of the mask matrix coincides with that of the aligned image of the attacked object.
8. The method of claim 7, wherein image fusion is performed on the image of the attacking object and the feature image using an image fusion method.
9. The method of claim 8, wherein obtaining an adversarial sample based on the fused image comprises:
taking the fused image as the adversarial sample; and/or
performing multiple iterations based on the fused image to generate the adversarial sample; and/or
generating the adversarial sample based on the fused image using an optimization-based adversarial sample generation method.
10. The method of claim 9, wherein performing multiple iterations based on the fused image to generate the adversarial sample comprises:
acquiring at least one image of the attacked object;
repeatedly executing the following steps until all the acquired images of the attacked object have been traversed:
aligning the image of the attacked object with the fused image;
selecting the feature image to be fused from the aligned image of the attacked object;
performing image fusion based on the fused image and the feature image.
11. The method of claim 10, wherein a mask matrix is used to select a region to be optimized from the fused image, and the adversarial sample is then generated based on the fused image using an optimization-based adversarial sample generation method.
12. The method of any one of claims 9 to 11, further comprising, after the adversarial sample is obtained:
fabricating the adversarial sample as a physical entity.
13. An adversarial sample generation apparatus, comprising:
an image acquisition module configured to acquire an image of an attacking object and an image of an attacked object, respectively;
a feature selection module configured to select a feature image to be fused according to the image of the attacked object, the feature selection module comprising:
an image alignment unit configured to align the image of the attacked object with the image of the attacking object;
a feature selection unit configured to select the feature image to be fused from the aligned image of the attacked object;
an image fusion module configured to perform image fusion based on the image of the attacking object and the feature image;
an adversarial sample generation module configured to obtain an adversarial sample based on the fused image.
14. The apparatus of claim 13, wherein the feature image is able to affect the output of a target model, the target model including an object detection model and an image recognition model.
15. The apparatus of claim 13, wherein the image alignment unit is further configured to perform a perspective transformation on the image of the attacked object so as to align it with the image of the attacking object.
16. The apparatus of claim 15, wherein the image alignment unit comprises:
a key point acquisition subunit configured to acquire key points on the image of the attacking object and on the image of the attacked object, respectively;
an estimation subunit configured to estimate a perspective transformation matrix based on the key points;
an image alignment subunit configured to perform a perspective transformation on the image of the attacked object based on the perspective transformation matrix, so as to align it with the image of the attacking object.
17. The apparatus of claim 16, wherein a similarity transformation matrix is used as the perspective transformation matrix, and the image of the attacked object is perspective-transformed using the similarity transformation matrix.
18. The apparatus of any one of claims 13 to 17, wherein a mask matrix is used to select the feature image to be fused from the aligned image of the attacked object.
19. The apparatus of claim 18, wherein the size of the mask matrix coincides with that of the aligned image of the attacked object.
20. The apparatus of claim 19, wherein image fusion is performed on the image of the attacking object and the feature image using an image fusion method.
21. The apparatus of claim 20, wherein the adversarial sample generation module comprises:
a fusion adversarial sample generation unit configured to take the fused image as the adversarial sample; and/or
a multi-image fusion adversarial sample generation unit configured to perform multiple iterations based on the fused image to generate the adversarial sample; and/or
an enhanced adversarial sample generation unit configured to generate the adversarial sample based on the fused image using an optimization-based adversarial sample generation method.
22. The apparatus of claim 21, wherein the multi-image fusion adversarial sample generation unit is further configured to:
acquire at least one image of the attacked object;
repeatedly execute the following steps until all the acquired images of the attacked object have been traversed, taking the finally fused image as the adversarial sample:
align the image of the attacked object with the fused image;
select the feature image to be fused from the aligned image of the attacked object;
perform image fusion based on the fused image and the feature image.
23. The apparatus of claim 21, wherein the enhanced adversarial sample generation unit is further configured to select a region to be optimized from the fused image using a mask matrix, and then generate the adversarial sample based on the fused image using an optimization-based adversarial sample generation method.
24. The apparatus of any one of claims 21 to 23, further comprising:
a physical sample fabrication module configured to fabricate the adversarial sample as a physical entity.
25. A computer-readable storage medium storing a computer program for executing the adversarial sample generation method of any one of claims 1 to 12.
26. A computing device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the adversarial sample generation method of any one of claims 1 to 12.
27. An adversarial sample, wherein the adversarial sample is generated by the adversarial sample generation method of any one of claims 1 to 12;
or by the adversarial sample generation apparatus of any one of claims 13 to 24;
or by the computing device of claim 26.
CN201910988796.9A 2019-10-17 2019-10-17 Countermeasure sample, generation method, medium, device and computing equipment thereof Active CN110705652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910988796.9A CN110705652B (en) 2019-10-17 2019-10-17 Countermeasure sample, generation method, medium, device and computing equipment thereof

Publications (2)

Publication Number Publication Date
CN110705652A CN110705652A (en) 2020-01-17
CN110705652B (en) 2020-10-23

Family

ID=69200456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910988796.9A Active CN110705652B (en) 2019-10-17 2019-10-17 Countermeasure sample, generation method, medium, device and computing equipment thereof

Country Status (1)

Country Link
CN (1) CN110705652B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113450244A (en) * 2020-03-26 2021-09-28 Alibaba Group Holding Ltd. Data processing method and apparatus
CN111783621B (en) * 2020-06-29 2024-01-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Method, apparatus, device and storage medium for facial expression recognition and model training
CN112035834A (en) * 2020-08-28 2020-12-04 Beijing Infervision Technology Co., Ltd. Adversarial training method and apparatus, and application method and apparatus of a neural network model
CN112418332B (en) * 2020-11-26 2022-09-23 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, and image generation method and apparatus
CN113111776B (en) * 2021-04-12 2024-04-16 Jingdong Technology Holding Co., Ltd. Method, apparatus, device and storage medium for generating adversarial samples
CN113610904B (en) * 2021-07-19 2023-10-20 Guangzhou University 3D local point cloud adversarial sample generation method, system, computer and medium
CN114333029A (en) * 2021-12-31 2022-04-12 Beijing Real AI Technology Co., Ltd. Template image generation method, apparatus and storage medium
CN114663946B (en) * 2022-03-21 2023-04-07 China Telecom Corp., Ltd. Adversarial sample generation method, apparatus, device and medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296692A (en) * 2016-08-11 2017-01-04 Shenzhen Institute of Future Media Technology Image saliency detection method based on adversarial networks
CN108446700A (en) * 2018-03-07 2018-08-24 Zhejiang University of Technology License plate attack generation method based on adversarial attacks
CN109241830A (en) * 2018-07-26 2019-01-18 Hefei University of Technology Classroom attention abnormality detection method based on illumination generative adversarial networks
CN109272031A (en) * 2018-09-05 2019-01-25 Kuandeng (Beijing) Technology Co., Ltd. Training sample generation method and apparatus, device, and medium
CN109523493A (en) * 2017-09-18 2019-03-26 Hangzhou Hikvision Digital Technology Co., Ltd. Image generation method, apparatus and electronic device
CN109948658A (en) * 2019-02-25 2019-06-28 Zhejiang University of Technology Adversarial attack defense method based on a feature-map attention mechanism, and application thereof
CN110020593A (en) * 2019-02-03 2019-07-16 Tsinghua University Information processing method and apparatus, medium and computing device
WO2019143384A1 (en) * 2018-01-18 2019-07-25 Google Llc Systems and methods for improved adversarial training of machine-learned models
CN110210573A (en) * 2019-06-11 2019-09-06 Tencent Technology (Shenzhen) Co., Ltd. Adversarial image generation method, apparatus, terminal and storage medium
CN110210617A (en) * 2019-05-15 2019-09-06 Beijing University of Posts and Telecommunications Adversarial sample generation method and apparatus based on feature enhancement
CN110245598A (en) * 2019-06-06 2019-09-17 Beijing Real AI Technology Co., Ltd. Adversarial sample generation method, apparatus, medium and computing device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215007B (en) * 2018-09-21 2022-04-12 Vivo Mobile Communication Co., Ltd. Image generation method and terminal device
CN109949386A (en) * 2019-03-07 2019-06-28 Beijing Megvii Technology Co., Ltd. Texture image synthesis method and apparatus
CN110321790B (en) * 2019-05-21 2023-05-12 Huawei Technologies Co., Ltd. Adversarial sample detection method and electronic device


Similar Documents

Publication Publication Date Title
CN110705652B (en) Countermeasure sample, generation method, medium, device and computing equipment thereof
CN110245598B (en) Adversarial sample generation method, apparatus, medium, and computing device
CN111738374B (en) Multi-sample adversarial perturbation generation method and apparatus, storage medium and computing device
CN108229488B (en) Method and apparatus for detecting object key points, and electronic device
CN111723865B (en) Method, apparatus and medium for evaluating performance of image recognition model and attack method
CN111914946B (en) Adversarial sample generation method, system and apparatus for outlier removal methods
CN107545241A (en) Neural network model training and liveness detection method, apparatus and storage medium
JP6597914B2 (en) Image processing apparatus, image processing method, and program
CN105678778B (en) Image matching method and apparatus
CN107609463A (en) Liveness detection method, apparatus, device and storage medium
CN109800682A (en) Driver attribute recognition method and related product
WO2021042544A1 (en) Facial verification method and apparatus based on mesh removal model, and computer device and storage medium
CN113111963A (en) Method for re-identifying pedestrians under black-box attack
CN111814916A (en) Multi-sample adversarial perturbation generation method and apparatus, storage medium and computing device
CN113240718A (en) Multi-target identification and tracking method, system, medium and computing device
CN113111776A (en) Method, apparatus and device for generating adversarial samples, and storage medium
CN113643365A (en) Camera pose estimation method, apparatus, device and readable storage medium
CN114419346B (en) Model robustness detection method, apparatus, device and medium
CN114463798A (en) Training method, apparatus and device for a face recognition model, and storage medium
CN110020593B (en) Information processing method and apparatus, medium and computing device
CN113935034B (en) Malicious code family classification method, apparatus and storage medium based on graph neural networks
Yu et al. Defending person detection against adversarial patch attack by using universal defensive frame
CN112488137A (en) Sample acquisition method and apparatus, electronic device and machine-readable storage medium
CN113075212B (en) Vehicle verification method and apparatus
CN110502961A (en) Facial image detection method and apparatus

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
EE01: Entry into force of recordation of patent licensing contract

Application publication date: 20200117
Assignee: Beijing Intellectual Property Management Co., Ltd.
Assignor: Beijing Real AI Technology Co., Ltd.
Contract record no.: X2023110000073
Denomination of invention: Adversarial samples and their generation methods, media, devices, and computing equipment
Granted publication date: 20201023
License type: Common License
Record date: 20230531