CN114742170B - Adversarial sample generation method, model training method, image recognition method and device - Google Patents

Adversarial sample generation method, model training method, image recognition method and device

Info

Publication number
CN114742170B
CN114742170B CN202210426651.1A
Authority
CN
China
Prior art keywords
component
attention
sample image
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210426651.1A
Other languages
Chinese (zh)
Other versions
CN114742170A (en)
Inventor
刘彦宏
吴海英
王洪斌
蒋宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Consumer Finance Co Ltd
Original Assignee
Mashang Consumer Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Consumer Finance Co Ltd filed Critical Mashang Consumer Finance Co Ltd
Priority to CN202210426651.1A priority Critical patent/CN114742170B/en
Publication of CN114742170A publication Critical patent/CN114742170A/en
Application granted granted Critical
Publication of CN114742170B publication Critical patent/CN114742170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an adversarial sample generation method, a model training method, an image recognition method and a device. An original sample image is input into a trained attention model to obtain first attention maps and component coefficients respectively corresponding to the first components contained in a first target object in the original sample image; an adversarial patch region in the original sample image is determined based on the first attention maps and the component coefficients; and perturbation processing is performed on the original sample image based on the adversarial patch region to obtain an adversarial sample image. The located adversarial patch region is more targeted and more accurate, so the attack effect of the adversarial sample image generated from it is improved, the target model obtained by adversarial training on the adversarial sample image is more secure, and potential safety hazards of the target model in the application stage are avoided.

Description

Adversarial sample generation method, model training method, image recognition method and device
Technical Field
The present application relates to the technical field of artificial intelligence, and in particular to an adversarial sample generation method, a model training method, an image recognition method and an image recognition device.
Background
At present, with the rapid development of artificial intelligence technology, neural network models have emerged. Model parameters in a neural network model are iteratively trained on a large amount of historical sample data, so that the model can learn rules from that data, and the trained neural network model is then used to classify and identify target images.
However, potential safety hazards may exist in the stage of classifying and identifying target images with a neural network model. To improve the security of the model in the application stage, the neural network model is generally subjected to adversarial training using adversarial samples. The conventional way of determining the adversarial patch region, however, is manual specification by an expert, or simply extracting a key region of the image according to experience, which results in low accuracy of the determined adversarial patch region and a poor attack effect of the adversarial sample.
Disclosure of Invention
The object of the embodiments of the present application is to provide an adversarial sample generation method, a model training method, an image recognition method, and corresponding devices, which can select a more targeted and more accurate adversarial patch region. The attack effect of the adversarial sample image generated from that region is thereby improved, the target model obtained by adversarial training on the adversarial sample image is more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better withstand adversarial attacks by malicious actors.
To achieve the above object, the embodiments of the present application are implemented as follows:
In a first aspect, an embodiment of the present application provides an adversarial sample generation method, the method including:
acquiring an original sample image to be perturbed; the original sample image includes an image region of a first target object, the first target object including at least one first component;
inputting the original sample image into a trained attention model to obtain a first attention map and a component coefficient respectively corresponding to each first component; the component coefficient is used for characterizing the importance of the image sub-region corresponding to the first component to the first target object being identified as the real label;
determining an adversarial patch region in the original sample image based on the first attention maps and the component coefficients;
and performing perturbation processing on the corresponding image region in the original sample image based on the adversarial patch region to obtain an adversarial sample image.
In a second aspect, an embodiment of the present application provides a method for training the attention model used in the method of the first aspect, the training method including:
acquiring a plurality of first sample images; each first sample image includes an image region of a second target object, the second target object including at least one second component;
inputting each first sample image into an attention model to be trained to obtain a prediction probability set corresponding to each first sample image; the prediction probability set includes the prediction probability of the second target object under each classification category, the prediction probability being determined based on the component coefficients of the second components, and each component coefficient being determined based on the second attention map of the corresponding second component;
and iteratively training the attention model to be trained based on the prediction probability sets and the real labels of the first sample images to obtain a trained attention model.
In a third aspect, an embodiment of the present application provides an image recognition model training method, the method including:
acquiring an adversarial sample image; the adversarial sample image is obtained by the method of the first aspect;
performing adversarial training on an image recognition model to be adversarially trained based on the adversarial sample image to obtain a trained image recognition model; the image recognition model to be adversarially trained is an image recognition model obtained by model training based on second sample images.
In a fourth aspect, an embodiment of the present application provides an image recognition method, the method including:
acquiring a target image to be recognized;
inputting the target image into a trained image recognition model and recognizing the target image to obtain a recognition result of the target image; the trained image recognition model is trained using the method of the third aspect.
In a fifth aspect, an embodiment of the present application provides an adversarial sample generation device, the device including:
a first acquisition module configured to acquire an original sample image to be perturbed; the original sample image includes an image region of a first target object, the first target object including at least one first component;
a first output module configured to input the original sample image into a trained attention model to obtain first attention maps and component coefficients respectively corresponding to the first components; the component coefficients are used for characterizing the importance of the image sub-regions corresponding to the first components to the first target object being identified as the real label;
a patch region determination module configured to determine an adversarial patch region in the original sample image based on the first attention maps and the component coefficients;
and an adversarial sample generation module configured to perform perturbation processing on the corresponding image region in the original sample image based on the adversarial patch region to obtain an adversarial sample image.
In a sixth aspect, an embodiment of the present application provides an attention model training device, configured to train the attention model used in the method of the first aspect, the device including:
a second acquisition module configured to acquire a plurality of first sample images; each first sample image includes an image region of a second target object, the second target object including at least one second component;
a second output module configured to input each first sample image into an attention model to be trained to obtain a prediction probability set corresponding to each first sample image; the prediction probability set includes the prediction probability of the second target object under each classification category, the prediction probability being determined based on the component coefficients of the second components, and each component coefficient being determined based on the second attention map of the corresponding second component;
and a first model training module configured to iteratively train the attention model to be trained based on the prediction probability sets and the real labels of the first sample images to obtain a trained attention model.
In a seventh aspect, an embodiment of the present application provides an image recognition model training device, the device including:
a third acquisition module configured to acquire an adversarial sample image; the adversarial sample image is obtained by the method of the first aspect;
and a second model training module configured to perform adversarial training on an image recognition model to be adversarially trained based on the adversarial sample image to obtain a trained image recognition model; the image recognition model to be adversarially trained is an image recognition model obtained by model training based on second sample images.
In an eighth aspect, an embodiment of the present application provides an image recognition device, including:
a fourth acquisition module configured to acquire a target image to be recognized;
and an image recognition module configured to input the target image into a trained image recognition model and recognize the target image to obtain a recognition result of the target image; the trained image recognition model is trained using the method of the third aspect.
In a ninth aspect, an embodiment of the present application provides a computer device, including:
a processor; and a memory arranged to store computer-executable instructions configured to be executed by the processor, the executable instructions including steps for performing the method of the first, second, third or fourth aspect.
In a tenth aspect, an embodiment of the present application provides a storage medium storing computer-executable instructions for causing a computer to perform the steps of the method of the first, second, third or fourth aspect.
It can be seen that, in the embodiments of the present application, an original sample image is input into a trained attention model to obtain the first attention maps and component coefficients respectively corresponding to the first components contained in the first target object in the original sample image; an adversarial patch region in the original sample image is determined based on the first attention maps and the component coefficients; and perturbation processing is performed on the original sample image based on the adversarial patch region to obtain an adversarial sample image. By taking each first component contained in the first target object as the minimum unit of analysis and making full use of the intermediate data output by the trained attention model (namely, the first attention maps and the component coefficients of the first components), the image sub-regions corresponding to the more important first components are selected in the original sample image as the adversarial patch region. The adversarial patch region can thus be located at a finer granularity, and the located region is more targeted and more accurate, so that the attack effect of the adversarial sample image generated from it can be improved; the target model obtained by adversarial training on the adversarial sample image is more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better withstand adversarial attacks by malicious actors. Meanwhile, each sub-patch region is located within the image region of a single component, i.e., each adversarial patch is formed on a single component, so that the adversarial patches in the adversarial sample image are better concealed, which further improves the attack effect of the adversarial sample image.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in the present application, and that other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a first schematic flowchart of an adversarial sample generation method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of the implementation principle of determining the first attention maps and the first component coefficients in the adversarial sample generation method provided in an embodiment of the present application;
Fig. 3 is a second schematic flowchart of an adversarial sample generation method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the implementation principle of generating an adversarial sample image in the adversarial sample generation method provided in an embodiment of the present application;
Fig. 5 is a schematic flowchart of an attention model training method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the implementation principle of the attention model training process according to an embodiment of the present application;
Fig. 7 is a first schematic flowchart of an image recognition model training method according to an embodiment of the present application;
Fig. 8 is a second schematic flowchart of an image recognition model training method according to an embodiment of the present application;
Fig. 9 is a schematic flowchart of an image recognition method according to an embodiment of the present application;
Fig. 10 is a schematic diagram of the specific implementation principle of the image recognition process provided in an embodiment of the present application;
Fig. 11 is a schematic block diagram of an adversarial sample generation device according to an embodiment of the present application;
Fig. 12 is a schematic block diagram of an attention model training device according to an embodiment of the present application;
Fig. 13 is a schematic block diagram of an image recognition model training device according to an embodiment of the present application;
Fig. 14 is a schematic block diagram of an image recognition device according to an embodiment of the present application;
Fig. 15 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions in one or more embodiments of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on one or more of the embodiments of the present application without inventive effort shall fall within the protection scope of the present application.
It should be noted that, in the absence of conflict, one or more embodiments of the present application and features of those embodiments may be combined with each other. The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
One or more embodiments of the present application provide an adversarial sample generation method, a model training method, an image recognition method, and corresponding devices. If a trained neural network model (such as a key region extraction model) were used directly to locate at least one key image region in the original sample image to be perturbed, and that key image region were then used as the adversarial patch region, a given key image region might span several components, so the positioning accuracy of the adversarial patch region would be low. The present technical solution therefore takes each first component contained in the first target object in the original sample image as the minimum unit of analysis and makes full use of the intermediate data output by the trained attention model (namely, the first attention maps and the component coefficients of the first components): the image sub-regions corresponding to the more important first components are selected in the original sample image as the adversarial patch region. The adversarial patch region can thus be located at a finer granularity, and the located region is more targeted and more accurate, so the attack effect of the adversarial sample image generated from it can be improved; the target model obtained by adversarial training on the adversarial sample image is more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better withstand adversarial attacks by malicious actors. Meanwhile, each sub-patch region in the adversarial patch region is located within the image sub-region of a single component, i.e., each adversarial patch is formed on a single component, so that the adversarial patches in the adversarial sample image are better concealed, which further improves the attack effect of the adversarial sample image.
Specifically, Fig. 1 is a first schematic flowchart of an adversarial sample generation method provided by one or more embodiments of the present application. The method in Fig. 1 can be executed by a server or a specific terminal device. As shown in Fig. 1, the method includes at least the following steps:
s102, acquiring an original sample image to be disturbed; wherein the original sample image comprises an image area of a first target object, the first target object comprising at least one first component;
the first target object may include K first components, where each first component is a component of the first target object, that is, the number of components of the first target object is K, a certain component in the first target object is a first component K, where K is an integer greater than or equal to 1, and k= [1, K ], specifically, for different types of target objects, it may be preset which components the target object includes, for example, the first target object is a table, and the corresponding K first components may include: desktop, table leg, etc., and as another example, the first target object is a face, and the corresponding K first components may include: five sense organs, cheeks, etc., and for another example, the first target object is a cat, the corresponding K first components may include: head, cat body, cat leg, etc., and, still if the first target object is a portrait, the corresponding K first components may include: glasses, jackets, hats, etc. to be worn.
S104, inputting the original sample image into a trained attention model to obtain a first attention map and a component coefficient respectively corresponding to each first component; the component coefficient is used for characterizing the importance of the image sub-region corresponding to the first component to the first target object being identified as the real label.
the component coefficient corresponding to the first component k can represent the contribution degree of the first component k to the identification of the class of the first target object as the correct class, namely the importance degree of the image sub-region of the first component k for prompting the image classification model to identify the first target object as the correct class, namely whether the image sub-region where the first component k is located in the original sample image has a key influence on the image identification result or not; that is, the greater the influence of the image sub-region where the first component K is located on the image recognition result, that is, the higher the importance degree of the image sub-region of the first component K on the first target object being recognized as a true label, that is, the higher the contribution degree of the image sub-region to the first target object being recognized as a correct class, the greater the component coefficient corresponding to the first component K, so that, next, the sub-patch region corresponding to the first component having the higher importance degree can be more accurately located in the original sample image based on the magnitude relation of the component coefficients of the K first components, and the anti-patch region is further determined based on at least one sub-patch region.
Specifically, the trained attention model may include a channel feature map extraction layer, an attention map generation layer, and a component coefficient determination layer. The input of the channel feature map extraction layer is the original sample image, and its output is the first feature maps under a plurality of channels; the input of the attention map generation layer is the first feature maps under the plurality of channels, and its output is the first attention maps A_k respectively corresponding to the K first components; the input of the component coefficient determination layer is the first attention map A_k corresponding to the first component k, and its output is the component coefficient p_k corresponding to the first component k.
Taking the case where the channel feature map extraction layer includes c channels as an example, a given channel m outputs a first feature map F_{h×w×m}, where h denotes the number of pixel points along the abscissa axis and w the number along the ordinate axis, i.e., the first feature map contains h×w pixel points (i, j), with i denoting the abscissa and j the ordinate of pixel point (i, j), i ∈ [1, h], j ∈ [1, w]; c denotes the number of channels in the convolutional network layer, and m denotes the channel index, m ∈ [1, c].
Correspondingly, the first attention map A_k is obtained by weighting the c first feature maps corresponding to the c channels, where the value of the weight parameter corresponding to each first feature map is learned from the first sample images during the attention model training stage, and that value is used to characterize the degree of correlation between the first feature map F_{h×w×m} output by channel m and the first component k. Therefore, when generating the corresponding first attention map A_k for the first component k, the weight parameter values of the first feature maps corresponding to the respective channels differ; that is, the pixel value A_k(i, j) of a pixel point (i, j) in the image sub-region where component k is located is mainly determined by the pixel values of the high-response regions in the effective feature maps F_{h×w×m}, where, for the first component k, an effective feature map is a first feature map output by a channel m whose weight parameter is greater than a preset threshold.
Correspondingly, the component coefficient p_k corresponding to the first component k is obtained from the first attention map A_k. For the first component k, if the weight parameter values corresponding to the c channels are all relatively large, then the pixel values A_k(i, j) of the pixel points (i, j) in the image sub-region where the first component k is located are relatively large, i.e., the c channels all regard the first component k as relatively important; the importance of the image sub-region where the first component k is located to the first target object being identified as the real label is then also relatively high, and therefore the component coefficient p_k corresponding to the first component k will be relatively large.
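The mapping from a first attention map A_k to a scalar component coefficient p_k is described above only qualitatively (larger responses in A_k yield a larger p_k). The sketch below is one plausible concretization, assuming global average pooling over each map followed by a softmax across the K components; both choices are assumptions for illustration, not fixed by this description:

```python
import numpy as np

def component_coefficients(attention_maps):
    """Derive component coefficients p_k from the first attention maps A_k.

    attention_maps: array of shape (K, h, w), one attention map per component.
    Pooling + softmax is an assumed concretization; the description only
    requires that p_k grows with the response of A_k.
    """
    pooled = attention_maps.mean(axis=(1, 2))   # global average pool per component
    exp = np.exp(pooled - pooled.max())         # numerically stable softmax
    return exp / exp.sum()                      # coefficients sum to 1

# Example: component 0 has a high-response sub-region, so p_0 dominates.
A = np.zeros((3, 8, 8))
A[0, 2:6, 2:6] = 5.0
p = component_coefficients(A)
```

The softmax makes the coefficients directly comparable across components, which is all the subsequent ranking step needs.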
S106, determining an adversarial patch region in the original sample image based on the first attention maps and the component coefficients of the first components.
Specifically, a simulated perturbation attack is performed based on the component coefficients p_k respectively corresponding to the K first components and the corresponding first attention maps. According to the result of the simulated perturbation attack, a certain number of target sub-patch regions are selected from the image sub-regions respectively corresponding to the K first components, such that a simulated attack sample image obtained by perturbing these target sub-patch regions can cause the model to misrecognize; the finally selected adversarial patch region can then be determined based on the union of this certain number of target sub-patch regions.
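The selection just described can be sketched as a greedy loop: try the components' image sub-regions in descending order of component coefficient, and stop once the simulated perturbation attack causes a misrecognition. Here `causes_misclassification` is an assumed interface standing in for the simulated attack oracle, which in practice depends on the attacked model:

```python
import numpy as np

def select_patch_region(coeffs, component_masks, causes_misclassification):
    """Pick target sub-patch regions in descending order of component coefficient.

    coeffs: (K,) component coefficients p_k.
    component_masks: (K, h, w) boolean masks of each component's image sub-region.
    causes_misclassification: callable(mask) -> bool, the simulated perturbation
    attack (assumed interface; the description leaves the oracle abstract).
    Returns the union mask of the selected sub-patch regions.
    """
    union = np.zeros(component_masks.shape[1:], dtype=bool)
    for k in np.argsort(coeffs)[::-1]:        # most important component first
        union |= component_masks[k]
        if causes_misclassification(union):   # stop once the attack succeeds
            break
    return union

masks = np.zeros((2, 4, 4), dtype=bool)
masks[0, :2, :2] = True                       # component 0: top-left block
masks[1, 2:, 2:] = True                       # component 1: bottom-right block
region = select_patch_region(np.array([0.3, 0.7]),
                             masks,
                             lambda m: m.sum() >= 4)  # toy oracle: 4 pixels suffice
```

Because each mask covers a single component, the returned union is automatically a set of per-component sub-patch regions, matching the concealment argument made later in the text.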
S108, performing perturbation processing on the corresponding image region in the original sample image based on the adversarial patch region to obtain an adversarial sample image.
Specifically, after the adversarial patch region corresponding to the original sample image is determined, perturbation processing is performed, using a preset adversarial perturbation method, on the image region corresponding to the adversarial patch region in the original sample image to obtain the adversarial sample image; the preset adversarial perturbation method may be a projected gradient descent (PGD) iterative perturbation attack, a fast gradient sign method (FGSM) perturbation attack, or another perturbation attack method.
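As an illustration of this perturbation step, the sketch below applies a single FGSM-style update confined to the adversarial patch region via a boolean mask; the loss gradient `grad` is taken as given, since computing it requires the attacked model, which is outside this sketch. Iterating this step with projection back into an ε-ball would give the PGD variant mentioned above:

```python
import numpy as np

def fgsm_patch_perturb(image, grad, patch_mask, epsilon=8 / 255):
    """One FGSM-style step confined to the adversarial patch region.

    image: float array in [0, 1]; grad: loss gradient w.r.t. the image
    (obtained from the attacked model, not computed here);
    patch_mask: boolean array, True inside the adversarial patch region.
    """
    perturbation = epsilon * np.sign(grad) * patch_mask  # zero outside the patch
    return np.clip(image + perturbation, 0.0, 1.0)       # stay a valid image

img = np.full((4, 4), 0.5)
g = np.ones((4, 4))                    # stand-in gradient
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True                      # patch region: a single pixel
adv = fgsm_patch_perturb(img, g, mask)
```

Multiplying by the mask is what distinguishes this patch attack from a whole-image FGSM attack: pixels outside the located region are left untouched.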
In the embodiments of the present application, an original sample image is input into a trained attention model to obtain the first attention maps and component coefficients respectively corresponding to the first components contained in the first target object in the original sample image; an adversarial patch region in the original sample image is determined based on the first attention maps and the component coefficients; and perturbation processing is performed on the original sample image based on the adversarial patch region to obtain an adversarial sample image. By taking each first component contained in the first target object as the minimum unit of analysis and making full use of the intermediate data output by the trained attention model (namely, the first attention maps and the component coefficients of the first components), the image sub-regions corresponding to the more important first components are selected in the original sample image as the adversarial patch region. The adversarial patch region can thus be located at a finer granularity, and the located region is more targeted and more accurate, so that the attack effect of the adversarial sample image generated from it can be improved; the target model obtained by adversarial training on the adversarial sample image is more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better withstand adversarial attacks by malicious actors. Meanwhile, each sub-patch region is located within the image region of a single component, i.e., each adversarial patch is formed on a single component, so that the adversarial patches in the adversarial sample image are better concealed, which further improves the attack effect of the adversarial sample image.
Where the attention model includes a channel feature map extraction layer, an attention map generation layer, and a component coefficient determination layer, step S104, inputting the original sample image into the trained attention model to obtain the first attention map and the component coefficient respectively corresponding to each first component, specifically includes:
(1) The channel feature map extraction layer in the attention model performs feature extraction processing on the original sample image to obtain c first feature maps corresponding to c channels; each channel corresponds to one first feature map, c denotes the number of channels in the channel feature map extraction layer, and c is an integer greater than 1.

Specifically, the channel feature map extraction layer includes a convolutional network layer having c channels, each channel being used to output one first feature map F_{h×w×m}, m ∈ [1, c]. The function of the channel feature map extraction layer is to perform feature extraction on the original sample image to obtain the c first feature maps F_{h×w} = {F_{h×w×1}, F_{h×w×2}, ..., F_{h×w×m}, ..., F_{h×w×c}}; that is, the channel feature map extraction layer generates c channel feature maps for each original sample image.
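Under the assumption that the extraction layer is an ordinary convolutional layer with c output channels, it can be sketched as follows; the random kernels here are placeholders for trained parameters, and the valid-padding cross-correlation is an illustrative choice:

```python
import numpy as np

def extract_channel_feature_maps(image, kernels):
    """Cross-correlate the image with c kernels, yielding c feature maps
    (valid padding, single input channel).

    image:   (H, W) input image
    kernels: (c, kh, kw) stack of convolution kernels
    """
    kh, kw = kernels.shape[1:]
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    maps = np.empty((len(kernels), h, w))
    for m, k in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                maps[m, i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return maps

rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernels = rng.random((3, 3, 3))  # c = 3 channels of 3x3 kernels
feature_maps = extract_channel_feature_maps(image, kernels)
```

Each of the c output slices plays the role of one first feature map F_{h×w×m}.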
(2) The attention map generation layer in the attention model performs weighting processing on the c first feature maps to obtain K first attention maps corresponding to K first components; each first component corresponds to one first attention map, K denotes the number of components in the first target object, and K is an integer greater than 1.
The model parameters corresponding to the attention map generation layer form a weight matrix W_{K×c} containing K×c weight parameters:

W_{K×c} = [W_{km}], k ∈ [1, K], m ∈ [1, c]

The values of the weight parameters in the weight matrix W_{K×c} are obtained by iteratively training the attention model based on the first sample images in the training stage of the attention model. W_{km} is the correlation weight parameter of component k with respect to channel m; that is, W_{km} characterizes the degree of correlation between the first feature map F_{h×w×m} output by channel m and the first component k.

Specifically, each row of the weight matrix W_{K×c} characterizes the correlation weight parameters of one component k over the c channels; that is, component k corresponds to the weight row vector W_k = [W_{k1}, W_{k2}, ..., W_{km}, ..., W_{kc}]. The first attention map A_k can therefore be obtained by a weighted summation, based on the weight row vector W_k corresponding to the first component k, over the c first feature maps corresponding to the c channels. In a specific implementation, the first attention map is

A_k(i, j) = sigmoid( Σ_{m=1}^{c} W(k, m) × F(i, j, m) )

where W(k, m) × F(:, :, m) denotes applying the same weighting to each pixel (i, j) of the first feature map F_{h×w×m} of channel m, and the sigmoid function converts each pixel value into the range (0, 1), so that the pixel values of the h×w pixels contained in the first attention map A_k all lie in (0, 1). That is, the pixel value of each pixel (i, j) in the first attention map A_k is obtained by a weighted summation over the pixel values at (i, j) of the c first feature maps F_{h×w×m} corresponding to the c channels, followed by converting the weighted sum at each pixel (i, j) into the range (0, 1) using the sigmoid function.
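The formula above can be sketched directly in NumPy; the function computes one first attention map A_k from the c feature maps and the weight row vector W_k (all names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_map(feature_maps, weight_row):
    """A_k(i,j) = sigmoid(sum_m W_km * F(i,j,m)) for one component k.

    feature_maps: (c, h, w) -- the c first feature maps
    weight_row:   (c,)      -- the weight row vector W_k
    """
    weighted_sum = np.tensordot(weight_row, feature_maps, axes=1)  # (h, w)
    return sigmoid(weighted_sum)

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 5, 5))   # c = 3 channels, 5x5 maps
W_k = np.array([0.5, -0.2, 1.0])
A_k = attention_map(F, W_k)
```

The sigmoid guarantees every pixel of A_k lies strictly in (0, 1), as the text requires.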
Since W_{km} characterizes the degree of correlation between the feature map F_{h×w×m} of channel m and component k, the higher the correlation between F_{h×w×m} and component k (i.e., the more important channel m considers component k to be), the larger the pixel values in F_{h×w×m} within the image sub-region where component k is located, indicating that those pixels are of higher importance in F_{h×w×m}. Therefore, during the training stage of the attention model, W_{km} automatically learns the importance of the pixels in the feature maps, and a higher importance yields a larger trained W_{km}. In the use stage of the attention model, that is, when the trained attention model generates the first attention map A_k of the first component k: if the weight parameters of the first component k under X of the channels are all greater than a first preset threshold, the feature maps output by those X channels are highly correlated with the first component k, while the feature maps output by the remaining (c − X) channels are weakly correlated with it. In this case, the pixel value A_k(i, j) of each pixel in the image sub-region where component k is located is determined mainly by the feature maps output by the X channels. Consequently, the larger a pixel value in the attention map A_k, the more relevant that pixel is to the first component k; the pixels in A_k whose values exceed a second preset threshold are the highlighted pixels, and the image region formed by the highlighted pixels is the high-response region (i.e., the high-response region of the first attention map A_k is concentrated in the image sub-region where the first component k is located).

That is, the feature map F_{h×w×m} of each channel m has a specific visual semantic. The X effective feature maps corresponding to the same component k (i.e., the X feature maps output by the X channels highly correlated with component k) should contain high-response regions with similar coordinates, and the weight parameters W_{km} of component k under those X channels will also be relatively large. In other words, if the feature map F_{h×w×m} of channel m is highly correlated with component k, the pixels of the image sub-region where component k is located will have high importance in F_{h×w×m}, so that a high-response region forms there. If the X feature maps corresponding to X channels contain high-response regions with similar coordinates, those high-response regions correspond to the same component k. In generating the attention map of component k, the weighted summation of pixel values over the feature maps causes the pixel value A_k(i, j) in the image sub-region where component k is located to be determined mainly by the similarly located high-response regions of the X feature maps; that is, the X feature maps whose high-response regions have similar coordinates are regarded as one cluster, and the first attention map A_k corresponding to the first component k is thus obtained.
It can be understood that, for the first component k, the effective feature maps F_{h×w×m} corresponding to the first component k are the first feature maps output by the channels m whose weight parameters in the weight row vector W_k = [W_{k1}, W_{k2}, ..., W_{km}, ..., W_{kc}] are greater than a certain preset threshold.
(3) The component coefficient determination layer in the attention model performs average pooling processing on the K first attention maps respectively, to obtain K component coefficients corresponding to the K first components, where each first component corresponds to one component coefficient.
Specifically, the component coefficient p_k is used to characterize the importance of the image sub-region where the first component k is located to identifying the first target object as its true label, and the first attention map A_k is obtained by a weighted summation, based on the weight row vector W_k corresponding to the first component k, over the c first feature maps corresponding to the c channels. The component coefficient p_k corresponding to the first component k can therefore be obtained by globally average-pooling the product of the pixel value of each pixel (i, j) in the first attention map A_k and the pixel value of the corresponding pixel (i, j) in the first feature map under channel m. In a specific implementation, the component coefficient corresponding to the first component k is

p_k = (1 / (h × w × c)) Σ_{m=1}^{c} Σ_{i=1}^{h} Σ_{j=1}^{w} A_k(i, j) × F(i, j, m)

where A_k(i, j) denotes the pixel value of pixel (i, j) in the first attention map A_k, and F(i, j, m) denotes the pixel value of pixel (i, j) in the first feature map F_{h×w×m} output by channel m;
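The global average pooling of the attention-weighted feature maps can be sketched as follows, assuming the pooling averages over all pixels and channels (the 1/(h×w×c) normalization is an assumption of this sketch):

```python
import numpy as np

def component_coefficient(attention_map, feature_maps):
    """p_k: global average pooling of A_k(i,j) * F(i,j,m)
    over all pixels (i,j) and all channels m.

    attention_map: (h, w) first attention map A_k
    feature_maps:  (c, h, w) first feature maps
    """
    return float(np.mean(attention_map[None, :, :] * feature_maps))

# Toy check: constant attention 0.5 over all-ones feature maps gives p_k = 0.5.
A_k = np.full((4, 4), 0.5)
F = np.ones((2, 4, 4))
p_k = component_coefficient(A_k, F)
```

A component whose attention map responds strongly where the feature maps are also strong receives a large p_k, matching the role of p_k as an importance score.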
that is, in the training stage of the attention model, the higher the importance that channel m assigns to the image feature information of a component k of the target object, the larger the trained W_{km}. If, for component k, several weight parameters in the weight row vector W_k = [W_{k1}, W_{k2}, ..., W_{km}, ..., W_{kc}] have large values, component k is a key component of the target object. Therefore, in the use stage of the attention model, the pixel values of the high-response region of the first attention map A_k generated for the first component k will also be large, so the component coefficient p_k obtained from the first attention map A_k will be correspondingly large. Next, when determining the adversarial patch region, the high-response region of the first attention map A_k corresponding to the first component k with the largest p_k can be preferentially selected as the patch region for the simulated adversarial attack, and the final adversarial patch region is thereby determined.
For the process in (2) above of generating the first attention map corresponding to each first component using the attention map generation layer, the attention map generation layer may include K full convolution layers, each full convolution layer corresponding to one first component.

Each full convolution layer is used to perform weighting processing on the c first feature maps to obtain the first attention map of the first component corresponding to that full convolution layer. Specifically, for the full convolution layer k corresponding to the first component k, the weight parameters used in the weighting processing are the weight row vector W_k = [W_{k1}, W_{k2}, ..., W_{km}, ..., W_{kc}] corresponding to the first component k.

The first attention maps output by the K full convolution layers are the K first attention maps corresponding to the K first components.
Specifically, the attention map generation layer includes K full convolution layers, each corresponding to one component k, k ∈ [1, K]. Each full convolution layer has c channels, and each channel corresponds to one model parameter (i.e., one weight parameter W_{km}); that is, each full convolution layer corresponds to c weight parameters. For example, for the full convolution layer corresponding to component k, the corresponding c weight parameters are W_k = [W_{k1}, W_{k2}, ..., W_{km}, ..., W_{kc}]. The attention map generation layer therefore corresponds to K×c weight parameters in total (i.e., the weight matrix W_{K×c}). The function of each full convolution layer in the attention map generation layer is: for the corresponding first component k, generate the attention map A_k based on the c channel feature maps F_{h×w×m} and the corresponding weight row vector W_k (i.e., the group of weight parameters in W_{K×c} associated with the first component k). In other words, for each first component k, the c channel feature maps are converted into one first attention map; for the K first components, K×c channel feature maps are converted into K first attention maps.

The input of each full convolution layer is the c feature maps F_{h×w×m} corresponding to the c channels; the function of the full convolution layer is to weight the c feature maps based on the weight row vector W_k = [W_{k1}, W_{k2}, ..., W_{km}, ..., W_{kc}] corresponding to the first component k; and the output of each full convolution layer is the first attention map corresponding to the first component k. In a specific implementation, the first attention map is A_k(i, j) = sigmoid(Σ_{m=1}^{c} W_{km} × F(i, j, m)); that is, the input of each full convolution layer comprises the c feature maps F_{h×w×m} corresponding to the c channels, and the output is the first attention map A_k of the first component k.
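Because each full convolution layer applies one scalar weight per channel, the K full convolution layers together behave like a single 1×1 convolution from c input channels to K output channels. A minimal NumPy sketch of all K layers at once (names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_generation_layer(feature_maps, weight_matrix):
    """All K full convolution layers at once: a 1x1 convolution mapping
    c channel feature maps to K attention maps.

    feature_maps:  (c, h, w) first feature maps
    weight_matrix: (K, c) weight matrix W_{Kxc}
    returns:       (K, h, w) attention maps A_1..A_K
    """
    return sigmoid(np.tensordot(weight_matrix, feature_maps, axes=([1], [0])))

rng = np.random.default_rng(2)
F = rng.standard_normal((3, 6, 6))   # c = 3 channels
W = rng.standard_normal((4, 3))      # K = 4 components
A = attention_generation_layer(F, W)
```

Row k of the weight matrix is exactly the weight row vector W_k of full convolution layer k, so slicing `A[k]` reproduces what that single layer would output.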
It can be understood that, in the use stage of the attention model, the weight matrix W_{K×c} used in generating the K first attention maps contains the values of the model parameters of the attention map generation layer obtained by iterative training; the K full convolution layers 1 to K correspond respectively to the K weight row vectors W_1 to W_K, and the K weight row vectors form the weight matrix W_{K×c}.
Specifically, for the process of determining the component coefficient corresponding to each first component by using the component coefficient determining layer in the step (3), the component coefficient determining layer may include K global average pooling layers, where each global average pooling layer corresponds to one first component;
each global average pooling layer is used for carrying out average pooling processing on the first attention map of the first component corresponding to the global average pooling layer to obtain the component coefficient of the first component corresponding to the global average pooling layer;
the component coefficients output by the K global average pooling layers are K component coefficients corresponding to the K first components.
Specifically, the input of each global average pooling layer is the first attention map A_k of the corresponding first component k, and the output of each global average pooling layer is the component coefficient p_k of the corresponding first component k. The function of each global average pooling layer is: for the corresponding first component k, calculate the component coefficient p_k based on the first attention map A_k; that is, for each first component k, one attention map A_k is converted into one component coefficient p_k, where the component coefficient p_k is used to characterize the degree to which the first component k contributes to identifying the first target object as the correct class.
In addition, in order to enlarge the component coefficients of first components k of high importance, the input of each global average pooling layer may instead be the c first feature maps F_{h×w×m} together with the first attention map A_k of the corresponding first component k; correspondingly, the output of each global average pooling layer is the component coefficient of the first component k, i.e. p_k = (1 / (h × w × c)) Σ_{m} Σ_{i,j} A_k(i, j) × F(i, j, m).
That is, the component coefficient p_k represents the correlation coefficient between the first component k and the first target object: the larger p_k is, the more important a part of the first target object the first component k is. In the original sample image, the image sub-region where the first component k is located helps prompt the image recognition model to recognize the category of the first target object as its true label; that is, the image sub-region where the first component k is located in the first attention map A_k is a saliency region, an image sub-region that has a key influence on the image recognition result. Therefore, a sub-patch region can be selected from the image sub-region where at least one first component k is located, and adversarial perturbation can then be applied to the original sub-patch region corresponding to the at least one first component k, thereby obtaining the adversarial sample image.
The component coefficient determination layer may include a single global average pooling layer; correspondingly, the first attention maps corresponding to the K first components are input to the global average pooling layer in sequence, and the global average pooling layer sequentially outputs the K component coefficients corresponding to the K first components; that is, the global average pooling layer performs average pooling processing on the K first attention maps in turn, sequentially obtaining the K component coefficients. Alternatively, the component coefficient determination layer may include K global average pooling layers; correspondingly, the first attention maps corresponding to the K first components are respectively input to the corresponding global average pooling layers, and the K global average pooling layers respectively output the component coefficients of the corresponding first components. For example, global average pooling layer 1 outputs the component coefficient p_1 of component 1, global average pooling layer k outputs the component coefficient p_k of component k, and global average pooling layer K outputs the component coefficient p_K of component K; that is, the first attention maps corresponding to the K first components are input to the K global average pooling layers together, and the K global average pooling layers output the K component coefficients corresponding to the K first components in parallel.
In a specific embodiment, take as an example that the attention map generation layer includes K full convolution layers corresponding one-to-one to the first components, and the component coefficient determination layer includes K global average pooling layers corresponding one-to-one to the first components. If K = 4, the attention map generation layer includes 4 full convolution layers and the component coefficient determination layer includes 4 global average pooling layers; as shown in fig. 2, the specific implementation process of determining the first attention maps and the component coefficients in the countermeasure sample generation method includes:
(1) Inputting an original sample image to be disturbed into a channel feature map extraction layer in an attention model, and carrying out feature extraction processing on the original sample image to obtain c first feature maps corresponding to c channels;
(2) The c first feature maps are input to each full convolution layer and weighted to obtain the first attention map of the first component corresponding to that full convolution layer. Specifically, full convolution layer 1 outputs the first attention map 1 corresponding to first component 1, i.e. A_1; full convolution layer 2 outputs the first attention map 2 corresponding to first component 2, i.e. A_2; full convolution layer 3 outputs the first attention map 3 corresponding to first component 3, i.e. A_3; and full convolution layer 4 outputs the first attention map 4 corresponding to first component 4, i.e. A_4.

The model parameter set 1 corresponding to full convolution layer 1 is the weight row vector W_1 = [W_11, W_12, ..., W_1m, ..., W_1c]; thus the first attention map 1 is A_1(i, j) = sigmoid(Σ_{m=1}^{c} W_1m × F(i, j, m)).

The model parameter set 2 corresponding to full convolution layer 2 is the weight row vector W_2 = [W_21, W_22, ..., W_2m, ..., W_2c]; thus the first attention map 2 is A_2(i, j) = sigmoid(Σ_{m=1}^{c} W_2m × F(i, j, m)).

The model parameter set 3 corresponding to full convolution layer 3 is the weight row vector W_3 = [W_31, W_32, ..., W_3m, ..., W_3c]; thus the first attention map 3 is A_3(i, j) = sigmoid(Σ_{m=1}^{c} W_3m × F(i, j, m)).

The model parameter set 4 corresponding to full convolution layer 4 is the weight row vector W_4 = [W_41, W_42, ..., W_4m, ..., W_4c]; thus the first attention map 4 is A_4(i, j) = sigmoid(Σ_{m=1}^{c} W_4m × F(i, j, m)).
(3) For each global average pooling layer, the c first feature maps and the first attention map corresponding to that layer are input into the global average pooling layer, and global average pooling is performed on the product of the pixel value of each pixel (i, j) in the first attention map and the pixel value of the corresponding pixel (i, j) in the first feature map under channel m, obtaining the component coefficient of the first component corresponding to that global average pooling layer. Specifically, global average pooling layer 1 outputs the component coefficient 1 corresponding to first component 1, i.e. p_1; global average pooling layer 2 outputs the component coefficient 2 corresponding to first component 2, i.e. p_2; global average pooling layer 3 outputs the component coefficient 3 corresponding to first component 3, i.e. p_3; and global average pooling layer 4 outputs the component coefficient 4 corresponding to first component 4, i.e. p_4.

The inputs of global average pooling layer 1 are the c first feature maps and the first attention map 1, i.e. A_1, and the output is the component coefficient 1, i.e. p_1 = (1/(h×w×c)) Σ_m Σ_{i,j} A_1(i, j) × F(i, j, m).

The inputs of global average pooling layer 2 are the c first feature maps and the first attention map 2, i.e. A_2, and the output is the component coefficient 2, i.e. p_2 = (1/(h×w×c)) Σ_m Σ_{i,j} A_2(i, j) × F(i, j, m).

The inputs of global average pooling layer 3 are the c first feature maps and the first attention map 3, i.e. A_3, and the output is the component coefficient 3, i.e. p_3 = (1/(h×w×c)) Σ_m Σ_{i,j} A_3(i, j) × F(i, j, m).

The inputs of global average pooling layer 4 are the c first feature maps and the first attention map 4, i.e. A_4, and the output is the component coefficient 4, i.e. p_4 = (1/(h×w×c)) Σ_m Σ_{i,j} A_4(i, j) × F(i, j, m).
It should be noted that, the training process for the attention model may refer to the specific implementation process of the following embodiment, which is not described herein.
Further, after the first attention map and component coefficient corresponding to each first component are output by the attention model, as shown in fig. 3, step S106, determining the adversarial patch region in the original sample image based on the first attention maps and component coefficients of the first components, specifically includes:
S1062, determining at least one target sub-patch region among the image sub-regions corresponding to the K first components based on the first attention map and component coefficient of each first component; each target sub-patch region corresponds to one first component, i.e. each target sub-patch region corresponds to a local image region within the image sub-region where a certain first component is located in the original sample image.
And S1064, determining an anti-patch area in the original sample image based on the at least one target sub-patch area.
Specifically, after the first attention map and the component coefficient of each first component in the first target object are determined, a preset number of top-ranked component coefficients may be selected directly as the target component coefficients based on the magnitudes of the component coefficients of the first components; the image sub-region whose pixel values are greater than a preset threshold in the first attention map corresponding to each target component coefficient is determined as a target sub-patch region, and the adversarial patch region is determined in the original sample image based on the preset number of target sub-patch regions. For example, if the preset number is 2, the 2 largest component coefficients are selected as the target component coefficients.
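This selection step can be sketched as follows, assuming the threshold is applied directly to attention-map pixel values; the function and parameter names are illustrative:

```python
import numpy as np

def select_target_subpatches(attention_maps, coeffs, top_n=2, threshold=0.8):
    """Pick the top_n components by component coefficient; for each, the
    target sub-patch is the set of attention-map pixels above the threshold.

    attention_maps: (K, h, w) first attention maps
    coeffs:         (K,) component coefficients
    returns: list of (component_index, boolean_mask) pairs, best first
    """
    order = np.argsort(coeffs)[::-1][:top_n]
    return [(int(k), attention_maps[k] > threshold) for k in order]

A = np.zeros((3, 4, 4))
A[0, :2, :2] = 0.9           # component 0: strong top-left response
A[2, 2:, 2:] = 0.95          # component 2: strong bottom-right response
p = np.array([0.6, 0.1, 0.7])
patches = select_target_subpatches(A, p, top_n=2)
```

Only the high-response pixels of each selected component survive the threshold, so each sub-patch is a local region inside that component's image sub-region.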
In addition, in order to ensure the attack success rate of the finally generated adversarial sample image, a simulated attack mechanism can be introduced, and the simulated attack results can then be combined to determine which component coefficients are selected as the target component coefficients.
specifically, for the case of introducing the simulated attack mechanism to determine the final anti-patch area, S1062 described above, determines at least one target sub-patch area in the image sub-areas corresponding to the K first components based on the first attention map and the component coefficients of each first component, which specifically includes:
based on the first attention map and the component coefficients, performing the following operations of simulating an attack target model:
selecting a target component coefficient from the component coefficients of the K first components; the target component coefficient is the largest among the component coefficients that have not yet been selected;
performing a perturbation attack on at least one attack object based on the first attention map corresponding to the target component coefficient, to obtain a simulated attack sample image; the at least one attack object corresponds one-to-one to the target sub-patch regions corresponding to the selected component coefficients;
The target sub-patch region corresponding to a selected component coefficient may be the image sub-region whose pixel values are greater than a preset threshold in the corresponding first attention map. The target sub-patch region is therefore not necessarily the entire image sub-region where the corresponding first component is located, but may be a local part of it; that is, the image sub-region of a first component of higher importance is located first, and then the local image region whose pixel values are greater than the preset threshold is located within that sub-region. In this way, the adversarial patch added on the first component is more targeted, and a more effective target sub-patch region is located on the first component more accurately, avoiding both the problem that a target sub-patch region with too large an area is easy to detect and the problem that a target sub-patch region with too small an area yields a poor attack effect.
Inputting the simulated attack sample image into a target model to be attacked, and carrying out classification identification on the simulated attack sample image to obtain an image classification result of the simulated attack sample image;
If the image classification result indicates that the image classification is correct, the operation of simulating an attack on the target model continues to be executed, where correct image classification indicates that the simulated attack on the target model has failed. That is, performing the adversarial attack based only on the target sub-patch regions corresponding to the currently selected target component coefficients does not yield an ideal adversarial effect, so a new target sub-patch region needs to be added to improve it, until the adversarial attack based on the target sub-patch regions corresponding to the currently selected target component coefficients causes the target model to misrecognize the image. If even the combination of the sub-patch regions corresponding to all the component coefficients cannot cause misrecognition, the corresponding original sample image is discarded, and adversarial patch regions continue to be determined for the other selected original sample images. An image classification error indicates that the simulated attack on the target model has succeeded.
If the image classification result represents an image classification error, the at least one target sub-patch area is determined based on the at least one attack-resistant object.
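The loop described by the three branches above can be sketched as the following greedy procedure; the classifier and perturbation function below are toy stand-ins for the target model and the perturbation attack, and all names are illustrative:

```python
import numpy as np

def simulate_attack(image, attention_maps, coeffs, classify, perturb, true_label):
    """Greedy simulated attack: add components in descending coefficient order
    until the stand-in target model misclassifies, or give up.

    classify(img) -> predicted label; perturb(img, masks) -> attacked image.
    Returns the list of selected component indices, or None if even all
    components together classify correctly (the sample would be discarded).
    """
    order = list(np.argsort(coeffs)[::-1])
    selected, masks = [], []
    for k in order:
        selected.append(k)
        masks.append(attention_maps[k] > 0.5)   # high-response region of A_k
        attacked = perturb(image, masks)
        if classify(attacked) != true_label:    # misclassified: attack succeeded
            return selected
    return None                                 # every combination failed

image = np.zeros((4, 4))
maps = np.zeros((2, 4, 4))
maps[0, :2, :2] = 0.9
maps[1, 2:, 2:] = 0.9
coeffs = np.array([0.9, 0.5])

def perturb(img, masks):
    out = img.copy()
    for m in masks:
        out[m] = 1.0
    return out

def classify(img):          # stand-in target model
    return 1 if img.sum() >= 8 else 0

selected = simulate_attack(image, maps, coeffs, classify, perturb, true_label=0)
```

In this toy run, one component alone is not enough to flip the prediction, so the loop adds the second component before the simulated attack succeeds.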
The target model may be the image recognition model that actually needs to be attacked. However, considering that the image recognition model actually to be attacked may be unknown to the adversarial sample generator (for example, when the adversarial sample generator and the trainer of the image recognition model are different parties), and based on the transferability of adversarial examples between the classifiers of different models, the attention model itself may be used as the target model to be attacked in the simulated attack stage, and the finally generated adversarial sample images can then be reused to attack the image recognition model that actually needs to be attacked.
Further, in order to ensure the accuracy of the disturbance attack on the selected attack object, based on the first attention map corresponding to the target component coefficient, the disturbance attack is performed on at least one attack object to obtain a simulated attack sample image, which specifically includes:
in a first attention map corresponding to the target component coefficient selected at this time, carrying out shielding treatment on pixel points with pixel values smaller than or equal to a preset threshold value to obtain a target attention map corresponding to the target component coefficient;
adding the target attention map, as an attack object, to the attack object set; the attack object set includes the target attention maps corresponding to the selected target component coefficients;

performing a perturbation attack on the latest attack object set to obtain the attack object set after the perturbation attack;

and generating a simulated attack sample image based on the attack object set after the perturbation attack.
Specifically, for the first component k corresponding to the currently selected target component coefficient, the pixels in the first attention map A_k whose pixel values are less than or equal to the second preset threshold are occluded, obtaining the target attention map A′_k corresponding to the first component k. That is, for the first component k with component coefficient p_k, the first attention map A_k is converted into the target attention map A′_k, in which only the pixels whose pixel values are greater than the second preset threshold are retained (i.e. the non-high-response regions of the first attention map A_k are occluded, and the high-response region of the first attention map A_k is retained).
It can be understood that the purpose of occluding the pixels of the non-high-response regions in the first attention map is to further weaken the non-high-response regions and thereby further highlight the high-response region; the non-high-response regions could therefore be covered with a preset image to obtain the target attention map. However, considering that the target attention map serves as the object of the adversarial attack in the simulated attack stage, in order to improve the accuracy of the adversarial perturbation on the pixels of the high-response region, the pixel values of the pixels of the non-high-response regions can simply be set to small values, for example set to zero, so that only the high-response region corresponding to component k is retained in the target attention map, which makes it convenient to accurately perform the perturbation attack on the pixels of the high-response region.

Specifically, for the first attention map of the first component k, the occlusion processing of the target pixels satisfying the preset condition (for example, the pixels whose pixel values are less than or equal to the second preset threshold) may consist in setting the pixel values of those target pixels to zero; that is, only the pixels whose pixel values are greater than the second preset threshold are retained in the target attention map: the pixel values of the pixels of the non-high-response regions corresponding to component k in the first attention map are set to zero, while the pixel values of the pixels of the high-response region remain unchanged. The purpose of zeroing the pixel values of the non-high-response regions in the first attention map is that the pixels whose values are non-zero in the target attention map are exactly the highlighted pixels of the first attention map A_k, so that the high-response region formed by the unoccluded highlighted pixels serves as the adversarial attack object; the perturbation attack can then be applied more accurately to the pixels with non-zero values during the simulated adversarial perturbation, and the simulated attack sample image is then generated based on the attack object set after the perturbation attack.
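The zeroing of non-high-response pixels can be sketched in one line; the threshold value is an illustrative stand-in for the second preset threshold:

```python
import numpy as np

def target_attention_map(attention_map, threshold=0.8):
    """Zero out non-high-response pixels, keeping only the high-response region."""
    return np.where(attention_map > threshold, attention_map, 0.0)

A_k = np.array([[0.9, 0.3],
                [0.1, 0.85]])
A_target = target_attention_map(A_k)
```

After this step, every non-zero pixel of the target attention map is a highlighted pixel of A_k, which is what lets the simulated attack perturb exactly the high-response region.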
Further, considering that the at least one target sub-patch area obtained after the simulated attack succeeds corresponds to the first attention maps (and may equally be regarded as corresponding to the target attention maps), while in the process of generating the countermeasure sample image the disturbance processing needs to be performed on the pixel points in the original sample image, the at least one target sub-patch area needs to be mapped into the original sample image to determine the final anti-patch area. Therefore, the step S1064 of determining the anti-patch area in the original sample image based on the at least one target sub-patch area specifically includes:
Determining original position information corresponding to each pixel point in the at least one target sub-patch area;
based on the above-described original position information, an anti-patch area is determined in the original sample image.
Specifically, the determining the patch countermeasure area in the original sample image based on the original position information includes:
determining at least one original sub-patch area in the original sample image based on the original position information;
and determining an anti-patch area in the original sample image by combining the at least one original sub-patch area.
Further, after the anti-patch area including the original sub-patch area corresponding to at least one first component is determined by taking each first component included in the first target object in the original sample image as the minimum analysis unit, the disturbance processing may be performed on the original sample image. The step S108 of performing the disturbance processing on the corresponding image area in the original sample image based on the anti-patch area to obtain a countermeasure sample image specifically includes:
performing disturbance processing on the corresponding image area in the original sample image based on the anti-patch area by using a projection gradient descent (PGD) counter-disturbance method, to obtain a countermeasure sample image.
The process of performing disturbance processing on the anti-patch area in the original sample image may adopt a projection gradient descent (PGD) iterative disturbance attack, may also adopt a fast gradient sign method (FGSM) disturbance attack, and may also adopt other disturbance attack modes, all of which fall within the protection scope of the present application.
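A compact sketch of a PGD attack restricted to the anti-patch region is given below; `grad_fn`, the step sizes, and the function signature are illustrative assumptions, not the patent's API:

```python
import numpy as np

def pgd_attack_region(image, mask, grad_fn, eps=8/255, alpha=2/255, steps=10):
    """PGD restricted to the anti-patch region: `mask` is 1 inside the patch
    and 0 elsewhere; `grad_fn(x)` is assumed to return the gradient of the
    target model's loss w.r.t. the input image."""
    x = image.copy()
    for _ in range(steps):
        g = grad_fn(x)
        # ascend the loss, but only inside the anti-patch region
        x = x + alpha * np.sign(g) * mask
        # project back into the eps-ball around the original image
        x = np.clip(x, image - eps, image + eps)
        x = np.clip(x, 0.0, 1.0)  # keep a valid image range
    return x
```

Swapping the loop body for a single signed-gradient step would give the FGSM variant mentioned above; the region mask is what keeps the disturbance confined to the patch.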
Specifically, each first component k corresponds to a sub-patch area T_k (i.e. the high response area in the first attention map A_k, that is, the image sub-area that is not occluded in the target attention map). For the process of simulating the attack: if the component coefficient with the maximum value is p_x, the component coefficient p_x is taken as the target component coefficient, the pixel points with a pixel value smaller than or equal to the second preset threshold in the first attention map A_x are occluded, and the target attention map of the first component x is obtained and added to the attack object set S. Then, a PGD iterative disturbance attack is performed on the attack object set S to obtain a simulated attack sample image; each update of the attack objects increases the loss function of the target model, the purpose being to cause the classifier in the target model to misrecognize. If the image class recognition result for the obtained simulated attack sample image indicates that the classifier recognizes correctly, the next target component coefficient is selected and the above operations are executed to obtain a new simulated attack sample image, until the image class recognition result for the obtained simulated attack sample image indicates that the classifier misrecognizes, whereupon the anti-patch area is determined in the original sample image based on the latest attack object set S;
That is, the high response area in the first attention map A_x corresponding to the first component x with the largest component coefficient is taken as the counterattack object (i.e. the target attention map of the first component x); the target attention map is obtained by occluding the non-high response area of the first attention map A_x (namely the image sub-areas other than that of the first component x, as well as the non-high response area within the image sub-area of the first component x), so that the disturbance and interference act on the high response area of the first component x;
if the PGD iterative disturbance attack on the attack object set S can cause the classifier to be misrecognized, the attack object set S is output, and the anti-patch area is determined based on the attack object set S;
if the PGD iterative disturbance attack on the attack object set S cannot cause the classifier in the target model to be misrecognized, the next attack object is continuously added to the attack object set until the latest attack object set S can cause the classifier in the target model to be misrecognized, whereupon the attack object set S is output, and the anti-patch area is determined based on the attack object set S.
Specifically, if the latest attack object set that can cause the target model to be misrecognized in the simulated attack stage corresponds to the target sub-patch areas {T_x, ..., T_y}, coordinate conversion is performed on {T_x, ..., T_y} to determine the corresponding original sub-patch areas in the original sample image; correspondingly, the union of the original sub-patch areas is the anti-patch area in the original sample image.
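The coordinate conversion and union step might be sketched as follows, assuming for illustration that mapping attention-map coordinates back to original-image coordinates is a simple nearest-neighbour upscale (the exact transform is not specified in the passage above, so this is an assumption):

```python
import numpy as np

def union_patch_mask(sub_masks, orig_shape, scale):
    """Map each target sub-patch mask (attention-map resolution) back to the
    original image by block-repeating it `scale` times per axis, then take
    the union of all sub-patch areas as the anti-patch region."""
    union = np.zeros(orig_shape, dtype=bool)
    for m in sub_masks:
        # nearest-neighbour upscale of the binary mask
        up = np.kron(m.astype(int), np.ones((scale, scale), dtype=int)) > 0
        union |= up
    return union
```

The returned boolean mask is exactly the region on which the PGD disturbance of step S108 would then operate.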
In a specific embodiment, as shown in fig. 4, a specific implementation process for determining an challenge patch area involved in a challenge sample generation method is provided, including:
(1) Inputting an original sample image to be disturbed into a trained attention model to obtain first attention maps and first component coefficients respectively corresponding to a plurality of first components contained in a first target object in the original sample image; wherein the number of the first components contained in the first target object is 4, and correspondingly, the attention model outputs the first attention map A_11 and the first component coefficient p_11 corresponding to the first component 1, the first attention map A_12 and the first component coefficient p_12 corresponding to the first component 2, the first attention map A_13 and the first component coefficient p_13 corresponding to the first component 3, and the first attention map A_14 and the first component coefficient p_14 corresponding to the first component 4; that is, the component coefficient vector corresponding to the first target object is P = {p_11, p_12, p_13, p_14};
For the first attention maps and the first component coefficients respectively corresponding to the first components 1 to 4, reference may be made to the specific implementation process shown in fig. 2, which is not described herein again.
(2) If the order of the first component coefficients in the component coefficient vector P from large to small is p_14 > p_13 > p_12 > p_11, the first selected target component coefficient is p_14, and the first attention map corresponding to the target component coefficient p_14 is A_14; then, the simulated attack sample image 1 is generated based on the first attention map A_14, specifically:
(2-1) occluding, in the first attention map A_14 corresponding to the first selected target component coefficient p_14, the pixel points with pixel values smaller than or equal to the preset threshold, to obtain the target attention map 4 corresponding to the target component coefficient p_14;
(2-2) adding the target attention map 4 to the attack object set S as a counterattack object;
(2-3) performing a disturbance attack on the latest attack object set S to obtain the attack object set after the disturbance attack;
(2-4) generating the simulated attack sample image 1 based on the remaining first attention maps (i.e. A_11, A_12, A_13) and the attack object set after the disturbance attack.
Specifically, the first attention maps A_11, A_12, A_13 and the disturbed target attention map 4 may be synthesized to obtain the simulated attack sample image 1;
alternatively, the target attention maps corresponding to the first component coefficients that have not been selected and the disturbed target attention maps corresponding to the first component coefficients that have been selected may be combined to obtain the simulated attack sample image 1; it is also possible to determine, based on the latest attack object set S, a simulated patch area to be simulated-disturbed in the original sample image, and to perform the disturbance attack on the simulated patch area to obtain the simulated attack sample image 1; all of these generation modes of the simulated attack sample image fall within the protection scope of the present application.
(3) Inputting the simulated attack sample image 1 into a target model to be attacked, and carrying out classification identification on the simulated attack sample image 1 to obtain an image classification result 1 of the simulated attack sample image 1;
(4) If the image classification result 1 indicates that the image classification is correct (i.e. the simulated attack fails), the next target component coefficient p_13 is continuously selected, and the first attention map corresponding to the target component coefficient p_13 is A_13; then, the simulated attack sample image 2 is generated based on the first attention map A_13, specifically:
(4-1) occluding, in the first attention map A_13 corresponding to the second selected target component coefficient p_13, the pixel points with pixel values smaller than or equal to the preset threshold, to obtain the target attention map 3 corresponding to the target component coefficient p_13;
(4-2) adding the target attention map 3 to the attack object set S as a counterattack object;
(4-3) performing a disturbance attack on the latest attack object set S to obtain the attack object set after the disturbance attack;
(4-4) generating the simulated attack sample image 2 based on the remaining first attention maps (i.e. A_11, A_12) and the attack object set after the disturbance attack.
Specifically, the first attention maps A_11 and A_12 and the disturbed target attention maps 3 and 4 may be synthesized to obtain the simulated attack sample image 2;
in addition, the generation process of the simulated attack sample image 2 may refer to other implementations of the simulated attack sample image 1.
(5) Inputting the simulated attack sample image 2 into a target model to be attacked, and carrying out classification identification on the simulated attack sample image 2 to obtain an image classification result 2 of the simulated attack sample image 2;
(6) If the image classification result 2 indicates an image classification error (i.e. the simulated attack is successful), the target sub-patch areas {T_4, T_3} are determined based on the latest attack object set S;
wherein T_3 represents the image sub-area formed by the pixel points whose pixel value is greater than the preset threshold in the target attention map 3 of the first component 3, and T_4 represents the image sub-area formed by the pixel points whose pixel value is greater than the preset threshold in the target attention map 4 of the first component 4.
(7) Determining, in the original sample image, the original sub-patch areas corresponding to the target sub-patch areas {T_4, T_3}, and determining the union of the original sub-patch areas as the anti-patch area in the original sample image;
(8) Performing disturbance processing on the original sub-patch areas (i.e. the anti-patch area) in the original sample image by using the projection gradient descent (PGD) counter-disturbance method, to obtain the countermeasure sample image.
In addition, it should be noted that, if the image classification result 2 indicates that the image classification is correct (i.e. the simulated attack fails), the next target component coefficient p_12 is continuously selected, and, with reference to the above steps, the simulated attack sample image 3 is generated, the simulated attack is performed based on the simulated attack sample image 3 to obtain the image classification result 3, and whether the image classification result 3 indicates an image classification error is judged, which is not described herein again.
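The greedy selection loop of steps (2) to (6) above can be sketched as follows; `fools_model` stands in for the PGD disturbance attack plus the classifier check, and all names here are illustrative, not the patent's API:

```python
import numpy as np

def greedy_patch_search(attn_maps, coeffs, fools_model, threshold):
    """Add components' high response areas to the attack object set in
    descending order of component coefficient, stopping as soon as the
    simulated attack succeeds (i.e. the classifier misrecognizes)."""
    order = np.argsort(coeffs)[::-1]           # largest coefficient first
    attack_set = []
    for k in order:
        masked = attn_maps[k].copy()
        masked[masked <= threshold] = 0.0      # occlude non-high response area
        attack_set.append(masked)              # target attention map of k
        if fools_model(attack_set):            # simulated attack succeeded
            return attack_set
    return attack_set                          # all components were needed
```

The returned attack set plays the role of the latest set S, from whose non-zero pixels the target sub-patch areas {T_x, ..., T_y} would then be read off.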
According to the countermeasure sample generation method provided in the embodiments of the present application, an original sample image is input into a trained attention model to obtain first attention maps and component coefficients corresponding to the first components contained in a first target object in the original sample image; an anti-patch area is determined in the original sample image based on the first attention maps and the component coefficients; and disturbance processing is performed on the original sample image based on the anti-patch area to obtain a countermeasure sample image. By taking each first component contained in the first target object in the original sample image as the minimum analysis unit and fully utilizing the intermediate data output by the trained attention model (namely the first attention map and the component coefficient of each first component), the image sub-area corresponding to the first component with a higher importance degree is selected as the anti-patch area in the original sample image, so that the anti-patch area can be located in the original sample image at a finer granularity. The located anti-patch area is thus more targeted and more accurate, which can improve the counterattack effect of the countermeasure sample image generated based on the anti-patch area, make the target model obtained by adversarial training with the countermeasure sample image more secure, avoid potential safety hazards of the target model in the application stage, and enable the target model to better cope with counterattacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, that is, each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are more concealed, which further improves the countermeasure effect of the countermeasure sample image.
Corresponding to the countermeasure sample generation method described above in fig. 1 to fig. 4, and based on the same technical concept, an embodiment of the present application further provides a method for training an attention model. Fig. 5 is a schematic flowchart of the method for training an attention model provided in the embodiment of the present application; the method in fig. 5 can be executed by a server or a specific terminal device, and as shown in fig. 5, the method at least includes the following steps:
s502, acquiring a plurality of first sample images; the first sample image comprising an image area of a second target object, the second target object comprising at least one second component;
The second target objects in the plurality of first sample images have the same number of components, with component types in one-to-one correspondence, but the classification categories of the second target objects in the plurality of first sample images may be different; for example, the classification category of the second target object in one first sample image is a long table while that in another first sample image is a round table, but the two second target objects contain the same number of second components of the same component types. Moreover, the second target object is of the same type as the first target object; for example, if the first target object in the original sample image to be disturbed is a table, the second target object in the first sample image used in the attention model training stage is also a table; for another example, if the first target object in the original sample image to be disturbed is a face, the second target object in the first sample image used in the attention model training stage is also a face. Correspondingly, the second components are the same in number as the first components, with component types in one-to-one correspondence.
S504, inputting each first sample image into the attention model to be trained to obtain the prediction probability sets respectively corresponding to the first sample images; wherein the prediction probability set comprises the prediction probability of the second target object under each classification category, the prediction probability being determined based on the component coefficients of the second components, and the component coefficients being determined based on the second attention maps of the second components;
wherein the prediction probability set includes the prediction probabilities of the second target object over r classification categories (i.e. classification category n, n = [1, r]); for the case where the type of the second target object is a table, the r classification categories may include: a square table, a round table, a long table, etc.; for the case where the type of the second target object is a face, the r classification categories may include: a living face or a prosthetic face. That is, the classifier in the attention model performs recognition of the subdivision categories of a target object having specified components.
S506, based on the prediction probability set and the real label of the first sample image, performing iterative training on the attention model to be trained to obtain a trained attention model.
It should be noted that, since the number of components contained in the second target objects in the plurality of first sample images is the same and the types of the components are in one-to-one correspondence, the corresponding attention model may be trained for different types of target objects (i.e., target objects containing different numbers of components), that is, the attention model is related to the type of the first target object in the original sample image to be disturbed, for example, if the first target object in the original sample image to be disturbed is a table, the trained attention model may be the attention model 1; as another example, if the first target object in the original sample image to be perturbed is a human face, the trained attention model may be the attention model 2.
In the embodiment of the present application, for the training stage of the attention model, each second component contained in the second target object in the first sample image is taken as the minimum analysis unit; the second attention map and the component coefficient of each second component are obtained by using the attention model to be trained, the prediction probability set of the first sample image is obtained based on the second attention maps and the component coefficients, and the attention model is iteratively trained based on the prediction probability set, so that the trained attention model can output component attention maps and component coefficients with higher accuracy. In the using stage of the attention model, each first component contained in the first target object in the original sample image to be disturbed is taken as the minimum analysis unit, and the intermediate data output by the trained attention model (namely the first attention map and the component coefficient of each first component) is fully utilized to select, in the original sample image, the image sub-area corresponding to the first component with a higher importance degree as the anti-patch area, so that the anti-patch area can be located in the original sample image at a finer granularity. The located anti-patch area is thus more targeted and more accurate, which can improve the counterattack effect of the countermeasure sample image generated based on the anti-patch area, make the target model obtained by adversarial training with the countermeasure sample image more secure, avoid potential safety hazards of the target model in the application stage, and enable the target model to better cope with counterattacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, that is, each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are more concealed, which further improves the countermeasure effect of the countermeasure sample image.
Wherein the attention model includes: a channel feature map extraction layer, an attention map generation layer, a component coefficient determination layer, and a classifier;
the step S504, inputting each first sample image to the attention model to be trained, and obtaining the prediction probability set corresponding to each first sample image, specifically includes:
(1) For each first sample image, the channel feature map extraction layer is used for carrying out feature extraction processing on the first sample image to obtain c second feature maps corresponding to c channels; each channel corresponds to a second feature map, c represents the number of channels in the channel feature map extraction layer, and c is an integer greater than 1;
the generating process of the second feature map may refer to the generating process of the first feature map, which is not described herein.
Also, the channel feature map extraction layer includes a convolutional network layer having c channels, each channel being used for outputting a second feature map F_m of size h×w, m = [1, c].
Specifically, for each first sample image, the first sample image is input into the channel feature map extraction layer in the attention model to be trained, and feature extraction is performed on the first sample image to obtain the c second feature maps F_m corresponding to the c channels.
(2) The attention map generation layer is used for carrying out weighting processing on the c second feature maps to obtain K second attention maps corresponding to the K second components; each second component corresponds to a second attention map, K represents the number of components in the second target object, and K is an integer greater than 1;
the generating process of the second attention map of the second component may refer to the generating process of the first attention map, which is not described herein.
Also, the attention map generation layer may include K full convolution layers, each full convolution layer corresponding to one second component.
It will be appreciated that, for the training stage of the attention model, the weight matrix W_{K×c} used in generating the K second attention maps consists of the model parameters of the attention map generation layer after the previous round of training; the K full convolution layers 1 to K correspond to the K weight row vectors W_1 to W_K, each full convolution layer corresponds to one weight row vector, and the K weight row vectors form the weight matrix W_{K×c}.
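The weighted combination performed by one full convolution (1×1) layer can be sketched as follows: the k-th second attention map is A_k = Σ_m W_km · F_m over the c channel feature maps. The symbols follow the description above, while the function itself is an illustrative assumption:

```python
import numpy as np

def component_attention(feature_maps, weight_row):
    """One row of the weight matrix W applied to the c second feature maps:
    feature_maps has shape (c, h, w), weight_row has shape (c,), and the
    result A_k = sum_m W_km * F_m has shape (h, w)."""
    return np.tensordot(weight_row, feature_maps, axes=1)
```

Stacking the K row vectors recovers the full weight matrix W_{K×c}, with one attention map per second component.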
(3) The component coefficient determining layer is used for respectively performing average pooling processing on the K second attention maps to obtain K component coefficients corresponding to the K second components, wherein each second component corresponds to one component coefficient;
The determining process of the component coefficient of the second component may refer to the determining process of the component coefficient of the first component, which is not described herein.
Also, the component coefficient determining layer may include K global average pooling layers, each global average pooling layer corresponding to one second component.
Alternatively, the component coefficient determining layer may include one global average pooling layer; correspondingly, the second attention maps corresponding to the K second components are sequentially input to the global average pooling layer, which is configured to sequentially output the K component coefficients corresponding to the K second components. In this case, the attention model may further include a component coefficient splicing layer: the K component coefficients corresponding to the K second components are input to the component coefficient splicing layer, which is configured to output a component coefficient vector P = {p_21, p_22, ..., p_2k, ..., p_2K} containing the K component coefficients corresponding to the K second components. Specifically, the K component coefficients corresponding to the K second components are input into the component coefficient splicing layer and spliced to obtain the component coefficient vector P; the component coefficient vector P is then input to the classifier in the attention model.
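The global average pooling and splicing described above reduce each second attention map to one scalar component coefficient and concatenate them into the vector P; a minimal sketch (the function name is assumed):

```python
import numpy as np

def component_coefficients(attention_maps):
    """Global average pooling of each second attention map into a scalar
    component coefficient, spliced into the component coefficient vector P
    that is fed to the classifier."""
    return np.array([a.mean() for a in attention_maps])
```

Because each coefficient is the mean of its attention map, a component whose map carries more high-response mass receives a larger coefficient, which is exactly what the selection order in the simulated attack stage relies on.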
(4) The classifier is used for classifying and identifying the second target object based on K component coefficients corresponding to K second components to obtain a prediction probability set corresponding to the first sample image.
Considering that, in the training stage of the attention model, the prediction probability of the second target object under each classification category needs to be determined based on the K component coefficients corresponding to the K second components, and the attention model is then iteratively trained based on the prediction probabilities, the attention model further comprises a classifier. The component coefficient vector P corresponding to the second target object is input into the classifier, and the second target object is classified and recognized to obtain the prediction probability set Q = {q_1, q_2, ..., q_n, ..., q_r} corresponding to the first sample image. Specifically, the classification category of the second target object in the first sample image is determined according to the component coefficient vector P of the K second components of the second target object (for example, whether the category of the second target object in the first sample image is a living face is determined according to the component coefficients of components such as the five sense organs and the cheeks of the face; for another example, whether the category of the second target object in the first sample image is a round table is determined according to the component coefficients of components such as the tabletop and the table legs).
After the first sample image is input to the attention model to be trained to obtain the prediction probability set Q of the second target object under each classification category, the model parameters of the attention model need to be continuously adjusted based on the prediction probability set Q until the model training end condition is reached, so as to obtain the trained attention model. Specifically, the step S506 of iteratively training the attention model to be trained based on the prediction probability set and the real label of the first sample image to obtain the trained attention model specifically includes:
Determining a loss value corresponding to the first sample image based on the prediction probability set, the real label of the first sample image and a preset loss function; wherein the preset loss function may be a commonly used cross entropy loss function;
and carrying out iterative training on the attention model to be trained based on the loss value of each first sample image to obtain a trained attention model.
In order to ensure the cohesion of the attention maps output by the trained attention model in the using stage, that is, to ensure that the trained attention model outputs attention maps containing more concentrated high response areas, the loss value considers not only the first loss value corresponding to the classification loss function, but also the second loss value corresponding to the grouping loss function associated with the attention map generation layer; the model parameters of the attention model are then adjusted based on the first loss value and the second loss value. Specifically, determining the loss value corresponding to the first sample image based on the prediction probability set, the real label of the first sample image, and the preset loss function specifically includes:
step one, determining a first loss value based on the prediction probability set, a real label of a first sample image and a preset loss function;
Specifically, the prediction probability set Q and the real label of the second target object in the first sample image are substituted into the preset loss function to calculate the classification loss value L_cls;
Step two, determining a second loss value based on the K second attention maps corresponding to the K second components;
specifically, the grouping loss value L_g is calculated based on the second attention maps of the K second components and the grouping loss function

L_g = Σ_{k=1}^{K} Σ_{(i,j)} A_k(i,j) · [(i − i*_k)² + (j − j*_k)²],

wherein A_k(i,j) represents the pixel value of the pixel point (i,j) in the second attention map A_k of the second component k, i*_k represents the abscissa of the maximum pixel value in A_k (i.e. the abscissa of the brightest pixel point), j*_k represents the ordinate of the maximum pixel value in A_k (i.e. the ordinate of the brightest pixel point), and the squared-distance term is used for constraining the pixel point (i,j) to be close to the pixel point (i*_k, j*_k) with the maximum pixel value in A_k.
Since every term of L_g is non-negative, minimizing L_g when adjusting the model parameters of the attention model based on the sum of the loss values of the respective first sample images (i.e. the total loss value) amounts to solving for and adjusting W(k,m) (i.e. optimizing the model parameters in the attention map generation layer, namely the weight matrix W_{K×c}) so that the pixel points (i,j) carrying large pixel values move closer to the brightest pixel point (i*_k, j*_k) of the second attention map A_k; as a result, the high response area in the second attention map A_k becomes more concentrated in a small area of the image sub-area where the second component k is located. Correspondingly, when the trained attention model is used to generate the first attention map of the first component k, the high response area in the first attention map can likewise be more concentrated in a small area of the image sub-area where the first component k is located.
Step three, determining the sum of the first loss value and the second loss value as the loss value corresponding to the first sample image. Specifically, the loss value corresponding to the first sample image is L = L_cls + L_g; then, the model parameters of the attention model are optimized and adjusted by using a gradient descent method based on the sum of the loss values of the first sample images (i.e. the total loss value), until the loss function of the attention model converges or the total number of iterative training rounds is reached, so as to obtain the trained attention model.
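A sketch of the combined loss L = L_cls + L_g follows; the exact weighting inside L_g is reconstructed from the distance-constraint description above and should be read as an assumption rather than the patent's verbatim formula:

```python
import numpy as np

def grouping_loss(attention_maps):
    """For each second attention map A_k, penalize attention mass lying far
    from the brightest pixel (i*_k, j*_k), pulling the high response area
    into one concentrated region."""
    total = 0.0
    for a in attention_maps:
        i_star, j_star = np.unravel_index(np.argmax(a), a.shape)
        ii, jj = np.indices(a.shape)
        dist2 = (ii - i_star) ** 2 + (jj - j_star) ** 2
        total += float((a * dist2).sum())
    return total

def total_loss(cls_loss, attention_maps):
    """Total training loss L = L_cls + L_g as described above."""
    return cls_loss + grouping_loss(attention_maps)
```

A map whose mass sits entirely on its brightest pixel contributes zero grouping loss, so gradient descent on L drives each attention map toward a single tight high-response region.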
In a specific embodiment, still taking the case where the attention map generation layer includes K full convolution layers (in one-to-one correspondence with the second components) and the component coefficient determination layer includes K global average pooling layers (in one-to-one correspondence with the second components) as an example: if K = 4, the attention map generation layer includes 4 full convolution layers and the component coefficient determination layer includes 4 global average pooling layers. As shown in fig. 6, the specific implementation process of the training of the attention model described above includes:
(1) For each first sample image, inputting the first sample image into the channel feature map extraction layer in the attention model, and performing feature extraction processing on the first sample image to obtain c second feature maps corresponding to the c channels;
(2) Inputting the c second feature maps into each full convolution layer, and weighting the c second feature maps to obtain the second attention map of the second component corresponding to that full convolution layer; specifically, full convolution layer 1 outputs the second attention map 1 corresponding to second component 1, i.e., A_21; full convolution layer 2 outputs the second attention map 2 corresponding to second component 2, i.e., A_22; full convolution layer 3 outputs the second attention map 3 corresponding to second component 3, i.e., A_23; and full convolution layer 4 outputs the second attention map 4 corresponding to second component 4, i.e., A_24.
Wherein the model parameter set 1 corresponding to full convolution layer 1 is the weight row vector W_1 = [W_11, W_12, ..., W_1m, ..., W_1c]; thus the second attention map 1 is A_21(i, j) = Σ_{m=1..c} W_1m · F_m(i, j), where F_m denotes the second feature map of channel m.
The model parameter set 2 corresponding to full convolution layer 2 is the weight row vector W_2 = [W_21, W_22, ..., W_2m, ..., W_2c]; thus the second attention map 2 is A_22(i, j) = Σ_{m=1..c} W_2m · F_m(i, j).
The model parameter set 3 corresponding to full convolution layer 3 is the weight row vector W_3 = [W_31, W_32, ..., W_3m, ..., W_3c]; thus the second attention map 3 is A_23(i, j) = Σ_{m=1..c} W_3m · F_m(i, j).
The model parameter set 4 corresponding to full convolution layer 4 is the weight row vector W_4 = [W_41, W_42, ..., W_4m, ..., W_4c]; thus the second attention map 4 is A_24(i, j) = Σ_{m=1..c} W_4m · F_m(i, j).
It should be noted that, for the first round of model training, the model parameter sets 1 to 4 are initial model parameters, and for the non-first round of model training, the model parameter sets 1 to 4 are model parameters after the last round of training adjustment.
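Under these definitions, the attention map generation layer reduces to a per-pixel weighting of the channel feature maps, i.e., a 1×1 full convolution with weight matrix W of shape (K, c). A minimal sketch (the function name and the use of NumPy are illustrative assumptions):

```python
import numpy as np

def second_attention_maps(features, W):
    # features: (c, h, w) second feature maps F_m; W: (K, c) weight matrix.
    # A_k(i, j) = sum_m W[k, m] * F_m(i, j) -- one row of W per component.
    return np.einsum('km,mij->kij', W, features)
```

With W_1 = [1, 0, ..., 0], for example, attention map 1 simply reproduces feature map 1, which makes the role of the weight row vectors concrete.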
(3) For each global average pooling layer, inputting the c second feature maps and the second attention map corresponding to that global average pooling layer, and performing global average pooling on the products of the pixel values of the pixel points (i, j) in the second attention map and the pixel values of the corresponding pixel points (i, j) in the second feature maps under channel m, to obtain the component coefficient of the second component corresponding to that global average pooling layer; specifically, global average pooling layer 1 outputs the second component coefficient 1 corresponding to second component 1, i.e., p_21; global average pooling layer 2 outputs the second component coefficient 2 corresponding to second component 2, i.e., p_22; global average pooling layer 3 outputs the second component coefficient 3 corresponding to second component 3, i.e., p_23; and global average pooling layer 4 outputs the second component coefficient 4 corresponding to second component 4, i.e., p_24.
Wherein the inputs of global average pooling layer 1 are the c second feature maps and the second attention map 1, i.e., A_21, and the output is the second component coefficient 1, i.e., p_21 = GAP(A_21(i, j) · F_m(i, j)); the inputs of global average pooling layer 2 are the c second feature maps and the second attention map 2, i.e., A_22, and the output is the second component coefficient 2, i.e., p_22 = GAP(A_22(i, j) · F_m(i, j)); the inputs of global average pooling layer 3 are the c second feature maps and the second attention map 3, i.e., A_23, and the output is the second component coefficient 3, i.e., p_23 = GAP(A_23(i, j) · F_m(i, j)); and the inputs of global average pooling layer 4 are the c second feature maps and the second attention map 4, i.e., A_24, and the output is the second component coefficient 4, i.e., p_24 = GAP(A_24(i, j) · F_m(i, j)), where GAP(·) denotes global average pooling over all channels m and all pixel points (i, j).
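A plausible reading of this pooling step averages the product A_2k(i, j) · F_m(i, j) over all channels and all pixels; the exact normalization is an assumption, since the source renders the formulas only as images:

```python
import numpy as np

def component_coefficients(features, attn):
    # features: (c, h, w) second feature maps; attn: (K, h, w) attention maps.
    # p_k = average over all channels m and pixels (i, j) of A_k(i, j) * F_m(i, j).
    c, h, w = features.shape
    sums = np.einsum('kij,mij->k', attn, features)   # sum over m, i, j
    return sums / (c * h * w)
```

For all-ones feature maps and an all-ones attention map the coefficient is exactly 1.0, which is a quick sanity check on the averaging.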
(4) Outputting the second component coefficients 1 to 4 of second components 1 to 4 to the classifier in the attention model, and performing classification and identification on the second target object based on the component coefficient vector P = {p_21, p_22, p_23, p_24}, to obtain the prediction probability set corresponding to the first sample image.
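Assuming the classifier is a linear layer followed by softmax (the source does not specify its form; V and b below are hypothetical classifier parameters), step (4) might look like:

```python
import numpy as np

def classify(p, V, b):
    # p: component coefficient vector {p_21, ..., p_2K}; V: (n_classes, K);
    # b: (n_classes,). Returns the prediction probability set, one probability
    # of the second target object under each classification category.
    z = V @ p + b
    e = np.exp(z - z.max())          # numerically stable softmax
    return e / e.sum()
```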
(5) For each first sample image, determining the first loss value L_cls based on the corresponding prediction probability set, the real label of the first sample image and a preset loss function, and determining the second loss value L_g based on the K second attention maps corresponding to the K second components;
(6) Determining the sum of the first loss value and the second loss value as a loss value corresponding to the first sample image;
(7) Optimizing and adjusting the model parameters of the attention model based on the sum of the loss values of the first sample images (i.e., the total loss value), and judging whether to end model training based on the training result; if not, continuing with the next round of model training, and if so, outputting the trained attention model. The condition for ending model training is that the loss function of the attention model converges or the total number of iterative training rounds is reached.
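The stopping rule of step (7) can be sketched as follows; run_epoch stands in for one full round over all first sample images (steps (1) through (6) plus the parameter update) and is assumed to return the total loss:

```python
def train_until_done(run_epoch, max_rounds=100, tol=1e-4):
    # Repeat training rounds until the total loss converges (change below
    # tol, a stand-in for "the loss function converges") or the total
    # number of iterative training rounds is reached.
    prev = float('inf')
    rounds, loss = 0, None
    for _ in range(max_rounds):
        loss = run_epoch()
        rounds += 1
        if abs(prev - loss) < tol:   # converged
            break
        prev = loss
    return rounds, loss
```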
For simplicity of explanation, fig. 6 illustrates only the determination process of the loss value corresponding to first sample image 1; the determination processes of the loss values corresponding to the remaining first sample images 2 to N follow that of first sample image 1.
In the attention model training method of the embodiment of the present application, in the training stage of the attention model, each second component contained in the second target object in the first sample image is taken as the minimum analysis unit: the attention model to be trained is used to obtain the second attention map and the component coefficient of each second component; the prediction probability set of the first sample image is then obtained based on these second attention maps and component coefficients; and the attention model is iteratively trained based on the prediction probability set, so that the trained attention model can output component attention maps and component coefficients with higher accuracy. In the use stage of the attention model, each first component contained in the first target object in the original sample image to be perturbed is taken as the minimum analysis unit, and the intermediate data output by the trained attention model (i.e., the first attention map and the component coefficient of each first component) are fully utilized to select, in the original sample image, the image subregion corresponding to a first component of higher importance as the countermeasure patch area. The countermeasure patch area can therefore be located in the original sample image at a finer granularity, with higher pertinence and higher accuracy, which improves the countermeasure attack effect of the countermeasure sample image generated based on the countermeasure patch area; a target model obtained by countermeasure training with such countermeasure sample images is in turn more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better cope with countermeasure attacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, i.e., each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are more concealed, further improving the countermeasure effect of the countermeasure sample image.
It should be noted that this embodiment is based on the same inventive concept as the previous embodiment, so for the specific implementation of this embodiment, reference may be made to the implementation of the above countermeasure sample generation method; repeated details are not described again.
Corresponding to the above-described challenge sample generation method described in fig. 1 to fig. 4, based on the same technical concept, the embodiment of the present application further provides an image recognition model training method, fig. 7 is a schematic flow chart of the image recognition model training method provided in the embodiment of the present application, where the method in fig. 7 can be executed by a server or a specific terminal device, and as shown in fig. 7, the method at least includes the following steps:
s702, acquiring a countermeasure sample image; the challenge sample image is obtained based on the challenge sample generation method, specifically, based on the flow steps shown in fig. 1 to 4;
s708, performing countermeasure training on the image recognition model to be countered based on the countermeasure sample image, to obtain a trained image recognition model; the image recognition model to be countered is an image recognition model obtained by model training based on second sample images; a second sample image may be a sample image used for normal iterative training of the image recognition model, that is, a sample image that reduces the loss function corresponding to the image recognition model.
In a specific embodiment, as shown in fig. 8, a specific implementation process for training the image recognition model includes:
on the basis of fig. 4, inputting the countermeasure sample image obtained in fig. 4 into an image recognition model to be countered, and performing countermeasure training on the image recognition model to be countered to obtain a trained image recognition model.
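A schematic of this countermeasure-training loop, with train_step standing in for one fine-tuning update of the model to be countered on a batch of countermeasure sample images and their real labels (the batching scheme and round count are assumptions, not stated in the text):

```python
def countermeasure_training(train_step, adv_batches, rounds=3):
    # adv_batches: list of (images, labels) pairs built from the
    # countermeasure sample images; train_step returns the batch loss.
    # Repeating over several rounds approximates iterative fine-tuning.
    history = []
    for _ in range(rounds):
        for images, labels in adv_batches:
            history.append(train_step(images, labels))
    return history
```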
In the image recognition model training method of the embodiment of the present application, countermeasure training is performed on the image recognition model to be countered by using the countermeasure sample images obtained by the above countermeasure sample generation method, so as to obtain a trained image recognition model. Because the countermeasure sample images obtained by the countermeasure sample generation method have a better countermeasure attack effect, the obtained image recognition model is more secure, potential safety hazards of the image recognition model in the application stage can be avoided, and the image recognition model can better cope with countermeasure attacks by malicious actors; meanwhile, the countermeasure patches in the countermeasure sample images are more concealed, which further improves the countermeasure effect of the countermeasure sample images and thus further improves the countermeasure attack defense capability of the image recognition model.
It should be noted that this embodiment is based on the same inventive concept as the previous embodiment, so for the specific implementation of this embodiment, reference may be made to the implementation of the above countermeasure sample generation method; repeated details are not described again.
Corresponding to the above-described challenge sample generation method described in fig. 1 to fig. 4, based on the same technical concept, the embodiment of the present application further provides an image recognition method, fig. 9 is a schematic flow chart of the image recognition method provided in the embodiment of the present application, where the method in fig. 9 can be executed by a server or a specific terminal device, and as shown in fig. 9, the method at least includes the following steps:
s902, acquiring a target image to be identified;
s904, inputting the target image into the trained image recognition model, and recognizing the target image to obtain a recognition result of the target image; the trained image recognition model is obtained by training through the flow steps shown in fig. 7.
In a specific embodiment, as shown in fig. 10, the specific implementation process of image recognition includes:
on the basis of fig. 8, the target image to be identified is input into the trained image recognition model, and the target image is classified and identified to obtain the classification and identification result of the target image. Specifically, taking the image recognition model as a living body detection model, the corresponding target image may be a target face image acquired for a target user, and the classification and identification result of the target image indicates whether the target face image is a living human face.
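A minimal sketch of this use stage; the class labels and the model call signature below are illustrative assumptions for a two-class living body detection model, not the patent's actual interface:

```python
import numpy as np

def recognize(model, target_image, classes=('living_face', 'non_living_face')):
    # model maps the target image to one prediction probability per class;
    # the recognition result is the class with the highest probability.
    probs = np.asarray(model(target_image))
    return classes[int(np.argmax(probs))]
```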
In the image recognition method of the embodiment of the present application, in the use stage of the image recognition model, because the image recognition model trained with the countermeasure sample images obtained by the countermeasure sample generation method is more secure, potential safety hazards of the image recognition model in the application stage can be avoided, and the image recognition model can better cope with countermeasure attacks by malicious actors; meanwhile, the countermeasure patches in the countermeasure sample images are more concealed, which further improves the countermeasure effect of the countermeasure sample images and thus further improves the countermeasure attack defense capability of the image recognition model.
It should be noted that this embodiment is based on the same inventive concept as the previous embodiment, so for the specific implementation of this embodiment, reference may be made to the implementation of the above countermeasure sample generation method; repeated details are not described again.
Corresponding to the challenge sample generating method described in fig. 1 to 4, based on the same technical concept, the embodiment of the present application further provides a challenge sample generating device, and fig. 11 is a schematic block diagram of the challenge sample generating device provided in the embodiment of the present application, where the device is configured to perform the challenge sample generating method described in fig. 1 to 4, and as shown in fig. 11, the device includes:
A first acquisition module 1102 configured to acquire an original sample image to be perturbed; the original sample image includes an image region of a first target object, the first target object including at least one first component;
a first output module 1104 configured to input the original sample image into the trained attention model, to obtain the first attention map and the component coefficient corresponding to each first component; the component coefficients are used for representing the importance degree of the image subregion corresponding to the first component for identifying the first target object as its real label;
a patch area determination module 1106 configured to determine an anti-patch area in the original sample image based on the first attention map and the component coefficients;
an antagonistic sample generation module 1108 is configured to perform disturbance processing on a corresponding image area in the original sample image based on the antagonistic patch area, so as to obtain an antagonistic sample image.
In the countermeasure sample generation device of the embodiment of the present application, the original sample image is input into the trained attention model to obtain the first attention map and the component coefficient corresponding to each first component contained in the first target object in the original sample image; the countermeasure patch area is determined in the original sample image based on the first attention maps and the component coefficients; and disturbance processing is performed on the original sample image based on the countermeasure patch area to obtain the countermeasure sample image. By taking each first component contained in the first target object in the original sample image as the minimum analysis unit and fully utilizing the intermediate data output by the trained attention model (i.e., the first attention map and the component coefficient of each first component), the image subregion corresponding to a first component of higher importance is selected in the original sample image as the countermeasure patch area, so that the countermeasure patch area can be located in the original sample image at a finer granularity, with higher pertinence and higher accuracy. This improves the countermeasure attack effect of the countermeasure sample image generated based on the countermeasure patch area, makes a target model obtained by countermeasure training with such countermeasure sample images more secure, avoids potential safety hazards of the target model in the application stage, and enables the target model to better cope with countermeasure attacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, i.e., each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are more concealed, further improving the countermeasure effect of the countermeasure sample image.
It should be noted that, the embodiments of the challenge sample generating device and the embodiments of the challenge sample generating method in the present application are based on the same inventive concept, so that the specific implementation of the embodiments may refer to the implementation of the corresponding challenge sample generating method, and the repetition is omitted.
Corresponding to the attention model training method described in fig. 5, based on the same technical concept, the embodiment of the present application further provides an attention model training device, and fig. 12 is a schematic block diagram of the attention model training device provided in the embodiment of the present application, where the device is used to execute the attention model training method described in fig. 5, and as shown in fig. 12, the device includes:
a second acquisition module 1202 configured to acquire a plurality of first sample images; the first sample image includes an image region of a second target object, the second target object including at least one second component;
the second output module 1204 is configured to input each first sample image into the attention model to be trained, to obtain a prediction probability set corresponding to each first sample image; the prediction probability set includes a prediction probability of the second target object under each classification category, the prediction probability being determined based on the component coefficients of the second components, and each component coefficient being determined based on the second attention map of the corresponding second component;
The first model training module 1206 is configured to iteratively train the attention model to be trained based on the prediction probability set and the real label of the first sample image, so as to obtain a trained attention model.
In the attention model training device of the embodiment of the present application, in the training stage of the attention model, each second component contained in the second target object in the first sample image is taken as the minimum analysis unit: the attention model to be trained is used to obtain the second attention map and the component coefficient of each second component; the prediction probability set of the first sample image is then obtained based on these second attention maps and component coefficients; and the attention model is iteratively trained based on the prediction probability set, so that the trained attention model can output component attention maps and component coefficients with higher accuracy. In the use stage of the attention model, each first component contained in the first target object in the original sample image to be perturbed is taken as the minimum analysis unit, and the intermediate data output by the trained attention model (i.e., the first attention map and the component coefficient of each first component) are fully utilized to select, in the original sample image, the image subregion corresponding to a first component of higher importance as the countermeasure patch area. The countermeasure patch area can therefore be located in the original sample image at a finer granularity, with higher pertinence and higher accuracy, which improves the countermeasure attack effect of the countermeasure sample image generated based on the countermeasure patch area; a target model obtained by countermeasure training with such countermeasure sample images is in turn more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better cope with countermeasure attacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, i.e., each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are more concealed, further improving the countermeasure effect of the countermeasure sample image.
It should be noted that, the embodiments of the attention model training device in the present application and the embodiments of the challenge sample generating method in the present application are based on the same inventive concept, so that the specific implementation of the embodiments may refer to the implementation of the corresponding challenge sample generating method, and the repetition is omitted.
Corresponding to the image recognition model training method described in fig. 7, based on the same technical concept, the embodiment of the present application further provides an image recognition model training device, and fig. 13 is a schematic diagram of module composition of the image recognition model training device provided in the embodiment of the present application, where the device is used to execute the image recognition model training method described in fig. 7, and as shown in fig. 13, the device includes:
a third acquisition module 1302 configured to acquire a challenge sample image; the challenge sample image is obtained based on the flow steps shown in fig. 1 to 4;
a second model training module 1304 configured to perform countermeasure training on an image recognition model to be countered based on the countermeasure sample image, resulting in a trained image recognition model; the image recognition model to be opposed is an image recognition model obtained by performing model training based on the second sample image.
In the image recognition model training device of the embodiment of the present application, countermeasure training is performed on the image recognition model to be countered by using the countermeasure sample images obtained by the above countermeasure sample generation method, so as to obtain a trained image recognition model. Because the countermeasure sample images obtained by the countermeasure sample generation method have a better countermeasure attack effect, the obtained image recognition model is more secure, potential safety hazards of the image recognition model in the application stage can be avoided, and the image recognition model can better cope with countermeasure attacks by malicious actors; meanwhile, the countermeasure patches in the countermeasure sample images are more concealed, which further improves the countermeasure effect of the countermeasure sample images and thus further improves the countermeasure attack defense capability of the image recognition model.
It should be noted that, the embodiments of the image recognition model training apparatus in the present application and the embodiments of the challenge sample generating method in the present application are based on the same inventive concept, so the specific implementation of the embodiments may refer to the implementation of the corresponding challenge sample generating method, and the repetition is omitted.
Corresponding to the image recognition method described in fig. 9, based on the same technical concept, the embodiment of the present application further provides an image recognition device, and fig. 14 is a schematic block diagram of the image recognition device provided in the embodiment of the present application, where the device is configured to perform the image recognition method described in fig. 9, and as shown in fig. 14, the device includes:
a fourth acquisition module 1402 configured to acquire a target image to be recognized;
an image recognition module 1404 configured to input the target image into a trained image recognition model, and recognize the target image to obtain a recognition result of the target image; the trained image recognition model is trained using the flow steps shown in fig. 7.
In the image recognition device of the embodiment of the present application, in the use stage of the image recognition model, because the image recognition model trained with the countermeasure sample images obtained by the countermeasure sample generation method is more secure, potential safety hazards of the image recognition model in the application stage can be avoided, and the image recognition model can better cope with countermeasure attacks by malicious actors; meanwhile, the countermeasure patches in the countermeasure sample images are more concealed, which further improves the countermeasure effect of the countermeasure sample images and thus further improves the countermeasure attack defense capability of the image recognition model.
It should be noted that the embodiments of the image recognition device in the present application and the embodiments of the countermeasure sample generation method in the present application are based on the same inventive concept, so for the specific implementation of this embodiment, reference may be made to the implementation of the corresponding countermeasure sample generation method; repeated details are not described again.
Further, according to the methods shown in fig. 1 to 10, based on the same technical concept, the embodiment of the present application further provides a computer device, where the computer device is configured to perform any one of the above-mentioned challenge sample generation method, attention model training method, image recognition model training method, and image recognition method, as shown in fig. 15.
The computer device may vary widely in configuration or performance and may include one or more processors 1501 and a memory 1502, in which one or more applications or data are stored. The memory 1502 may be transient storage or persistent storage. The application programs stored in the memory 1502 may include one or more modules (not shown in the figure), and each module may include a series of computer-executable instructions for the computer device. Further, the processor 1501 may communicate with the memory 1502 and execute the series of computer-executable instructions in the memory 1502 on the computer device. The computer device may also include one or more power supplies 1503, one or more wired or wireless network interfaces 1504, one or more input/output interfaces 1505, one or more keyboards 1506, and the like.
In a particular embodiment, a computer device includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the computer device, and configured to be executed by one or more processors, the one or more programs comprising computer-executable instructions for:
acquiring an original sample image to be disturbed; the original sample image includes an image region of a first target object, the first target object including at least one first component;
inputting the original sample image into a trained attention model to obtain a first attention map and a component coefficient respectively corresponding to each first component; the component coefficients are used for representing the importance degree of the image subregion corresponding to the first component for identifying the first target object as its real label;
determining an anti-patch area in the original sample image based on the first attention map and the component coefficients;
and carrying out disturbance processing on the corresponding image area in the original sample image based on the countermeasure patch area to obtain a countermeasure sample image.
The computer equipment of the embodiment of the present application inputs the original sample image into the trained attention model to obtain the first attention map and the component coefficient corresponding to each first component contained in the first target object in the original sample image; determines the countermeasure patch area in the original sample image based on the first attention maps and the component coefficients; and performs disturbance processing on the original sample image based on the countermeasure patch area to obtain the countermeasure sample image. By taking each first component contained in the first target object in the original sample image as the minimum analysis unit and fully utilizing the intermediate data output by the trained attention model (i.e., the first attention map and the component coefficient of each first component), the image subregion corresponding to a first component of higher importance is selected in the original sample image as the countermeasure patch area, so that the countermeasure patch area can be located in the original sample image at a finer granularity, with higher pertinence and higher accuracy. This improves the countermeasure attack effect of the countermeasure sample image generated based on the countermeasure patch area, makes a target model obtained by countermeasure training with such countermeasure sample images more secure, avoids potential safety hazards of the target model in the application stage, and enables the target model to better cope with countermeasure attacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, i.e., each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are more concealed, further improving the countermeasure effect of the countermeasure sample image.
In another particular embodiment, a computer device includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the computer device, and configured to be executed by one or more processors, the one or more programs comprising computer-executable instructions for:
acquiring a plurality of first sample images; the first sample image includes an image region of a second target object, the second target object including at least one second component;
inputting each first sample image into an attention model to be trained, to obtain a prediction probability set corresponding to each first sample image; the prediction probability set includes a prediction probability of the second target object under each classification category, the prediction probability being determined based on the component coefficients of the second components, and each component coefficient being determined based on the second attention map of the corresponding second component;
and carrying out iterative training on the attention model to be trained based on the prediction probability set and the real label of the first sample image to obtain a trained attention model.
For the training stage of the attention model, the computer equipment in the embodiment of the application takes each second component contained in the second target object in the first sample image as the minimum analysis unit, obtains the second attention map and the component coefficient of each second component by using the attention model to be trained, obtains the prediction probability set of the first sample image based on the second attention maps and the component coefficients, and iteratively trains the attention model based on the prediction probability set to obtain a trained attention model, so that the obtained attention model can output component attention maps and component coefficients with higher accuracy. In the use stage of the attention model, each first component contained in the first target object in the original sample image to be disturbed is taken as the minimum analysis unit, and the intermediate data output by the trained attention model (namely, the first attention map and component coefficient of each first component) are fully utilized to select, in the original sample image, the image sub-region corresponding to a first component of higher importance as the countermeasure patch area. In this way, the countermeasure patch area can be located in the original sample image at a finer granularity, and the located countermeasure patch area is more targeted and more accurate, which improves the attack effect of the countermeasure sample image generated based on the countermeasure patch area; a target model obtained by countermeasure training with the countermeasure sample image is therefore more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better withstand countermeasure attacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, that is, each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are better hidden, further improving the countermeasure effect of the countermeasure sample image.
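The layer stack described for this training stage (channel feature maps, per-component attention maps, global average pooling into component coefficients, and a classifier over those coefficients) can be illustrated with a minimal NumPy sketch. This is not part of the patent disclosure; the function names, array shapes, and weights are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_forward(features, attn_weights, cls_weights):
    """features: (c, H, W) channel feature maps; attn_weights: (K, c), one
    weight vector per component (acting like a 1x1 full convolution);
    cls_weights: (num_classes, K) classifier over component coefficients."""
    # Attention map generation layer: weighted sum of the c channel feature maps.
    attn_maps = np.einsum('kc,chw->khw', attn_weights, features)
    # Component coefficient determination layer: global average pooling per map.
    coeffs = attn_maps.mean(axis=(1, 2))
    # Classifier: prediction probability set over the classification categories.
    probs = softmax(cls_weights @ coeffs)
    return attn_maps, coeffs, probs

def cross_entropy(probs, true_label):
    """Training loss against the real label of the sample image."""
    return -np.log(probs[true_label] + 1e-12)
```

Iterative training would then minimize `cross_entropy` over the first sample images; the second loss term on the attention maps mentioned in claim 14 is omitted here.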
In yet another particular embodiment, a computer device includes a memory and one or more programs, wherein the one or more programs are stored in the memory, may include one or more modules each of which may include a series of computer-executable instructions for the computer device, and are configured to be executed by one or more processors, the one or more programs comprising computer-executable instructions for:
acquiring a countermeasure sample image; the countermeasure sample image is obtained based on the method described in the first aspect;
performing countermeasure training on the image recognition model to be countered based on the countermeasure sample image to obtain a trained image recognition model; the image recognition model to be countered is an image recognition model obtained by performing model training based on the second sample image.
The computer equipment in the embodiment of the application performs countermeasure training on the image recognition model to be countered by using the countermeasure sample image obtained by the above countermeasure sample generation method, so as to obtain a trained image recognition model. Because the countermeasure sample image obtained by this generation method has a better attack effect, the resulting image recognition model is more secure, potential safety hazards in its application stage can be avoided, and the model can better withstand countermeasure attacks by malicious actors. Meanwhile, the countermeasure patches in the countermeasure sample image are better hidden, which further improves the countermeasure effect of the countermeasure sample image and thus further strengthens the model's defense against countermeasure attacks.
In yet another particular embodiment, a computer device includes a memory and one or more programs, wherein the one or more programs are stored in the memory, may include one or more modules each of which may include a series of computer-executable instructions for the computer device, and are configured to be executed by one or more processors, the one or more programs comprising computer-executable instructions for:
acquiring a target image to be identified;
inputting the target image into a trained image recognition model, and recognizing the target image to obtain a recognition result of the target image; the trained image recognition model is trained using the method as described in the third aspect.
For the use stage of the image recognition model, the computer equipment in the embodiment of the application benefits from the higher security of an image recognition model trained with the countermeasure sample image obtained by the countermeasure sample generation method: potential safety hazards of the image recognition model in the application stage can be avoided, so that the model can better withstand countermeasure attacks by malicious actors. Meanwhile, the countermeasure patches in the countermeasure sample image are better hidden, which further improves the countermeasure effect of the countermeasure sample image and thus further strengthens the model's defense against countermeasure attacks.
It should be noted that the embodiments related to the computer device and the embodiments related to the countermeasure sample generation method in the present application are based on the same inventive concept, so for the specific implementation of this embodiment, reference may be made to the implementation of the corresponding countermeasure sample generation method; repeated details are not described again.
Further, corresponding to the methods shown in fig. 1 to 10 and based on the same technical concept, the embodiments of the present application further provide a storage medium for storing computer-executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instructions stored in the storage medium, when executed by a processor, can implement the following flow:
acquiring an original sample image to be disturbed; the original sample image includes an image region of a first target object, the first target object including at least one first component;
inputting the original sample image into a trained attention model to obtain a first attention map and a component coefficient respectively corresponding to the first component; the component coefficients are used for representing the importance degree of the image subareas corresponding to the first component to the first target object identified as the real label;
determining a countermeasure patch area in the original sample image based on the first attention maps and the component coefficients;
and carrying out disturbance processing on the corresponding image area in the original sample image based on the countermeasure patch area to obtain a countermeasure sample image.
When executed by a processor, the computer-executable instructions stored in the storage medium in the embodiment of the application input an original sample image into a trained attention model to obtain first attention maps and component coefficients respectively corresponding to the first components contained in the first target object in the original sample image; determine a countermeasure patch area in the original sample image based on the first attention maps and the component coefficients; and perform disturbance processing on the original sample image based on the countermeasure patch area to obtain a countermeasure sample image. By taking each first component contained in the first target object in the original sample image as the minimum analysis unit and fully utilizing the intermediate data output by the trained attention model (namely, the first attention maps and component coefficients of all first components), the image sub-region corresponding to a first component of higher importance is selected as the countermeasure patch area in the original sample image. In this way, the countermeasure patch area can be located in the original sample image at a finer granularity, and the located countermeasure patch area is more targeted and more accurate, which improves the attack effect of the countermeasure sample image generated based on the countermeasure patch area; a target model obtained by countermeasure training with the countermeasure sample image is therefore more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better withstand countermeasure attacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, that is, each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are better hidden, further improving the countermeasure effect of the countermeasure sample image.
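The patch-area selection and perturbation flow above can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes and thresholds; the bounded random noise merely stands in for an iterative attack such as PGD:

```python
import numpy as np

def select_patch_mask(attn_maps, coeffs, threshold=0.5):
    """Pick the component with the largest coefficient (highest importance)
    and keep only its high-attention pixels as the countermeasure patch mask."""
    k = int(np.argmax(coeffs))
    return attn_maps[k] > threshold, k

def perturb_in_mask(image, mask, epsilon=0.1, seed=0):
    """Apply a bounded perturbation inside the patch mask only, leaving the
    rest of the original sample image untouched."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    adv = image.copy()
    adv[mask] = np.clip(adv[mask] + noise[mask], 0.0, 1.0)
    return adv
```

Because the mask is confined to one component's image sub-region, the resulting countermeasure patch stays on a single component, matching the hiding property the text describes.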
In another specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instructions stored in the storage medium, when executed by a processor, implement the following flow:
acquiring a plurality of first sample images; the first sample image includes an image region of a second target object, the second target object including at least one second component;
inputting each first sample image into an attention model to be trained to obtain a prediction probability set corresponding to each first sample image; the prediction probability set includes a prediction probability of the second target object under each classification category, the prediction probability being determined based on the component coefficients of the second components, and each component coefficient being determined based on the second attention map of the corresponding second component;
and carrying out iterative training on the attention model to be trained based on the prediction probability set and the real label of the first sample image to obtain a trained attention model.
When the computer-executable instructions stored in the storage medium in the embodiment of the application are executed by a processor, for the training stage of the attention model, each second component contained in the second target object in the first sample image is taken as the minimum analysis unit: the second attention map and the component coefficient of each second component are obtained by using the attention model to be trained, the prediction probability set of the first sample image is obtained based on the second attention maps and the component coefficients, and the attention model is iteratively trained based on the prediction probability set to obtain a trained attention model, so that the obtained attention model can output component attention maps and component coefficients with higher accuracy. In the use stage of the attention model, each first component contained in the first target object in the original sample image to be disturbed is taken as the minimum analysis unit, and the intermediate data output by the trained attention model (namely, the first attention map and component coefficient of each first component) are fully utilized to select, in the original sample image, the image sub-region corresponding to a first component of higher importance as the countermeasure patch area. In this way, the countermeasure patch area can be located in the original sample image at a finer granularity, and the located countermeasure patch area is more targeted and more accurate, which improves the attack effect of the countermeasure sample image generated based on the countermeasure patch area; a target model obtained by countermeasure training with the countermeasure sample image is therefore more secure, potential safety hazards of the target model in the application stage are avoided, and the target model can better withstand countermeasure attacks by malicious actors. Meanwhile, each sub-patch area is located in one component image area, that is, each countermeasure patch is formed on a single component, so that the countermeasure patches in the countermeasure sample image are better hidden, further improving the countermeasure effect of the countermeasure sample image.
In yet another specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instructions stored in the storage medium, when executed by the processor, implement the following flow:
acquiring a countermeasure sample image; the countermeasure sample image is obtained based on the method described in the first aspect;
performing countermeasure training on the image recognition model to be countered based on the countermeasure sample image to obtain a trained image recognition model; the image recognition model to be countered is an image recognition model obtained by performing model training based on the second sample image.
When executed by a processor, the computer-executable instructions stored in the storage medium in the embodiment of the application perform countermeasure training on the image recognition model to be countered by using the countermeasure sample image obtained by the above countermeasure sample generation method, so as to obtain a trained image recognition model. Because the countermeasure sample image obtained by this generation method has a better attack effect, the resulting image recognition model is more secure, potential safety hazards in its application stage can be avoided, and the model can better withstand countermeasure attacks by malicious actors. Meanwhile, the countermeasure patches in the countermeasure sample image are better hidden, which further improves the countermeasure effect of the countermeasure sample image and thus further strengthens the model's defense against countermeasure attacks.
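The countermeasure-training idea above, training the recognition model on both clean and countermeasure versions of a sample, can be illustrated with a toy logistic-regression stand-in. This sketch is an assumption-laden simplification of the patent's image recognition model, not its actual architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_training_step(w, x_clean, x_adv, y, lr=0.5):
    """One gradient step for a binary logistic 'recognition model' whose
    loss averages the clean sample and its countermeasure twin."""
    grad = np.zeros_like(w)
    for x in (x_clean, x_adv):
        p = sigmoid(w @ x)
        grad += (p - y) * x  # gradient of binary cross-entropy w.r.t. w
    return w - lr * grad / 2.0
```

Averaging the two loss terms pushes the decision boundary away from the perturbed region, which is the mechanism by which countermeasure training improves robustness.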
In another specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instructions stored in the storage medium, when executed by the processor, implement the following flow:
acquiring a target image to be identified;
inputting the target image into a trained image recognition model, and recognizing the target image to obtain a recognition result of the target image; the trained image recognition model is trained using the method as described in the third aspect.
When the computer-executable instructions stored in the storage medium in the embodiment of the application are executed by the processor, for the use stage of the image recognition model, the higher security of an image recognition model trained with the countermeasure sample image obtained by the countermeasure sample generation method means that potential safety hazards of the image recognition model in the application stage can be avoided, so that the model can better withstand countermeasure attacks by malicious actors. Meanwhile, the countermeasure patches in the countermeasure sample image are better hidden, which further improves the countermeasure effect of the countermeasure sample image and thus further strengthens the model's defense against countermeasure attacks.
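The inference flow, acquiring a target image and mapping the trained model's highest-probability category to a recognition result, reduces to a small wrapper. The model callable and class names here are hypothetical placeholders:

```python
import numpy as np

def recognize(model_forward, image, class_names):
    """Run a trained recognition model on a target image and return the
    highest-probability category with its probability."""
    probs = model_forward(image)  # assumed to return per-class probabilities
    idx = int(np.argmax(probs))
    return class_names[idx], float(probs[idx])
```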
It should be noted that the embodiments related to the storage medium and the embodiments related to the countermeasure sample generation method in the present application are based on the same inventive concept, so for the specific implementation of this embodiment, reference may be made to the implementation of the corresponding countermeasure sample generation method; repeated details are not described again.
The foregoing describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Embodiments of the application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
All embodiments in the application are described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment mainly describes its differences from the other embodiments. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant parts, reference may be made to the description of the method embodiments. The foregoing description is by way of example only and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modifications, equivalent substitutions, improvements, and the like that fall within the spirit and principles of this document are intended to be included within the scope of the claims of this document.

Claims (20)

1. A countermeasure sample generation method, the method comprising:
acquiring an original sample image to be disturbed; the original sample image includes an image region of a first target object, the first target object including at least one first component;
inputting the original sample image into a trained attention model to obtain a first attention map and a component coefficient respectively corresponding to the first component; the component coefficients are used for representing the importance degree of the image subareas corresponding to the first component to the first target object identified as the real label; wherein the attention model comprises: a channel feature map extraction layer whose input is the original sample image and output is a first feature map under a plurality of channels, an attention map generation layer whose input is a first feature map under the plurality of channels and output is a first attention map corresponding to the at least one first component, respectively, and a component coefficient determination layer; the input of the component coefficient determining layer is a first attention map corresponding to the first components respectively and the output is a component coefficient corresponding to the first components respectively;
determining a countermeasure patch area in the original sample image based on the first attention map and the component coefficients;
and carrying out disturbance processing on the corresponding image area in the original sample image based on the countermeasure patch area to obtain a countermeasure sample image.
2. The method of claim 1, wherein the inputting the original sample image into the trained attention model to obtain the first attention maps and the component coefficients respectively corresponding to the first components comprises:
the channel feature map extraction layer is used for carrying out feature extraction processing on the original sample image to obtain c first feature maps corresponding to c channels; each channel corresponds to a first feature map, c represents the number of channels in a channel feature map extraction layer, and c is an integer greater than 1;
the attention map generation layer is used for weighting the c first feature maps to obtain K first attention maps corresponding to K first components; each first component corresponds to one first attention map, K represents the number of components in the first target object, and K is an integer greater than 1;
the component coefficient determination layer is used for performing average pooling on the K first attention maps to obtain K component coefficients corresponding to the K first components, wherein each first component corresponds to one component coefficient.
3. The method of claim 2, wherein the attention map generation layer comprises K full convolution layers, each of the full convolution layers corresponding to one of the first components;
each full convolution layer is used for weighting the c first feature maps to obtain the first attention map of the first component corresponding to the full convolution layer;
wherein the first attention maps output by the K full convolution layers are the first attention maps corresponding to the K first components.
4. The method of claim 2, wherein the component coefficient determination layer comprises K global average pooling layers, each global average pooling layer corresponding to one of the first components;
each global average pooling layer is used for carrying out average pooling processing on a first attention map of a first component corresponding to the global average pooling layer to obtain a component coefficient of the first component corresponding to the global average pooling layer;
the component coefficients output by the K global average pooling layers are component coefficients corresponding to the K first components.
5. The method of any of claims 1 to 4, wherein the determining a countermeasure patch area in the original sample image based on the first attention map and the component coefficients comprises:
determining at least one target sub-patch area in the image sub-areas corresponding to the K first components based on the first attention map and the component coefficients;
determining a countermeasure patch area in the original sample image based on the at least one target sub-patch area.
6. The method of claim 5, wherein the determining at least one target sub-patch area in the image sub-areas corresponding to the K first components based on the first attention map and the component coefficients comprises:
based on the first attention map and the component coefficients, performing the following operations of simulating an attack on a target model:
selecting a target component coefficient according to the magnitudes of the component coefficients of the K first components; the target component coefficient is the largest component coefficient among the unselected component coefficients;
performing a perturbation attack on at least one counterattack object based on the first attention map corresponding to the target component coefficient to obtain a simulated attack sample image; the at least one counterattack object corresponds one-to-one to the target sub-patch areas corresponding to the selected component coefficients;
inputting the simulated attack sample image into the target model, and performing classification recognition on the simulated attack sample image to obtain an image classification result of the simulated attack sample image;
if the image classification result indicates a correct classification, continuing to perform the operations of simulating an attack on the target model;
and if the image classification result indicates an incorrect classification, determining the at least one target sub-patch area based on the at least one counterattack object.
7. The method of claim 6, wherein the performing a perturbation attack on at least one counterattack object based on the first attention map corresponding to the target component coefficient to obtain a simulated attack sample image comprises:
in the first attention map corresponding to the currently selected target component coefficient, masking pixel points whose pixel values are less than or equal to a preset threshold to obtain a target attention map corresponding to the target component coefficient;
adding the target attention map as a counterattack object to an attack object set; the attack object set comprises the target attention maps corresponding to the selected target component coefficients;
performing a perturbation attack on the latest attack object set to obtain a perturbed attack object set;
and generating the simulated attack sample image based on the perturbed attack object set.
8. The method of claim 5, wherein the determining a countermeasure patch area in the original sample image based on the at least one target sub-patch area comprises:
determining original position information corresponding to each pixel point in the at least one target sub-patch area;
determining a countermeasure patch area in the original sample image based on the original position information.
9. The method of claim 8, wherein the determining a countermeasure patch area in the original sample image based on the original position information comprises:
determining at least one original sub-patch area in the original sample image based on the original position information;
determining a union of the at least one original sub-patch area as the countermeasure patch area.
10. The method according to any one of claims 1 to 4, wherein the performing, based on the countermeasure patch area, disturbance processing on the corresponding image area in the original sample image to obtain a countermeasure sample image comprises:
performing disturbance processing on the corresponding image area in the original sample image based on the countermeasure patch area by using a projected gradient descent (PGD) countermeasure perturbation method to obtain a countermeasure sample image.
11. A method of training an attention model according to any one of claims 1 to 10, wherein the training method comprises:
acquiring a plurality of first sample images; the first sample image includes an image region of a second target object, the second target object including at least one second component;
inputting each first sample image into an attention model to be trained to obtain a prediction probability set corresponding to each first sample image; the prediction probability set includes a prediction probability of the second target object under each classification category, the prediction probability being determined based on the component coefficients of the second components, and each component coefficient being determined based on the second attention map of the corresponding second component;
and carrying out iterative training on the attention model to be trained based on the prediction probability set and the real label of the first sample image to obtain a trained attention model.
12. The method of claim 11, wherein the attention model comprises: a channel feature map extraction layer, an attention map generation layer, a component coefficient determination layer, and a classifier;
wherein the inputting each first sample image into the attention model to be trained to obtain the prediction probability set corresponding to each first sample image comprises:
for each first sample image, the channel feature map extraction layer is used for carrying out feature extraction processing on the first sample image to obtain c second feature maps corresponding to c channels; each channel corresponds to a second feature map, c represents the number of channels in the channel feature map extraction layer, and c is an integer greater than 1;
the attention map generation layer is used for weighting the c second feature maps to obtain K second attention maps corresponding to K second components; each second component corresponds to one second attention map, K represents the number of components in the second target object, and K is an integer greater than 1;
the component coefficient determination layer is used for performing average pooling on the K second attention maps to obtain K component coefficients corresponding to the K second components, wherein each second component corresponds to one component coefficient;
the classifier is used for classifying and identifying the second target object based on K component coefficients to obtain a prediction probability set corresponding to the first sample image.
13. The method of claim 11, wherein iteratively training the attention model to be trained based on the set of predictive probabilities and the true labels of the first sample image to obtain a trained attention model, comprising:
determining a loss value corresponding to the first sample image based on the prediction probability set, the real label of the first sample image and a preset loss function;
and carrying out iterative training on the attention model to be trained based on the loss value of each first sample image to obtain a trained attention model.
14. The method of claim 13, wherein the determining the loss value corresponding to the first sample image based on the set of prediction probabilities, the true tags of the first sample image, and a preset loss function comprises:
determining a first loss value based on the prediction probability set, the real label of the first sample image and the preset loss function; and
determining a second loss value based on the K second attention maps corresponding to the K second components;
and determining the sum of the first loss value and the second loss value as a loss value corresponding to the first sample image.
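The two-term loss in claim 14 can be illustrated with a small numpy sketch. The claim does not fix either loss function, so the choices below are hypothetical: cross-entropy for the first loss, and a pairwise-overlap penalty on the K attention maps for the second loss (encouraging components to attend to distinct sub-regions).

```python
import numpy as np

rng = np.random.default_rng(1)
K, h, w, num_classes = 3, 16, 16, 5

probs = np.full(num_classes, 1.0 / num_classes)   # prediction probability set
true_label = 2                                    # real label of the sample
attention_maps = rng.random((K, h, w))            # K second attention maps

# First loss: cross-entropy against the real label (illustrative choice;
# the claim only says "a preset loss function").
first_loss = -np.log(probs[true_label])

# Second loss: mean pairwise overlap of the K second attention maps
# (illustrative regularizer; the claim does not fix this form).
second_loss = 0.0
for i in range(K):
    for j in range(i + 1, K):
        second_loss += (attention_maps[i] * attention_maps[j]).mean()
second_loss /= K * (K - 1) / 2

# Loss value for the sample: the sum of the two losses, as in the claim.
loss = first_loss + second_loss
```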
15. A method for training an image recognition model, the method comprising:
acquiring a countermeasure sample image; the countermeasure sample image is obtained based on the method of any one of claims 1 to 10;
performing countermeasure training on an image recognition model to be counter-trained based on the countermeasure sample image to obtain a trained image recognition model; the image recognition model to be counter-trained is an image recognition model obtained by performing model training based on a second sample image.
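The countermeasure training of claim 15 can be sketched with a toy logistic-regression model in numpy. Everything here is a stand-in: random features replace the second sample images, a uniform perturbation replaces the generated countermeasure samples, and the gradient loop replaces whatever training procedure the patent's model uses.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 8, 32

# Stand-ins for second sample images (flattened features) and their labels.
X_clean = rng.random((n, d))
y = (X_clean.sum(axis=1) > d / 2).astype(int)

# Stand-in countermeasure sample images: bounded perturbations of the clean set.
X_adv = X_clean + rng.uniform(-0.1, 0.1, size=(n, d))

# Linear classifier (placeholder for the model to be counter-trained),
# fine-tuned on clean and countermeasure samples together.
w = rng.normal(size=d)
lr = 0.1
for _ in range(100):
    for X in (X_clean, X_adv):                 # countermeasure training step
        p = 1.0 / (1.0 + np.exp(-(X @ w)))     # sigmoid predictions
        w -= lr * X.T @ (p - y) / n            # logistic-regression gradient

# Accuracy on the perturbed samples after counter-training.
acc = ((1.0 / (1.0 + np.exp(-(X_adv @ w))) > 0.5) == y).mean()
```

The design point is only that the training batch mixes original and perturbed samples, which is what makes the resulting model robust to the patch perturbation.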
16. An image recognition method, the method comprising:
acquiring a target image to be identified;
inputting the target image into a trained image recognition model, and recognizing the target image to obtain a recognition result of the target image; the trained image recognition model is trained using the method of claim 15.
17. A countermeasure sample generating device, the device comprising:
the first acquisition module is configured to acquire an original sample image to be disturbed; the original sample image includes an image region of a first target object, the first target object including at least one first component;
the first output module is configured to input the original sample image into a trained attention model to obtain first attention maps and component coefficients respectively corresponding to the first components; the component coefficients are used for representing the importance degree of the image sub-regions corresponding to the first components to the first target object being identified as the real label; wherein the attention model comprises: a channel feature map extraction layer, an attention map generation layer and a component coefficient determination layer; the input of the channel feature map extraction layer is the original sample image and its output is first feature maps under a plurality of channels; the input of the attention map generation layer is the first feature maps under the plurality of channels and its output is first attention maps respectively corresponding to the at least one first component; the input of the component coefficient determination layer is the first attention maps respectively corresponding to the first components and its output is component coefficients respectively corresponding to the first components;
a patch area determination module configured to determine a countermeasure patch area in the original sample image based on the first attention maps and the component coefficients;
and the countermeasure sample generation module is configured to perform disturbance processing on a corresponding image area in the original sample image based on the countermeasure patch area to obtain a countermeasure sample image.
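The pipeline of the three modules above (attention maps + coefficients → patch area → perturbed image) can be sketched in numpy. The selection rule (most important component, top 10% of its attention responses) and the uniform noise bound are illustrative assumptions, not the patent's specified procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
K, h, w = 3, 16, 16

original = rng.random((h, w))                  # grayscale original sample image
attention_maps = rng.random((K, h, w))         # first attention maps (stand-ins)
component_coeffs = np.array([0.2, 0.7, 0.1])   # component coefficients

# Patch area determination: take the attention map of the most important
# component and keep its strongest responses as the countermeasure patch area.
top = int(np.argmax(component_coeffs))
amap = attention_maps[top]
threshold = np.quantile(amap, 0.9)             # top 10% of responses
patch_mask = amap >= threshold

# Countermeasure sample generation: perturb only the patch area, leaving the
# rest of the original sample image untouched.
epsilon = 0.1
noise = rng.uniform(-epsilon, epsilon, size=(h, w))
adversarial = np.clip(original + noise * patch_mask, 0.0, 1.0)
```

Restricting the disturbance to the high-attention sub-region is what distinguishes this patch-based scheme from whole-image perturbation.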
18. An image recognition apparatus, the apparatus comprising:
a fourth acquisition module configured to acquire a target image to be identified;
the image recognition module is configured to input the target image into a trained image recognition model, recognize the target image and obtain a recognition result of the target image; the trained image recognition model is trained using the method of claim 15.
19. A computer device, the device comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 10, any one of claims 11 to 14, claim 15, or claim 16.
20. A storage medium storing computer executable instructions for causing a computer to perform the method of any one of claims 1 to 10, any one of claims 11 to 14, claim 15, or claim 16.
CN202210426651.1A 2022-04-22 2022-04-22 Countermeasure sample generation method, model training method, image recognition method and device Active CN114742170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210426651.1A CN114742170B (en) 2022-04-22 2022-04-22 Countermeasure sample generation method, model training method, image recognition method and device

Publications (2)

Publication Number Publication Date
CN114742170A CN114742170A (en) 2022-07-12
CN114742170B true CN114742170B (en) 2023-07-25

Family

ID=82284694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210426651.1A Active CN114742170B (en) 2022-04-22 2022-04-22 Countermeasure sample generation method, model training method, image recognition method and device

Country Status (1)

Country Link
CN (1) CN114742170B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117671409A (en) * 2023-10-20 2024-03-08 北京百度网讯科技有限公司 Sample generation, model training and image processing methods, devices, equipment and media

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948658A (en) * 2019-02-25 2019-06-28 浙江工业大学 The confrontation attack defense method of Feature Oriented figure attention mechanism and application
CN111199233A (en) * 2019-12-30 2020-05-26 四川大学 Improved deep learning pornographic image identification method
CN111488886A (en) * 2020-03-12 2020-08-04 上海交通大学 Panorama image significance prediction method and system with attention feature arrangement and terminal
CN111899191A (en) * 2020-07-21 2020-11-06 武汉工程大学 Text image restoration method and device and storage medium
CN112085069A (en) * 2020-08-18 2020-12-15 中国人民解放军战略支援部队信息工程大学 Multi-target countermeasure patch generation method and device based on integrated attention mechanism
CN112287978A (en) * 2020-10-07 2021-01-29 武汉大学 Hyperspectral remote sensing image classification method based on self-attention context network
CN112836798A (en) * 2021-01-29 2021-05-25 华中科技大学 Non-directional white-box attack resisting method aiming at scene character recognition
US11049605B1 (en) * 2020-06-30 2021-06-29 Cortery AB Computer-implemented systems and methods for generating tailored medical recipes for mental health disorders
CN113255816A (en) * 2021-06-10 2021-08-13 北京邮电大学 Directional attack countermeasure patch generation method and device
CN113487506A (en) * 2021-07-06 2021-10-08 杭州海康威视数字技术股份有限公司 Countermeasure sample defense method, device and system based on attention denoising
CN113947704A (en) * 2021-10-09 2022-01-18 北京建筑大学 Confrontation sample defense system and method based on attention ranking
CN114239685A (en) * 2021-11-18 2022-03-25 北京墨云科技有限公司 Method and device for evaluating robustness of neural network image classification model
CN114299590A (en) * 2021-12-31 2022-04-08 中国科学技术大学 Training method of face completion model, face completion method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Zero-shot Image Classification Based on Generative Adversarial Networks; Zhang Yue; China Master's Theses Full-text Database, Information Science and Technology Series (Issue 01); I138-882 *

Also Published As

Publication number Publication date
CN114742170A (en) 2022-07-12

Similar Documents

Publication Publication Date Title
CN109948658B (en) Feature diagram attention mechanism-oriented anti-attack defense method and application
CN109840531B (en) Method and device for training multi-label classification model
CN108140032B (en) Apparatus and method for automatic video summarization
Yang et al. Deepfake network architecture attribution
CN110633745A (en) Image classification training method and device based on artificial intelligence and storage medium
CN111414946B (en) Artificial intelligence-based medical image noise data identification method and related device
CN108805016B (en) Head and shoulder area detection method and device
CN109214973A (en) For the confrontation safety barrier generation method of steganalysis neural network
Akhtar et al. Attack to fool and explain deep networks
CN112966685B (en) Attack network training method and device for scene text recognition and related equipment
CN114742170B (en) Countermeasure sample generation method, model training method, image recognition method and device
CN113435264A (en) Face recognition attack resisting method and device based on black box substitution model searching
CN112818774A (en) Living body detection method and device
Yang et al. Random subspace supervised descent method for regression problems in computer vision
CN115601629A (en) Model training method, image recognition method, medium, device and computing equipment
CN114118303B (en) Face key point detection method and device based on prior constraint
CN118350436A (en) Multimode invisible back door attack method, system and medium based on disturbance countermeasure
CN115311550A (en) Method and device for detecting semantic change of remote sensing image, electronic equipment and storage medium
CN109121133B (en) Location privacy protection method and device
Phoka et al. Image based phishing detection using transfer learning
CN117115883A (en) Training method of biological detection model, biological detection method and related products
CN116630749A (en) Industrial equipment fault detection method, device, equipment and storage medium
CN113673581B (en) Hard tag black box depth model countermeasure sample generation method and storage medium
CN113516182B (en) Visual question-answering model training and visual question-answering method and device
CN115880546A (en) Confrontation robustness evaluation method based on class activation mapping chart and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant