CN114861893B - Multi-channel aggregated countermeasure sample generation method, system and terminal - Google Patents


Info

Publication number
CN114861893B
CN114861893B (application CN202210793780.4A)
Authority
CN
China
Prior art keywords
model
neural network
image
representing
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210793780.4A
Other languages
Chinese (zh)
Other versions
CN114861893A (en)
Inventor
郑德生
吴欣隆
刘忠慧
周永
陈继鑫
尹相东
朱星丞
牟蜚声
温冬
李政禹
刘建超
柯武平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202210793780.4A
Publication of CN114861893A
Application granted
Publication of CN114861893B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures

Abstract

The invention discloses a multi-channel aggregated adversarial example generation method, system and terminal in the technical field of deep learning. The method establishes a plurality of model paths; adds random perturbation information to the original image to obtain a plurality of first perturbed images; inputs the original image into a first model path while inputting the first perturbed images into the other model paths respectively; calculates the gradient of each neural network model; performs adaptive weight aggregation on the gradient of each neural network model; updates the image sample generated by each neural network model according to the aggregated gradient; and, after repeating this step multiple times, outputs the final adversarial example. Because the first perturbed images integrate external perturbation factors, generalization is strong; the adaptive weight aggregation fits multiple perturbation factors of the image, further improving the generalization of the adversarial example.

Description

Multi-channel aggregated countermeasure sample generation method, system and terminal
Technical Field
The invention relates to the technical field of deep learning, and in particular to a multi-channel aggregated adversarial example generation method, system and terminal.
Background
In recent years, deep learning has developed rapidly and is widely applied in technical fields such as image recognition and speech recognition, positively promoting the development of science and technology, so research on deep learning is of great significance. At present, deep learning models are vulnerable to attack by adversarial examples, which cause a model to output inaccurate predictions; an adversarial example is made by adding imperceptible, subtle noise to a legitimate example of a neural network model. The process of attacking a deep learning model with adversarial examples is called an adversarial attack. Generated adversarial examples with a high attack success rate benefit current security research on neural network models and make it possible to train neural network models with strong defensive capability.
In neural network model training in the current field of adversarial-example security research, in order to give a model sufficient generalization capability to cope with new features not seen during training, the model is generally trained with adversarial examples that themselves generalize well. However, currently generated adversarial examples lack comprehensive consideration of external perturbation factors, so a model trained on them shows low generalization and weak defense when faced with complex and changeable external perturbations.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides a multi-channel aggregated adversarial example generation method, system and terminal.
The purpose of the invention is achieved by the following technical scheme. A multi-channel aggregated adversarial example generation method comprises the following steps:
establishing a plurality of model paths, wherein each model path comprises a plurality of neural network models connected in sequence, nodes at the same level in each model path use the same neural network model, and the neural network models corresponding to same-level nodes are connected adjacently;
adding random perturbation information to the original image to obtain a plurality of first perturbed images;
inputting the original image into a first model path while inputting the plurality of first perturbed images into the other model paths respectively; calculating the gradient of each neural network model; performing adaptive weight aggregation on the gradient of each neural network model according to the proportion of the current neural network model's predicted similarity to the total similarity of all neural network models at the same-level node; updating the image sample generated by each neural network model according to the gradient obtained by the adaptive weight aggregation; and repeating this step multiple times so that the first model path outputs the final adversarial example.
In one example, adding random perturbation information to the original image comprises:
performing geometric transformation and/or color processing and/or image fusion and/or filtering on the original image to obtain a first perturbed image.
In one example, the calculation formula of the adaptive weight aggregation is:

$$\bar{g}_t = \sum_{p=1}^{n} \frac{S_p^t}{\sum_{p=1}^{n} S_p^t}\,\nabla_{x_p^t}\, CE\!\left(M\!\left(x_p^t\right),\, y_{true}\right)$$

where $p$ is the model-path index; $n$ is the number of model paths; $t$ is the node index within a model path; $\bar{g}_t$ is the gradient aggregation over all model paths at the level-$t$ nodes; $S_p^t$ is the similarity at the level-$t$ node of the $p$-th model path; $\sum_{p=1}^{n} S_p^t$ is the sum of the similarities over all model paths at the level-$t$ nodes; $\nabla$ is the gradient operator; $x_p^t$ is the image input to the level-$t$ node of the $p$-th model path; $CE$ is the cross-entropy loss; $y_{true}$ is the true label value corresponding to the image $x$ input to the neural network model; and $M$ is the neural network model, $M(x_p^t)$ being its prediction for the input image.
In one example, the cross-entropy loss is calculated as:

$$CE = -\sum_{i=1}^{C} y_i \log \hat{y}_i$$

where $y_i$ is the $i$-th element of the encoding of the true label; $\hat{y}_i$ is the $i$-th element of the predicted value; and $C$ is the number of classes in the label.
In an example, the calculation formula of the ratio of the current neural network model's predicted similarity to the total similarity of all neural network models at the same-level node is:

$$w_p^t = \frac{S_p^t}{\sum_{p=1}^{n} S_p^t}, \qquad S_p^t = \left\| y_{true} - M\!\left(x_p^t\right) \right\|_1$$

where $p$ is the model-path index; $t$ is the node index within a model path; $S_p^t$ is the similarity at the level-$t$ node of the $p$-th model path; $y_{true}$ is the true label value corresponding to the image $x$ input to the neural network model; $M$ is the neural network model; $x_p^t$ is the image input to the level-$t$ node of the $p$-th model path; $\|\cdot\|$ denotes a norm; and $\|\cdot\|_1$ denotes the 1-norm, the sum of the absolute values of the vector elements.
In an example, the calculation formula for updating the image sample generated by each neural network model according to the gradient obtained by the adaptive weight aggregation is:

$$x_p^{t+1} = x_p^t + \sigma \cdot \mathrm{sign}\!\left(\bar{g}_t\right)$$

where $p$ is the model-path index; $t$ is the node index within a model path; $x_p^{t+1}$ is the image input to the level-$(t+1)$ node of the $p$-th model path; $x_p^t$ is the image input to the level-$t$ node of the $p$-th model path; $\sigma$ is the perturbation magnitude; and $\bar{g}_t$ is the gradient aggregation over all model paths at the level-$t$ nodes.
In an example, after updating the image samples generated by the neural network models according to the gradient obtained by the adaptive weight aggregation, the method further comprises:
performing constraint processing on the image samples to obtain temporary adversarial examples.
It should be further noted that the technical features corresponding to the above-mentioned method examples can be combined with each other or replaced to form a new technical solution.
The invention also includes a multi-channel aggregated adversarial example generation system, the system comprising:
a model unit comprising a plurality of model paths, each model path comprising a plurality of neural network models connected in sequence, wherein nodes at the same level in each model path use the same neural network model and the neural network models corresponding to same-level nodes are connected adjacently; a first model path receives the original image while the other model paths each receive a first perturbed image, a first perturbed image being obtained by adding random perturbation information to the original image;
an iterative calculation unit for calculating the gradient of each neural network model, performing adaptive weight aggregation on the gradient of each neural network model according to the proportion of the current neural network model's predicted similarity to the total similarity of all neural network models at the same-level node, updating the image sample generated by each neural network model according to the aggregated gradient, and performing the calculation cyclically multiple times so that the first model path outputs the final adversarial example.
In an example, the system further comprises a model pool unit for storing the neural network models of the ImageNet dataset.
It should be further noted that the technical features corresponding to the above-mentioned system examples can be combined with each other or replaced to form a new technical solution.
The present invention also includes a storage medium having stored thereon computer instructions which, when executed, perform the steps of the multi-channel aggregated adversarial example generation method of any one or more of the above examples.
The present invention also includes a terminal comprising a memory and a processor, the memory having stored thereon computer instructions executable on the processor, the processor executing the computer instructions to perform the steps of the multi-channel aggregated adversarial example generation method of any one or more of the above examples.
Compared with the prior art, the invention has the following beneficial effects:
The original image is perturbed to obtain a plurality of first perturbed images carrying different perturbation information, so the generated first perturbed images integrate external perturbation factors and generalize strongly. The gradient information of the multiple model paths undergoes adaptive weight aggregation and acts on the image update at every node of every model path, fitting multiple perturbation factors; this greatly improves the generalization of the generated final adversarial example and its attack capability. Meanwhile, inputting the original image and the first perturbed images into multiple model paths to train the adversarial-example generation model takes perturbations of different types and sizes into account while effectively reducing the overfitting caused by overly uniform samples. The adaptive weight aggregation also lets perturbation factors of different types and sizes obtain weights based on their own perturbation, so the adversarial example can integrate multiple perturbation factors without damaging the visually perceived image quality, further improving its generalization. Finally, training the attacked model on such highly generalized adversarial examples yields a model with strong defense.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention.
FIG. 1 is a flow chart of a method in an example of the invention;
FIG. 2 is a flow chart of a method in a preferred example of the invention;
FIG. 3 is a block diagram of the system of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that directions or positional relationships indicated by "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer" and the like are based on the drawings, serve only to simplify the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present invention. Furthermore, ordinal words (e.g., "first and second", "first through fourth", etc.) are used only to distinguish between objects; they do not limit an order and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "connected" and "coupled" are to be construed broadly and may mean, for example, fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected, or indirectly connected through an intermediary, or communicating between the interiors of two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific case.
Furthermore, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In one example, as shown in fig. 1, a multi-channel aggregated adversarial example generation method specifically comprises the following steps:
s1: establishing a plurality of model passages, wherein each model passage comprises a plurality of neural network models which are connected in sequence, nodes at the same level in each model passage adopt the same neural network model, and the neural network models corresponding to the nodes at the same level are connected in an adjacent mode;
s2: random disturbance information is added to the original images respectively to obtain a plurality of first disturbance images;
s3: inputting an original image into a first model passage, simultaneously respectively inputting a plurality of first disturbance images into other model passages, calculating the gradient of each neural network model, carrying out self-adaptive weight aggregation processing on the gradient of each neural network model according to the proportion of the predicted similarity of the current neural network model to the total similarity of all the neural network models of the same node, updating an image sample generated by each neural network model according to the gradient obtained by the self-adaptive weight aggregation processing, and circulating the step for multiple times to enable the first model passage to output a final countermeasure sample.
As an option, step S2 may be performed before step S1, or steps S1 and S2 may be performed simultaneously.
Further, in step S1 the neural network model is specifically a neural network model capable of image processing, randomly selected from a model pool while the model path is being established. The model pool in this example consists of classifiers for the ImageNet dataset, including but not limited to neural network models such as Inception v3, Inception v4, Inception-ResNet v2, Xception and ResNet v2-101. Each neural network model corresponds to a node of the current model path, and each node is one iteration of the neural network model. Same-level nodes are the neural network models at the same position in each model path: for example, if the first model path comprises neural network models a and b connected in sequence, the second comprises c and d, and the third comprises e and f, then models a, c and e are same-level nodes, as are models b, d and f. Further, the neural network models corresponding to same-level nodes are connected adjacently; that is, each neural network model is connected with the same-level neural network models in the adjacent model paths. For example, models a, c and e are connected in sequence, as are models b, d and f.
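As a minimal sketch of the structure just described (the pool names and helper function are illustrative, not the patent's code), the n-path grid in which every path reuses the same model at a given level can be represented as:

```python
import random

# Illustrative model pool; the text names ImageNet classifiers such as
# Inception v3/v4, Inception-ResNet v2, Xception and ResNet v2-101.
MODEL_POOL = ["InceptionV3", "InceptionV4", "InceptionResNetV2",
              "Xception", "ResNetV2-101"]

def build_model_paths(n_paths, n_levels, seed=0):
    """Return an n_paths x n_levels grid of model names in which every
    path uses the SAME model at a given level, matching the rule that
    same-level nodes use the same neural network model."""
    rng = random.Random(seed)
    per_level = [rng.choice(MODEL_POOL) for _ in range(n_levels)]
    return [list(per_level) for _ in range(n_paths)]

paths = build_model_paths(n_paths=3, n_levels=10)
```

Each row is one model path; column t holds the (shared) level-t model.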
Preferably, the types of the neural network models within each model path differ, though as an option they may be the same. In this example, the more model paths there are and the more neural-network-model nodes each path contains, the stronger the generalization of the resulting adversarial example.
Further, in step S2, each piece of perturbation information is added to the original image, and one first perturbed image is obtained correspondingly. Preferably, the random perturbation information added to the original image differs each time, so that more samples of the original image are obtained, more external perturbation factors are integrated, and the generalization of the image samples is improved.
Further, in step S3, the predicted similarity is calculated from the difference between the current neural network model's predicted value and the true label value. In the adaptive weight aggregation, the weight is specifically the proportion of a node's similarity to the total similarity of all neural network models at the same-level node. Through the adaptive weight aggregation step, the perturbation information of the generated image sample is updated so that perturbation factors of different types and sizes are weighted by their own perturbation; the adversarial example can thus integrate multiple perturbation factors without damaging the visually perceived image quality, further improving its generalization.
Further, the final adversarial example with the strongest generalization is output through the first model path (the path that receives the original image) in step S3, because the perturbation-information update at each level node of each model path is performed on the basis of the previous level (the level-1 node updates the input image). The first perturbed images input to the neural network models in the other paths may differ visually from the original image, so the adversarial examples those paths generate cannot achieve as good an attack effect. For example, with cropping as a geometric perturbation, the first perturbed image obtained by cropping the original image can still be recognized by the classifier, but the crop is already visible; the adversarial example generated by the path fed that image therefore carries an obvious cropping trace and fails the requirement that an adversarial example be visually similar to the original image.
In the method, the images are processed with multiple types of perturbations, so the generated adversarial example generalizes more strongly and attacks different models more effectively; training the attacked model with such adversarial examples improves its robustness to various external perturbation factors and yields a model with strong defense. Compared with adding the gradient-aggregated perturbations of different models to the original image alone, the adversarial example here is more sensitive to external perturbation factors. Meanwhile, inputting the original image and its multiple samples (the first perturbed images) into several model paths to train the adversarial-example generation model takes perturbations of different types and sizes into account while effectively reducing the overfitting caused by overly uniform samples. Further, adaptive weight aggregation of the gradient information across paths, applied to the image update at each node of each path, fits multiple perturbation factors and improves the generalization of the generated adversarial example, which in turn improves the defense of a model trained on it against perturbations of different types and sizes. Finally, gradient iteration of the image through multi-node neural network models along several model paths further improves the generalization of the final adversarial example.
It should be noted that in the present application only a small proportion of perturbation information is added to the original image; when a first perturbed image is input to a neural network model, the model's recognition accuracy on that input drops little, so a first perturbed image alone would generally not be used as an adversarial example to attack the model. The present application therefore further fits the multiple perturbation factors of the original image and the first perturbed images through several model paths to generate a final adversarial example with high attack strength.
In one example, adding random perturbation information to the original image comprises:
performing geometric transformation and/or color processing and/or image fusion and/or filtering on the original image to obtain a first perturbed image. Geometric transformations include cropping, enlarging, shrinking, translation, shear transformation, rotation, and the like; color processing modifies the image's RGBA values, where R denotes red, G green, B blue, and A the transparency value; image fusion merges images by an algorithm such as weighted averaging or wavelet-transform-based fusion; image filtering includes mean filtering, Gaussian filtering, sharpening, and the like.
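The four perturbation families above can be sketched with numpy as follows; these are illustrative stand-ins (a circular shift, a channel scale, a weighted average, a 3x3 mean filter), not the patent's exact operators:

```python
import numpy as np

def geometric_shift(img, dx=1):
    """Geometric transformation: circularly translate right by dx pixels."""
    return np.roll(img, dx, axis=1)

def color_scale(img, factor=0.9):
    """Color processing: scale all channels by a constant factor."""
    return np.clip(img * factor, 0.0, 1.0)

def fuse(img_a, img_b, w=0.8):
    """Image fusion by weighted averaging."""
    return w * img_a + (1.0 - w) * img_b

def mean_filter3(img):
    """Filtering: 3x3 mean filter with edge padding."""
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = img.shape[0], img.shape[1]
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

demo = np.ones((4, 4, 3))  # a flat dummy "image" in [0, 1]
perturbed = fuse(color_scale(demo), mean_filter3(demo))
```

Each call maps an H x W x C array to a same-shaped first perturbed image.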
In one example, the calculation formula of the adaptive weight aggregation is:

$$\bar{g}_t = \sum_{p=1}^{n} \frac{S_p^t}{\sum_{p=1}^{n} S_p^t}\,\nabla_{x_p^t}\, CE\!\left(M\!\left(x_p^t\right),\, y_{true}\right)$$

where $p$ is the model-path index; $n$ is the number of model paths; $t$ is the node index within a model path; $\bar{g}_t$ is the gradient aggregation over all model paths at the level-$t$ nodes; $S_p^t$ is the similarity at the level-$t$ node of the $p$-th model path; $\sum_{p=1}^{n} S_p^t$ is the sum of the similarities over all model paths at the level-$t$ nodes; $\nabla$ is the gradient operator; $x_p^t$ is the image input to the level-$t$ node of the $p$-th model path; $CE$ is the cross-entropy loss; $y_{true}$ is the true label value corresponding to the image $x$ input to the neural network model; and $M$ is the neural network model, $M(x_p^t)$ being its prediction for the input image.
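A numeric sketch of this aggregation (shapes and similarity values are made up for illustration): each path's level-t gradient is weighted by its share of the total similarity, then the weighted gradients are summed.

```python
import numpy as np

def aggregate_gradients(grads, sims):
    """Adaptive weight aggregation: weight each path's level-t gradient
    by S_p^t / sum_p S_p^t and sum over the n paths."""
    sims = np.asarray(sims, dtype=float)
    weights = sims / sims.sum()
    return sum(w * g for w, g in zip(weights, grads))

# Toy values: three paths, 2x2 gradients, made-up similarities.
grads = [np.full((2, 2), v) for v in (1.0, 2.0, 3.0)]
agg = aggregate_gradients(grads, sims=[1.0, 1.0, 2.0])
```

With similarities (1, 1, 2) the weights are (0.25, 0.25, 0.5), so every entry of the aggregated gradient is 0.25*1 + 0.25*2 + 0.5*3 = 2.25.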
In an example, the model algorithm parameters need to be defined. The perturbation size in this application is 16, that is, the maximum infinity-norm difference $\epsilon$ between the generated adversarial example and the original image is 16; the number of iterations is $T$, i.e., there are $T$ levels of nodes. As an option, $T = 10$, in which case the learning rate is $\sigma = \epsilon / T = 1.6$. On this basis, the loss of the model on the input image is further calculated; the specific calculation formula is:

$$CE = -\sum_{i=1}^{C} y_i \log \hat{y}_i$$

where $y_i$ is the $i$-th element of the encoding of the true label; $\hat{y}_i$ is the $i$-th element of the predicted value; and $C$ is the number of classes in the label.
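The cross-entropy above can be computed directly; this small sketch (with an illustrative one-hot label and prediction) mirrors the formula term by term:

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """CE = -sum_i y_i * log(yhat_i) over the C classes; y_true is the
    one-hot true label, y_pred the predicted probabilities."""
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0)
    return float(-np.sum(np.asarray(y_true) * np.log(y_pred)))

loss = cross_entropy([0, 1, 0], [0.1, 0.8, 0.1])  # -log(0.8)
```

The clip guards against log(0) for probabilities that vanish numerically.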
In one example, the calculation formula of the ratio of the current neural network model's predicted similarity to the total similarity of all neural network models at the same-level node is:

$$w_p^t = \frac{S_p^t}{\sum_{p=1}^{n} S_p^t}, \qquad S_p^t = \left\| y_{true} - M\!\left(x_p^t\right) \right\|_1$$

where $\|\cdot\|$ denotes a norm and $\|\cdot\|_1$ denotes the 1-norm, the sum of the absolute values of the vector elements.
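A sketch of the similarity and its ratio, reading $S_p^t$ as the 1-norm of the difference between the true label and the prediction (per the text, the similarity is computed from that difference); the vectors used are illustrative:

```python
import numpy as np

def similarity(y_true, y_pred):
    """S_p^t = ||y_true - M(x)||_1: sum of absolute element differences."""
    return float(np.abs(np.asarray(y_true, dtype=float)
                        - np.asarray(y_pred, dtype=float)).sum())

def weight(sims, p):
    """Ratio of path p's similarity to the total over all same-level nodes."""
    return sims[p] / float(sum(sims))

s = similarity([0, 1, 0], [0.1, 0.8, 0.1])  # 0.1 + 0.2 + 0.1
```

These weights are exactly the coefficients used in the aggregation formula.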
In an example, the calculation formula for updating the image sample generated by each neural network model according to the gradient obtained by the adaptive weight aggregation is:

$$x_p^{t+1} = x_p^t + \sigma \cdot \mathrm{sign}\!\left(\bar{g}_t\right)$$

where $x_p^{t+1}$ is the image input to the level-$(t+1)$ node of the $p$-th model path; $\sigma$ is the perturbation magnitude; and $\bar{g}_t$ is the gradient aggregation over all model paths at the level-$t$ nodes.
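One update step can be sketched as follows. The use of the sign function is an assumption (the original renders the formula as an image), chosen because it makes the step size $\sigma = \epsilon / T$ walk exactly to the $\epsilon$ bound over $T$ steps:

```python
import numpy as np

def update_image(x, agg_grad, sigma=1.6):
    """One step: move the image by sigma along the sign of the
    aggregated gradient (sign(.) is an assumption; see lead-in)."""
    return x + sigma * np.sign(agg_grad)

stepped = update_image(np.zeros(3), np.array([0.5, -2.0, 0.0]))
```

Each pixel moves by exactly +/-sigma (or 0 where the gradient is 0).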
In an example, after updating the image samples generated by the neural network models according to the gradient obtained by the adaptive weight aggregation process, the method further includes:
performing constraint processing on the image sample to obtain a temporary adversarial example; the specific calculation formula is:

$$x_p^{t+1} = \mathrm{clip}\!\left(x_p^{t+1},\; x - \epsilon,\; x + \epsilon\right)$$

where $\mathrm{clip}$ constrains its first argument to the given range, $x$ is the original input image, and $\epsilon$ is the maximum allowed perturbation.
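A sketch of the constraint step under one common reading of the clip operation (the epsilon-ball and pixel-range arguments are assumptions consistent with the epsilon = 16 bound stated earlier):

```python
import numpy as np

def constrain(x_adv, x_orig, eps=16.0, lo=0.0, hi=255.0):
    """Clip the updated sample into the eps-ball around the original
    image, then into the valid pixel range [lo, hi]."""
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return np.clip(x_adv, lo, hi)

tmp = constrain(np.array([300.0, -40.0]), np.array([100.0, 10.0]))
```

Here 300 is pulled back to 100 + 16 = 116, and -40 first to 10 - 16 = -6, then to the pixel floor 0.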
Combining the above examples gives the preferred example of the present application. As shown in fig. 2, a multi-channel aggregation-based adversarial example generation method comprises the following steps:
S1': randomly select an original image and add random perturbation information, each piece of perturbation information yielding one first perturbed image, so that the original image and the perturbed images together number n;
S2': build n model paths, each consisting of 10 levels of nodes, the neural network model at each node being randomly selected from the model pool, with same-level nodes of all model paths using the same neural network model;
S3': input the n images into the current-level nodes of the n paths respectively, and calculate the gradients;
S4': perform adaptive weight aggregation on the gradients of the same-level nodes across the n paths;
S5': update the n images according to the aggregated gradient;
S6': repeat S3'-S5' for 10 cycles until the 10th-level nodes are reached, and output the final adversarial example of the original image in the first model path.
The invention also comprises a multi-path-aggregation-based adversarial sample model training method, which shares the same inventive concept as the above high-transferability adversarial sample generation method and specifically comprises:
training the neural network model on the adversarial samples, so that the model learns their characteristics and classifies them accurately, yielding a model with high defensive capability. The model learns the perturbation characteristics that distinguish the adversarial samples from the original images, corrects its classification results accordingly, achieves accurate classification, and improves the security of the neural network model. The final adversarial samples generated by the method generalize well when attacking black-box models and achieve a high attack success rate against them.
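A toy illustration of this adversarial-training idea, using a logistic-regression stand-in and FGSM-style perturbations (the model, optimizer, and clean/adversarial mixing scheme are illustrative assumptions, not the patent's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data standing in for image classes.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(5)
lr, eps = 0.1, 0.1
for _ in range(200):
    # FGSM-style adversarial copies of the inputs: perturb each sample
    # along the sign of the loss gradient w.r.t. the input, which for
    # logistic regression is (p - y) * w.
    p = sigmoid(X @ w)
    X_adv = X + eps * np.sign(np.outer(p - y, w))
    # Train on clean and adversarial samples together so the model
    # learns the perturbation characteristics.
    for Xb in (X, X_adv):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)

acc = ((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean()
```

After training, the classifier retains high accuracy on the clean data while having seen perturbed variants of every sample, which is the defensive effect the patent describes.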
The invention also comprises a multi-path aggregated adversarial sample generation system, which comprises a model unit and an iterative calculation unit.
As shown in FIG. 3, the model unit comprises n model paths, each consisting of 10 sequentially connected neural network models; nodes at the same level in each model path adopt the same neural network model, and the neural network models corresponding to same-level nodes are connected adjacently. The first model path receives the original image while the other model paths each receive a first perturbed image, and the final adversarial sample is generated by the first model path with the assistance of gradient aggregation across all model paths. The first perturbed images are obtained by adding random perturbation information to the original image.
The iterative calculation unit is configured to calculate the gradient of each neural network model, perform adaptive weight aggregation processing on the gradients according to the ratio of the prediction similarity of the current neural network model to the total similarity of all neural network models at the same node, update the image sample generated by each neural network model according to the aggregated gradient, and repeat this iterative calculation to generate the final adversarial sample.
In an example, the system further comprises a model pool unit for storing neural network models trained on the ImageNet dataset, including but not limited to Inception-v3, Inception-v4, Inception-ResNet-v2, Xception, and ResNet-v2-101.
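A minimal sketch of how such a pool could be used to build the model paths, with hypothetical string identifiers standing in for the pretrained networks (in practice the names would map to loaded models, e.g. via torchvision or timm):

```python
import random

# Hypothetical pool of model identifiers.
MODEL_POOL = ["inception_v3", "inception_v4", "inception_resnet_v2",
              "xception", "resnet_v2_101"]

def build_paths(n_paths, n_levels, seed=0):
    """Choose one model per level from the pool, then reuse that
    choice across every path, so that same-level nodes of all
    model paths share the same neural network model."""
    rnd = random.Random(seed)
    per_level = [rnd.choice(MODEL_POOL) for _ in range(n_levels)]
    return [list(per_level) for _ in range(n_paths)]

paths = build_paths(n_paths=4, n_levels=10)
```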
The present application further includes a storage medium, which shares the same inventive concept as the multi-path aggregated adversarial sample generation method of any one or more of the above examples and stores computer instructions that, when executed, perform the steps of that method.
Based on this understanding, the technical solution of the present embodiment, or the parts of it that contribute to the prior art, may essentially be embodied as a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The application further includes a terminal, which shares the same inventive concept as the multi-path aggregated adversarial sample generation method of any one or more of the above examples. The terminal comprises a memory and a processor, the memory storing computer instructions executable on the processor; the processor, when executing the computer instructions, performs the steps of the multi-path aggregated adversarial sample generation method. The processor may be a single-core or multi-core central processing unit, an application-specific integrated circuit, or one or more integrated circuits configured to implement the present invention.
Each functional unit in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above detailed description explains the invention in detail and should not be construed as limiting it; various modifications and substitutions made by those skilled in the art without departing from the spirit of the invention fall within its scope.

Claims (8)

1. A multi-path aggregated adversarial sample generation method, characterized by comprising the following steps:
S1: establishing a plurality of model paths, wherein each model path comprises a plurality of sequentially connected neural network models, nodes at the same level in each model path adopt the same neural network model, and the neural network models corresponding to same-level nodes are connected adjacently;
S2: adding random perturbation information to an original image a plurality of times to obtain a plurality of first perturbed images;
S3: inputting the original image into a first model path while inputting the plurality of first perturbed images into the other model paths respectively; calculating the gradient of each neural network model; performing adaptive weight aggregation processing on the gradients of the neural network models according to the ratio of the prediction similarity of the current neural network model to the total similarity of all neural network models at the same node; updating the image sample generated by each neural network model according to the gradient obtained by the adaptive weight aggregation processing; and repeating step S3 a plurality of times so that the first model path outputs a final adversarial sample;
the calculation formula of the adaptive weight aggregation processing is:

$$\bar{g}_t=\sum_{p=1}^{n}\frac{W_p^t}{\sum_{q=1}^{n}W_q^t}\,\nabla_{x_p^t}\,CE\left(M(x_p^t),\,y_{true}\right)$$

wherein $p$ represents the model path index; $n$ represents the number of model paths; $t$ represents the node index within a model path; $\bar{g}_t$ represents the aggregated gradient of all model paths at the $t$-level nodes; $W_p^t$ represents the similarity of the $t$-level node of the $p$-th model path; $\sum_{q=1}^{n}W_q^t$ is the sum of the similarities of all model paths at the $t$-level nodes; $\nabla$ represents the gradient operator; $x_p^t$ represents the image input to the $t$-level node of the $p$-th model path; $CE$ represents the cross-entropy loss; $y_{true}$ represents the true label value corresponding to the input image $x$; $M$ represents the neural network model; and $M(x_p^t)$ represents the prediction of the neural network model $M$ for the input image;
the ratio of the prediction similarity of the current neural network model to the total similarity of all neural network models at the same node is the ratio of the similarity $W_p^t$ of the $t$-level node of the $p$-th model path to the sum $\sum_{q=1}^{n}W_q^t$ of the similarities of all model paths at the $t$-level nodes, and the similarity $W_p^t$ is calculated as:

$$W_p^t=\frac{1}{\left\|M(x_p^t)-y_{true}\right\|_1}$$

wherein $\|\cdot\|$ represents a norm and $\|\cdot\|_1$ represents the sum of the absolute values of the vector elements.
2. The multi-path aggregated adversarial sample generation method of claim 1, wherein adding random perturbation information to the original image comprises:
performing geometric transformation processing and/or color processing and/or image fusion processing and/or filtering processing on the original image to obtain a first perturbed image.
3. The multi-path aggregated adversarial sample generation method of claim 1, wherein the cross-entropy loss is calculated as:

$$CE=-\sum_{i=1}^{C}y_i\log\left(\hat{y}_i\right)$$

wherein $y_i$ is the $i$-th component of the one-hot encoding of the true label; $\hat{y}_i$ is the $i$-th component of the predicted value; and $C$ represents the number of classes in the label.
4. The multi-path aggregated adversarial sample generation method of claim 1, wherein the image sample generated by each neural network model is updated according to the gradient obtained by the adaptive weight aggregation processing as:

$$x_p^{t+1}=x_p^t+\sigma\cdot\operatorname{sign}\left(\bar{g}_t\right)$$

wherein $p$ represents the model path index; $t$ represents the node index within a model path; $x_p^{t+1}$ represents the image input to the $(t+1)$-level node of the $p$-th model path; $x_p^t$ represents the image input to the $t$-level node of the $p$-th model path; $\sigma$ represents the perturbation step size; and $\bar{g}_t$ represents the aggregated gradient of all model paths at the $t$-level nodes.
5. The multi-path aggregated adversarial sample generation method of claim 1, wherein after the image sample generated by each neural network model is updated according to the gradient obtained by the adaptive weight aggregation processing, the method further comprises:
performing constraint processing on the image sample to obtain a temporary adversarial sample.
6. A multi-path aggregated adversarial sample generation system, characterized in that the system comprises:
a model unit comprising a plurality of model paths, each model path comprising a plurality of sequentially connected neural network models, wherein nodes at the same level in each model path adopt the same neural network model and the neural network models corresponding to same-level nodes are connected adjacently; a first model path receives an original image while the other model paths each receive a first perturbed image; the first perturbed images are obtained by adding random perturbation information to the original image;
an iterative calculation unit configured to calculate the gradient of each neural network model, perform adaptive weight aggregation processing on the gradients of the neural network models according to the ratio of the prediction similarity of the current neural network model to the total similarity of all neural network models at the same node, update the image sample generated by each neural network model according to the gradient obtained by the adaptive weight aggregation processing, and repeat the calculation a plurality of times so that the first model path outputs a final adversarial sample;
the calculation formula of the adaptive weight aggregation processing is:

$$\bar{g}_t=\sum_{p=1}^{n}\frac{W_p^t}{\sum_{q=1}^{n}W_q^t}\,\nabla_{x_p^t}\,CE\left(M(x_p^t),\,y_{true}\right)$$

wherein $p$ represents the model path index; $n$ represents the number of model paths; $t$ represents the node index within a model path; $\bar{g}_t$ represents the aggregated gradient of all model paths at the $t$-level nodes; $W_p^t$ represents the similarity of the $t$-level node of the $p$-th model path; $\sum_{q=1}^{n}W_q^t$ is the sum of the similarities of all model paths at the $t$-level nodes; $\nabla$ represents the gradient operator; $x_p^t$ represents the image input to the $t$-level node of the $p$-th model path; $CE$ represents the cross-entropy loss; $y_{true}$ represents the true label value corresponding to the input image $x$; $M$ represents the neural network model; and $M(x_p^t)$ represents the prediction of the neural network model $M$ for the input image;
the ratio of the prediction similarity of the current neural network model to the total similarity of all neural network models at the same node is the ratio of the similarity $W_p^t$ of the $t$-level node of the $p$-th model path to the sum $\sum_{q=1}^{n}W_q^t$ of the similarities of all model paths at the $t$-level nodes, and the similarity $W_p^t$ is calculated as:

$$W_p^t=\frac{1}{\left\|M(x_p^t)-y_{true}\right\|_1}$$

wherein $\|\cdot\|$ represents a norm and $\|\cdot\|_1$ represents the sum of the absolute values of the vector elements.
7. The multi-path aggregated adversarial sample generation system of claim 6, wherein the system further comprises a model pool unit for storing neural network models trained on the ImageNet dataset.
8. A terminal comprising a memory and a processor, the memory storing computer instructions executable on the processor, characterized in that the processor, when executing the computer instructions, performs the steps of the multi-path aggregated adversarial sample generation method of any one of claims 1 to 5.
CN202210793780.4A 2022-07-07 2022-07-07 Multi-channel aggregated countermeasure sample generation method, system and terminal Active CN114861893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210793780.4A CN114861893B (en) 2022-07-07 2022-07-07 Multi-channel aggregated countermeasure sample generation method, system and terminal


Publications (2)

Publication Number Publication Date
CN114861893A CN114861893A (en) 2022-08-05
CN114861893B true CN114861893B (en) 2022-09-23

Family

ID=82625925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210793780.4A Active CN114861893B (en) 2022-07-07 2022-07-07 Multi-channel aggregated countermeasure sample generation method, system and terminal

Country Status (1)

Country Link
CN (1) CN114861893B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116543268B (en) * 2023-07-04 2023-09-15 西南石油大学 Channel enhancement joint transformation-based countermeasure sample generation method and terminal
CN117540791B (en) * 2024-01-03 2024-04-05 支付宝(杭州)信息技术有限公司 Method and device for countermeasure training

Citations (6)

Publication number Priority date Publication date Assignee Title
CN110276377A (en) * 2019-05-17 2019-09-24 杭州电子科技大学 A kind of confrontation sample generating method based on Bayes's optimization
US10783401B1 (en) * 2020-02-23 2020-09-22 Fudan University Black-box adversarial attacks on videos
CN112101470A (en) * 2020-09-18 2020-12-18 上海电力大学 Guide zero sample identification method based on multi-channel Gauss GAN
CN113449783A (en) * 2021-06-17 2021-09-28 广州大学 Countermeasure sample generation method, system, computer device and storage medium
CN114299313A (en) * 2021-12-24 2022-04-08 北京瑞莱智慧科技有限公司 Method and device for generating anti-disturbance and storage medium
CN114663665A (en) * 2022-02-28 2022-06-24 华南理工大学 Gradient-based confrontation sample generation method and system

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
CN109086884B (en) * 2018-07-17 2020-09-01 上海交通大学 Neural network attack defense method based on gradient reverse countermeasure sample restoration
US11373093B2 (en) * 2019-06-26 2022-06-28 International Business Machines Corporation Detecting and purifying adversarial inputs in deep learning computing systems
CN111368908B (en) * 2020-03-03 2023-12-19 广州大学 HRRP non-target countermeasure sample generation method based on deep learning
CN111428071B (en) * 2020-03-26 2022-02-01 电子科技大学 Zero-sample cross-modal retrieval method based on multi-modal feature synthesis
CN111581405B (en) * 2020-04-26 2021-10-26 电子科技大学 Cross-modal generalization zero sample retrieval method for generating confrontation network based on dual learning
CN112183717A (en) * 2020-08-28 2021-01-05 北京航空航天大学 Neural network training method and device based on critical path
CN112364885B (en) * 2020-10-12 2022-10-11 浙江大学 Confrontation sample defense method based on interpretability of deep neural network model
CN112396123A (en) * 2020-11-30 2021-02-23 上海交通大学 Image recognition method, system, terminal and medium based on convolutional neural network
CN113269239B (en) * 2021-05-13 2024-04-19 河南大学 Relation network node classification method based on multichannel convolutional neural network
CN114143040B (en) * 2021-11-08 2024-03-22 浙江工业大学 Antagonistic signal detection method based on multichannel characteristic reconstruction
CN114549933A (en) * 2022-02-21 2022-05-27 南京大学 Countermeasure sample generation method based on target detection model feature vector migration
CN114283341B (en) * 2022-03-04 2022-05-17 西南石油大学 High-transferability confrontation sample generation method, system and terminal


Non-Patent Citations (1)

Title
A Survey of Research on Graph Adversarial Attacks; Zhai Zhengli et al.; Computer Engineering and Applications; 31 December 2021; vol. 57, no. 7; pp. 14-21 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant