CN108257116A - A method for generating adversarial images - Google Patents

A method for generating adversarial images

Info

Publication number
CN108257116A
CN108257116A (application CN201711487948.4A)
Authority
CN
China
Prior art keywords
iteration, image, neural network, network model, deep neural network
Prior art date
2017-12-30
Legal status (the legal status is an assumption and is not a legal conclusion)
Pending
Application number
CN201711487948.4A
Other languages
Chinese (zh)
Inventor
朱军 (Jun Zhu)
董胤蓬 (Yinpeng Dong)
廖方舟 (Fangzhou Liao)
庞天宇 (Tianyu Pang)
苏航 (Hang Su)
胡晓林 (Xiaolin Hu)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date: 2017-12-30
Filing date: 2017-12-30
Publication date: 2018-07-06
Application filed by Tsinghua University
Priority to CN201711487948.4A
Publication of CN108257116A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Classification techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning

Abstract

The present invention provides a method for generating adversarial images, comprising: using a gradient algorithm, obtaining the loss of a first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss; using the momentum term of the current round, generating the image of the current round from the image obtained in the previous round, until the iteration reaches a preset number of iteration rounds, and taking the image obtained in the last round as the adversarial image. By iterating on the original image with a momentum term, the method provided by the present invention obtains adversarial images that can attack deep neural network models, effectively relieves the coupling between white-box attack success rate and transferability, and achieves high attack success rates against both white-box and black-box models. It can be used for adversarial training to improve the accuracy of image classification with deep neural network models, and for attacking deep neural network models.

Description

A method for generating adversarial images
Technical field
The present invention relates to the field of machine learning, and more particularly, to a method for generating adversarial images.
Background art
Deep neural networks, as one family of machine learning methods, have attracted wide attention in recent years owing to their remarkable results in speech recognition, image classification, object detection, and many other areas. However, deep neural network models that reach very high accuracy on many tasks are highly vulnerable to attack in adversarial environments. In an adversarial environment, a deep neural network may receive adversarial examples maliciously constructed from normal samples, such as images or audio. These adversarial examples are easily misclassified by deep learning models, yet a human observer can hardly tell them apart from normal samples. Since adversarial examples can be used to measure the robustness of different deep-learning-based systems, the generation of adversarial examples has become an important research area. Adversarial examples can therefore also serve as a form of data augmentation for adversarially training deep neural networks, yielding more robust networks that maintain high accuracy even under adversarial conditions.
Scenarios that require generating adversarial examples include adversarial training of deep neural networks and attacks on deep neural networks. Attacks fall into two kinds: white-box attacks and black-box attacks. In a white-box attack, the attacker knows the structure and parameters of the target neural network and can generate adversarial examples with the single-step Fast Gradient Sign Method, multi-step iterative methods, or optimization-based methods. Because generated adversarial examples have some transferability, they can also be used to attack black-box models whose structure and parameters are unknown, i.e., black-box attacks.
In practice, however, attacking a black-box model is very difficult, and a black-box model with defense measures is even harder to attack successfully. For example, ensemble adversarial training, which adds adversarial examples to the training process, improves the robustness of the trained deep neural network, and existing black-box attack methods can hardly generate adversarial examples with a high attack success rate against it. The root cause of this phenomenon is the coupling and trade-off between the white-box attack success rate and the transferability of existing attack methods: no method achieves both a good white-box attack success rate and good transferability at the same time.
Specifically, for the single-step Fast Gradient Sign Method, the adversarial examples it constructs transfer well, but its success rate against white-box models is severely limited, so it cannot attack black-box models effectively; on the other hand, multi-step iterative methods and optimization-based methods attack white-box models well, but the adversarial examples they construct transfer poorly and cannot attack black-box models effectively.
Given the above phenomenon, one can also train a substitute model to fit the input-output relationship of a black-box model, thereby converting a black-box attack into a white-box attack. However, this approach needs to know the probability distribution predicted by the black-box model and requires a huge number of queries, which makes it hard to succeed in practical application scenarios.
Therefore, for deep neural network models used for image classification, adversarial examples that are images used to adversarially train the model are called adversarial images. Adversarial images are used to adversarially train or attack deep neural network models. With existing methods for generating adversarial images for deep neural network models, the generated images can hardly achieve high attack success rates against both white-box and black-box models, and thus cannot make deep neural networks reach high accuracy under adversarial conditions.
Summary of the invention
To overcome the deficiency of the prior art that it is difficult to achieve high attack success rates against both white-box and black-box models, the present invention provides a method for generating adversarial images.
The present invention provides a method for generating adversarial images, comprising:
S1: using a gradient algorithm, obtaining the loss of a first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss;
S2: using the momentum term of the current round, generating the image obtained in the current round from the image obtained in the previous round, until the iteration reaches a preset number of iteration rounds, and taking the image obtained in the last round as the adversarial image.
Preferably, when the first deep neural network model is a single model, the loss of the first deep neural network model is the cross entropy of the first deep neural network model.
Preferably, before step S1 the method further comprises:
S0: combining multiple third deep neural network models into a neural network ensemble, taking the obtained ensemble model as the first deep neural network model, and determining the loss of the first deep neural network model from the unnormalized probabilities of the multiple third deep neural network models;
wherein each third deep neural network model is a single model.
Preferably, the loss J(x, y) of the first deep neural network model is
J(x, y) = −1_y · log(softmax(l(x))),
where x denotes the original image, y denotes the true class of x, 1_y denotes the one-hot encoding of y, and l(x) is the weighted average of the unnormalized probabilities of the multiple third deep neural network models;
where l_k(x) is the unnormalized probability of the k-th third deep neural network model, w_k is the weight of the k-th third deep neural network model, w_k ≥ 0, and Σ_{k=1}^{K} w_k = 1.
Preferably, step S1 specifically comprises:
using the Fast Gradient Sign Method, obtaining the loss of the first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss.
Preferably, step S2 specifically comprises:
using the momentum term of each round, with the quotient of a preset noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is not equal to the true class of the original image.
Preferably, step S2 specifically comprises:
using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is a preset first class;
wherein the first class differs from the true class of the original image.
Preferably, step S1 specifically comprises:
using the Fast Gradient Method, obtaining the loss of the first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss.
Preferably, step S2 specifically comprises:
using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is not equal to the true class of the original image.
Preferably, step S2 specifically comprises:
using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is a preset first class;
wherein the first class differs from the true class of the original image.
With the method for generating adversarial images provided by the present invention, iterating on the original image with a momentum term yields adversarial images that can attack deep neural network models, effectively relieves the coupling between white-box attack success rate and transferability, and achieves high attack success rates against both white-box and black-box models; the method can adversarially train or attack various deep neural network models and is applicable to sample sets of various original images, and through adversarial training the deep neural network model gains better robustness, thereby improving the accuracy of image classification with deep neural network models.
Description of the drawings
Fig. 1 is a flowchart of a method for generating adversarial images according to an embodiment of the present invention.
Detailed description of embodiments
The specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples are intended to illustrate the present invention, not to limit its scope.
Before describing the embodiments of the present invention, we first explain the technical terms involved.
A single model is a model that contains only one deep neural network.
An ensemble model is a model obtained by combining multiple single models into a neural network ensemble.
A neural network ensemble learns the same problem with a finite number of neural networks; the output of the ensemble for a given input example is jointly determined by the outputs of its constituent neural networks on that example.
It should be noted that, in the process of training a deep neural network model for image classification, the deep neural network model is attacked with adversarial images. Attacks on a deep neural network model with adversarial images include targeted attacks and untargeted attacks.
A targeted attack makes the deep neural network model output a specified wrong classification result with high confidence.
An untargeted attack makes the deep neural network model output an arbitrary wrong classification result with high confidence.
Fig. 1 is a flowchart of a method for generating adversarial images according to an embodiment of the present invention. As shown in Fig. 1, a method for generating adversarial images comprises: step S1, using a gradient algorithm, obtaining the loss of a first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss; step S2, using the momentum term of the current round, generating the image of the current round from the image obtained in the previous round, until the iteration reaches a preset number of iteration rounds, and taking the image obtained in the last round as the adversarial image.
Specifically, the adversarial image is obtained by iterating on the original image.
The original image may be a captured image or a sample image. Original images can be used to train the deep neural network.
In step S1, in the current iteration round, the loss of the first deep neural network model is obtained from the image obtained in the previous round; after the loss is obtained, a gradient algorithm is applied to it, and the result of this operation is taken as the momentum term of the current round.
In gradient algorithms, the momentum term serves to avoid the update oscillations that may occur during the iteration and to escape poor local optima.
In step S2, in the current round, the momentum term of the current round is added to the image obtained in the previous round to generate the image of the current round.
Through the above iterative process, the original image is iterated for several rounds until the preset number of iteration rounds is reached, and the image obtained in the last round is taken as the adversarial image.
Several rounds means one or more rounds.
The difference between the adversarial image and the original image is smaller than a preset noise threshold, i.e., a human cannot perceive the difference between the adversarial image and the original image.
Specifically, a concrete form of this bound on the difference between the adversarial image and the original image serves as the constraint for generating the adversarial image.
Common constraints include:
The L∞ constraint: the infinity norm of the difference between the adversarial image and the original image is smaller than the preset noise threshold.
The L2 constraint: the 2-norm of the difference between the adversarial image and the original image is smaller than the preset noise threshold. Both constraints are illustrated in the sketch below.
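As an illustration (a sketch, not part of the patent text; the array names are assumptions made here), the two constraints can be checked as follows:

    import numpy as np

    def satisfies_linf(x_adv, x, eps):
        # L-infinity constraint: the largest absolute pixel change is at most eps
        return np.max(np.abs(x_adv - x)) <= eps

    def satisfies_l2(x_adv, x, eps):
        # L2 constraint: the Euclidean norm of the whole perturbation is at most eps
        return np.linalg.norm((x_adv - x).ravel()) <= eps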
The momentum term accelerates convergence and avoids poor local optima while making the update direction more stable, so it yields a higher attack success rate against white-box models; and because the momentum term participates in the iteration, the adversarial image has better transferability, and therefore a higher attack success rate against black-box models.
Therefore, in image classification, adversarial images can be used to adversarially train the first deep neural network model. Since the images used in adversarial training are adversarial examples, and adversarial examples possess transferability, they can also be used to adversarially train any second deep neural network model different from the first deep neural network model.
Before adversarial training, inputting the images to be used in adversarial training into the first deep neural network model can fool the first deep neural network model into outputting wrong classification results. The adversarial images can also be transferred to deep neural network models elsewhere, such as a second deep neural network model, causing the second deep neural network model to misclassify.
By iterating on the original image with a momentum term, the embodiment of the present invention obtains images that can adversarially train a deep neural network model, effectively relieves the coupling between white-box attack success rate and transferability, and achieves high attack success rates against both white-box and black-box models; it can train various deep neural network models and is applicable to sample sets of various original images, and through adversarial training the deep neural network model gains better robustness, improving the accuracy of image classification with deep neural network models.
Based on the above embodiment, as a preferred embodiment, when the first deep neural network model is a single model, the loss of the first deep neural network model is the cross entropy of the first deep neural network model.
The loss function, also called the cost function, is the objective function of neural network optimization; training or optimizing a neural network is the process of minimizing the loss function, and the smaller the loss value, the closer the predictions are to the true values.
Common loss functions include the quadratic cost function, the cross-entropy cost function, and the log-likelihood function.
Preferably, when the first deep neural network model is a single model, the cross-entropy cost function is chosen as the loss function of the first deep neural network model; accordingly, the loss of the first deep neural network model is the cross entropy.
The cross-entropy cost function derives from the concept of entropy in information theory and is a common cost function in neural network classification problems such as image classification.
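For concreteness, a minimal sketch of this cross-entropy loss computed from a model's unnormalized probabilities (logits) is given below; it is not part of the patent text, and the function names are illustrative:

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax over a 1-D vector of logits
        z = logits - np.max(logits)
        e = np.exp(z)
        return e / np.sum(e)

    def cross_entropy_loss(logits, y):
        # J(x, y) = -1_y . log(softmax(l(x))): negative log-probability of the true class y
        return -np.log(softmax(logits)[y])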
By iterating on the original image with a momentum term, the embodiment of the present invention obtains images that can attack and adversarially train a deep neural network model, effectively relieves the coupling between white-box attack success rate and transferability, and greatly improves the attack success rate against single models.
Based on the above embodiments, as an optional embodiment, step S1 specifically comprises: using the Fast Gradient Sign Method, obtaining the loss of the first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss.
Step S2 specifically comprises: using the momentum term of each round, with the quotient of the preset noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is not equal to the true class of the original image.
It should be noted that this embodiment corresponds to generating adversarial images for untargeted attacks under the L∞ constraint.
For a first neural network model f(x), the goal is to generate an image for an untargeted attack under the L∞ constraint, to be used in adversarial training, i.e., such that f(x*) ≠ y and ‖x* − x‖_∞ ≤ ε,
where x denotes the original image, y denotes the true class of the original image x, x* denotes the image used in adversarial training, and ε is the preset noise threshold, i.e., the maximum allowed noise.
x* satisfies x* = argmax_x J(x, y), s.t. ‖x* − x‖_∞ ≤ ε,
where J denotes the loss function of the first neural network model.
Let t denote the iteration round, μ the decay factor of the momentum term, g_t the momentum term at round t, and x*_t the image obtained at round t.
The detailed procedure for generating an adversarial image for an untargeted attack under the L∞ constraint is:
Set the initial iterate x*_0 = x and the initial momentum term g_0 = 0;
For round t + 1:
g_{t+1} = μ·g_t + ∇_x J(x*_t, y) / ‖∇_x J(x*_t, y)‖_1,
x*_{t+1} = x*_t + α·sign(g_{t+1}),
where α is the step size of the iteration.
To ensure ‖x* − x‖_∞ ≤ ε, the step size is set to the quotient of the noise threshold and the number of iteration rounds: when T rounds are performed in total, α = ε/T.
When t + 1 = T, x*_T is taken as the generated image x* to be used in adversarial training.
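The following Python sketch (not part of the patent text) illustrates this untargeted L∞ procedure; grad_loss_fn stands for any routine that returns ∇_x J(x, y) for the first deep neural network model, and all names are assumptions made for illustration:

    import numpy as np

    def mi_fgsm(grad_loss_fn, x, y, eps, T, mu=1.0):
        # Untargeted L-infinity attack with a momentum term (a sketch of MI-FGSM).
        # grad_loss_fn(x, y): gradient of the loss J with respect to the image x.
        # eps: noise threshold; T: number of iteration rounds; mu: momentum decay factor.
        alpha = eps / T               # step size = noise threshold / number of rounds
        x_adv = x.astype(np.float64)  # x*_0 = x
        g = np.zeros_like(x_adv)      # g_0 = 0
        for _ in range(T):
            grad = grad_loss_fn(x_adv, y)
            # Accumulate the L1-normalized gradient into the momentum term
            g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
            # Ascend the loss along the sign of the momentum term
            x_adv = x_adv + alpha * np.sign(g)
        return x_adv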
In this embodiment, the original image is iterated with a momentum term, gradually adding noise during the iteration, to obtain images that can attack and adversarially train a deep neural network model; the coupling between white-box attack success rate and transferability is effectively relieved, and untargeted attacks achieve high success rates against both white-box and black-box models.
Based on the above embodiments, as an optional embodiment, step S1 specifically comprises: using the Fast Gradient Sign Method, obtaining the loss of the first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss.
Step S2 specifically comprises: using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is a preset first class, where the first class differs from the true class of the original image.
It should be noted that this embodiment corresponds to generating adversarial images for targeted attacks under the L∞ constraint.
For a first neural network model f(x), the goal is to generate an adversarial image for a targeted attack under the L∞ constraint, i.e., such that f(x*) = y* and ‖x* − x‖_∞ ≤ ε,
where x denotes the original image, y denotes the true class of the original image x, x* denotes the image used in adversarial training, y* denotes the preset first class with y* ≠ y, and ε is the preset noise threshold, i.e., the maximum allowed noise.
x* satisfies x* = argmin_x J(x, y*), s.t. ‖x* − x‖_∞ ≤ ε,
where J denotes the loss function of the first neural network model.
Let t denote the iteration round, μ the decay factor of the momentum term, g_t the momentum term at round t, and x*_t the image obtained at round t.
The detailed procedure for generating an adversarial image for a targeted attack under the L∞ constraint is:
Set the initial iterate x*_0 = x and the initial momentum term g_0 = 0;
For round t + 1:
g_{t+1} = μ·g_t + ∇_x J(x*_t, y*) / ‖∇_x J(x*_t, y*)‖_1,
x*_{t+1} = x*_t − α·sign(g_{t+1}),
where α is the step size of the iteration.
To ensure ‖x* − x‖_∞ ≤ ε, the step size is set to the quotient of the noise threshold and the number of iteration rounds: α = ε/T for T rounds in total.
When t + 1 = T, x*_T is taken as the generated adversarial image x*.
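A targeted variant (again an illustrative sketch under the same assumptions as the one above) only changes the class fed to the gradient routine and the sign of the step:

    import numpy as np

    def mi_fgsm_targeted(grad_loss_fn, x, y_target, eps, T, mu=1.0):
        # Targeted L-infinity variant: descend the loss J(x, y*) toward the preset first class y*
        alpha = eps / T
        x_adv = x.astype(np.float64)
        g = np.zeros_like(x_adv)
        for _ in range(T):
            grad = grad_loss_fn(x_adv, y_target)
            g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
            x_adv = x_adv - alpha * np.sign(g)  # minus sign: minimize the loss on y*
        return x_adv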
In this embodiment, the original image is iterated with a momentum term, gradually adding noise during the iteration, to obtain images that can attack and adversarially train a deep neural network model; the coupling between white-box attack success rate and transferability is effectively relieved, and targeted attacks achieve high success rates against both white-box and black-box models.
Based on the above embodiments, as an optional embodiment, step S1 specifically comprises: using the Fast Gradient Method, obtaining the loss of the first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss.
Step S2 specifically comprises: using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is not equal to the true class of the original image.
It should be noted that this embodiment corresponds to generating adversarial images for untargeted attacks under the L2 constraint.
For a first neural network model f(x), the goal is to generate an adversarial image for an untargeted attack under the L2 constraint, i.e., such that f(x*) ≠ y and ‖x* − x‖_2 ≤ ε,
where x denotes the original image, y denotes the true class of the original image x, x* denotes the image used in adversarial training, and ε is the preset noise threshold, i.e., the maximum allowed noise.
x* satisfies x* = argmax_x J(x, y), s.t. ‖x* − x‖_2 ≤ ε,
where J denotes the loss function of the first neural network model.
Let t denote the iteration round, μ the decay factor of the momentum term, g_t the momentum term at round t, and x*_t the image obtained at round t.
The detailed procedure for generating an adversarial image for an untargeted attack under the L2 constraint is:
Set the initial iterate x*_0 = x and the initial momentum term g_0 = 0;
For round t + 1:
g_{t+1} = μ·g_t + ∇_x J(x*_t, y) / ‖∇_x J(x*_t, y)‖_1,
x*_{t+1} = x*_t + α·g_{t+1} / ‖g_{t+1}‖_2,
where α is the step size of the iteration.
To ensure ‖x* − x‖_2 ≤ ε, the step size is set to the quotient of the noise threshold and the number of iteration rounds: α = ε/T for T rounds in total.
When t + 1 = T, x*_T is taken as the generated image x* to be used in adversarial training.
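Under the L2 constraint the update steps along the L2-normalized momentum term instead of its sign; a sketch under the same assumptions as the earlier ones:

    import numpy as np

    def mi_fgm(grad_loss_fn, x, y, eps, T, mu=1.0):
        # Untargeted L2 attack with a momentum term (a sketch of MI-FGM)
        alpha = eps / T
        x_adv = x.astype(np.float64)
        g = np.zeros_like(x_adv)
        for _ in range(T):
            grad = grad_loss_fn(x_adv, y)
            g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
            # Step along the L2-normalized momentum direction
            x_adv = x_adv + alpha * g / (np.linalg.norm(g.ravel()) + 1e-12)
        return x_adv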
In this embodiment, the original image is iterated with a momentum term, gradually adding noise during the iteration, to obtain images that can attack and adversarially train a deep neural network model; the coupling between white-box attack success rate and transferability is effectively relieved, and untargeted attacks achieve high success rates against both white-box and black-box models.
Based on the above embodiments, as an optional embodiment, step S1 specifically comprises: using the Fast Gradient Method, obtaining the loss of the first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss.
Step S2 specifically comprises: using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is a preset first class, where the first class differs from the true class of the original image.
It should be noted that this embodiment corresponds to generating adversarial images for targeted attacks under the L2 constraint.
For a first neural network model f(x), the goal is to generate an adversarial image for a targeted attack under the L2 constraint, i.e., such that f(x*) = y* and ‖x* − x‖_2 ≤ ε,
where x denotes the original image, y denotes the true class of the original image x, x* denotes the image used in adversarial training, y* denotes the preset first class with y* ≠ y, and ε is the preset noise threshold, i.e., the maximum allowed noise.
x* satisfies x* = argmin_x J(x, y*), s.t. ‖x* − x‖_2 ≤ ε,
where J denotes the loss function of the first neural network model.
Let t denote the iteration round, μ the decay factor of the momentum term, g_t the momentum term at round t, and x*_t the image obtained at round t.
The detailed procedure for generating an adversarial image for a targeted attack under the L2 constraint is:
Set the initial iterate x*_0 = x and the initial momentum term g_0 = 0;
For round t + 1:
g_{t+1} = μ·g_t + ∇_x J(x*_t, y*) / ‖∇_x J(x*_t, y*)‖_1,
x*_{t+1} = x*_t − α·g_{t+1} / ‖g_{t+1}‖_2,
where α is the step size of the iteration.
To ensure ‖x* − x‖_2 ≤ ε, the step size is set to the quotient of the noise threshold and the number of iteration rounds: α = ε/T for T rounds in total.
When t + 1 = T, x*_T is taken as the generated adversarial image x*.
In this embodiment, the original image is iterated with a momentum term, gradually adding noise during the iteration, to obtain images that can attack and adversarially train a deep neural network model; the coupling between white-box attack success rate and transferability is effectively relieved, and targeted attacks achieve high success rates against both white-box and black-box models.
Based on the above embodiments, before step S1 the method further comprises: step S0, combining multiple third deep neural network models into a neural network ensemble, taking the obtained ensemble model as the first deep neural network model, and determining the loss of the first deep neural network model from the unnormalized probabilities of the multiple third deep neural network models; wherein each third deep neural network model is a single model.
It should be noted that adversarial images generated by the method provided by the present invention can be used to attack and adversarially train ensemble models.
Specifically, before step S1, the first deep neural network model is generated as an ensemble model.
Multiple third deep neural network models are combined into a neural network ensemble, and the resulting ensemble model is taken as the first deep neural network model.
Neural network ensemble methods mainly include voting-based methods and statistics-based methods.
Voting-based methods include absolute-majority voting and plurality voting.
Statistics-based methods include simple averaging and weighted averaging.
Based on the above embodiment, the loss J(x, y) of the first deep neural network model is
J(x, y) = −1_y · log(softmax(l(x))),
where x denotes the original image, y denotes the true class of x, 1_y denotes the one-hot encoding of y, and l(x) is the weighted average of the unnormalized probabilities of the multiple third deep neural network models;
where l_k(x) is the unnormalized probability of the k-th third deep neural network model, w_k is the weight of the k-th third deep neural network model, w_k ≥ 0, and Σ_{k=1}^{K} w_k = 1.
Specifically, as a preferred embodiment, in step S0, with w_k as the weight of the k-th third deep neural network model, the first deep neural network model is generated as an ensemble model by weighted averaging, where k = 1, 2, ..., K; K is the number of third deep neural network models; w_k ≥ 0 and Σ_{k=1}^{K} w_k = 1.
After the first deep neural network model is obtained as the ensemble model, the unnormalized probabilities of the K third deep neural network models are weighted and averaged.
The weighted average of the unnormalized probabilities of the K third deep neural network models is l(x) = Σ_{k=1}^{K} w_k · l_k(x),
where l_k(x), the unnormalized probability of the k-th third deep neural network model, is the input to the last softmax layer of the k-th third deep neural network model.
After the weighted average l(x) of the unnormalized probabilities of the K third deep neural network models is obtained, the loss J(x, y) of the first deep neural network model is obtained from l(x):
J(x, y) = −1_y · log(softmax(l(x))),
where y denotes the true class of the original image x,
and softmax denotes the normalization function.
The predicted probabilities or the losses of the K third deep neural network models may also be weighted and averaged; from the weighted average of the predicted probabilities or losses of the K third deep neural network models, the loss of the first deep neural network model, i.e., the ensemble model, is generated.
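As an illustration of fusing the ensemble in unnormalized probabilities (a sketch, not the patent's reference implementation; logits_fns and weights are assumed names):

    import numpy as np

    def ensemble_loss(logits_fns, weights, x, y):
        # Cross entropy of an ensemble fused in logits.
        # logits_fns: list of functions, each returning l_k(x) for one single model.
        # weights: the w_k, with w_k >= 0 and sum(w_k) == 1.
        l = sum(w * f(x) for w, f in zip(weights, logits_fns))  # l(x) = sum_k w_k * l_k(x)
        z = l - np.max(l)
        log_probs = z - np.log(np.sum(np.exp(z)))  # log(softmax(l(x))), numerically stable
        return -log_probs[y]                       # J(x, y) = -1_y . log(softmax(l(x)))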
From the loss J(x, y) of the first deep neural network model, the methods in the above embodiments can respectively generate adversarial images for untargeted attacks under the L∞ constraint, adversarial images for targeted attacks under the L∞ constraint, adversarial images for untargeted attacks under the L2 constraint, and adversarial images for targeted attacks under the L2 constraint.
All four kinds of generated adversarial images can be used to attack the first deep neural network model, i.e., the ensemble model, and the ensemble model is trained through such attacks.
By iterating on the original image with a momentum term, the embodiment of the present invention obtains images that can attack and adversarially train a deep neural network model, effectively relieves the coupling between white-box attack success rate and transferability, and improves the attack success rate against ensemble models; it achieves high attack success rates against ensemble models, can train various deep neural network models, and is applicable to sample sets of various original images, making deep neural network models more robust through adversarial training and thereby improving the accuracy of image classification with deep neural network models.
The method for generating adversarial images provided by the present invention is illustrated below with examples.
Seven deep neural network models are chosen as subjects: Inception V3 (Inc-v3), Inception V4 (Inc-v4), Inception ResNet V2 (IncRes-v2), ResNet v2-152 (Res-152), Inc-v3ens3, Inc-v3ens4, and IncRes-v2ens. These models are trained on the large-scale image dataset ImageNet; the last three are ensemble models with some defense capability.
1000 images from the ImageNet validation set are chosen as original images.
Example 1: based on the Inc-v3, Inc-v4, IncRes-v2, and Res-152 models, generate adversarial images for untargeted attacks under the L∞ constraint.
The method provided by the present invention may be called the Momentum Iterative Fast Gradient Sign Method (MI-FGSM). In the process of generating the images used in adversarial training, the noise threshold is ε = 16, the number of iteration rounds is T = 10, and the decay factor of the momentum term is μ = 1.0.
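With the mi_fgsm sketch from the detailed description above, this configuration would be invoked roughly as follows (model_grad is a hypothetical gradient routine for one of the models above; pixel values are assumed to lie on a 0-255 scale so that ε = 16 applies directly):

    # Hypothetical usage with the settings of Example 1
    x_adv = mi_fgsm(model_grad, x, y, eps=16, T=10, mu=1.0)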
MI-FGSM is compared with the Fast Gradient Sign Method (FGSM) and the Iterative Fast Gradient Sign Method (I-FGSM), both without a momentum term.
MI-FGSM, FGSM, and I-FGSM may also be called attack methods against deep neural networks.
The images for untargeted attacks under the L∞ constraint, generated based on the Inc-v3, Inc-v4, IncRes-v2, and Res-152 models, are used to attack the Inc-v3, Inc-v4, IncRes-v2, Res-152, Inc-v3ens3, Inc-v3ens4, and IncRes-v2ens models; the resulting attack success rates are shown in Table 1.
Table 1. Attack success rates of adversarial images for untargeted attacks under the L∞ constraint
It can be seen that, for both single models and ensemble models, the attack success rate of the adversarial images generated by the MI-FGSM method provided by the present invention is substantially better than that of the FGSM and I-FGSM methods.
Example 2: based on the Inc-v3, Inc-v4, IncRes-v2, and Res-152 models, generate adversarial images for untargeted attacks under the L2 constraint.
The method provided by the present invention may be called the Momentum Iterative Fast Gradient Method (MI-FGM). In the process of generating the adversarial images, the noise threshold is ε = 16·√N, where N is the dimensionality of the original image; the number of iteration rounds is T = 10, and the decay factor of the momentum term is μ = 1.0.
MI-FGM is compared with the Fast Gradient Method (FGM) and the Iterative Fast Gradient Method (I-FGM), both without a momentum term.
MI-FGM, FGM, and I-FGM may also be called attack methods against deep neural networks.
The images for untargeted attacks under the L2 constraint, generated based on the Inc-v3, Inc-v4, IncRes-v2, and Res-152 models, are used to attack the Inc-v3, Inc-v4, IncRes-v2, Res-152, Inc-v3ens3, Inc-v3ens4, and IncRes-v2ens models; the resulting attack success rates are shown in Table 2.
Table 2. Attack success rates of adversarial images for untargeted attacks under the L2 constraint
It can be seen that, for both single models and ensemble models, the attack success rate of the adversarial images generated by the MI-FGM method provided by the present invention is substantially better than that of the FGM and I-FGM methods.
Example 3: any three of the Inc-v3, Inc-v4, IncRes-v2, and Res-152 models are combined into a neural network ensemble to obtain an ensemble model, with equal weights for the constituent models; the model not included in the ensemble serves as the black-box model corresponding to that ensemble. The loss of the ensemble model is obtained from the unnormalized probabilities, the predicted probabilities, and the losses respectively, and adversarial images for untargeted attacks under the L∞ constraint are generated. In the process of generating the adversarial images, the noise threshold is ε = 16, the number of iteration rounds is T = 20, and the decay factor of the momentum term is μ = 1.0.
With Inc-v3, Inc-v4, IncRes-v2, and Res-152 in turn as the black-box model, the ensemble model and the black-box model are attacked with the correspondingly generated images; the resulting attack success rates are shown in Table 3.
Table 3. Attack success rates of adversarial images for untargeted attacks under the L∞ constraint
It can be seen that, for both black-box models and ensemble models, the untargeted attack success rate of the adversarial images generated by the method provided by the present invention is substantially better than that of the corresponding methods without a momentum term.
Example 4: any six of the Inc-v3, Inc-v4, IncRes-v2, Res-152, Inc-v3ens3, Inc-v3ens4, and IncRes-v2ens models are combined into a neural network ensemble to obtain an ensemble model, with equal weights for the constituent models; the model not included in the ensemble serves as the black-box model corresponding to that ensemble. The loss of the ensemble model is obtained from the unnormalized probabilities, and adversarial images for targeted attacks under the L∞ constraint are generated. In the process of generating the adversarial images, the noise threshold is ε = 16, the number of iteration rounds is T = 20, and the decay factor of the momentum term is μ = 1.0.
With Inc-v3, Inc-v4, IncRes-v2, Res-152, Inc-v3ens3, Inc-v3ens4, and IncRes-v2ens in turn as the black-box model, the ensemble model and the black-box model are attacked with the correspondingly generated images; the resulting attack success rates are shown in Table 4.
Table 4. Attack success rates of adversarial images for targeted attacks under the L∞ constraint
It can be seen that, for both black-box models and ensemble models, the targeted attack success rate of the adversarial images generated by the method provided by the present invention is substantially better than that of the corresponding methods without a momentum term.
Finally, the above embodiments are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

  1. A method for generating adversarial images, characterized in that it comprises:
    S1: using a gradient algorithm, obtaining the loss of a first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss;
    S2: using the momentum term of the current round, generating the image obtained in the current round from the image obtained in the previous round, until the iteration reaches a preset number of iteration rounds, and taking the image obtained in the last round as the adversarial image;
    wherein the momentum term of the first round is generated from the loss of the first deep neural network model obtained on the original image.
  2. The method for generating adversarial images according to claim 1, characterized in that
    when the first deep neural network model is a single model, the loss of the first deep neural network model is the cross entropy of the first deep neural network model.
  3. The method for generating adversarial images according to claim 1, characterized in that before step S1 the method further comprises:
    S0: combining multiple third deep neural network models into a neural network ensemble, taking the obtained ensemble model as the first deep neural network model, and determining the loss of the first deep neural network model from the unnormalized probabilities of the multiple third deep neural network models;
    wherein each third deep neural network model is a single model.
  4. The method for generating adversarial images according to claim 3, characterized in that the loss J(x, y) of the first deep neural network model is
    J(x, y) = −1_y · log(softmax(l(x))),
    wherein x denotes the original image, y denotes the true class of x, and l(x) is the weighted average of the unnormalized probabilities of the multiple third deep neural network models;
    wherein l_k(x) is the unnormalized probability of the k-th third deep neural network model, w_k is the weight of the k-th third deep neural network model, w_k ≥ 0, and Σ_{k=1}^{K} w_k = 1.
  5. The method for generating adversarial images according to any one of claims 1 to 4, characterized in that step S1 specifically comprises:
    using the Fast Gradient Sign Method, obtaining the loss of the first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss.
  6. The method for generating adversarial images according to claim 5, characterized in that step S2 specifically comprises:
    using the momentum term of each round, with the quotient of a preset noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is not equal to the true class of the original image.
  7. The method for generating adversarial images according to claim 5, characterized in that step S2 specifically comprises:
    using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is a preset first class;
    wherein the first class differs from the true class of the original image.
  8. The method for generating adversarial images according to any one of claims 1 to 4, characterized in that step S1 specifically comprises:
    using the Fast Gradient Method, obtaining the loss of the first deep neural network model from the image obtained in the previous iteration round, and generating the momentum term of the current round from the loss.
  9. The method for generating adversarial images according to claim 8, characterized in that step S2 specifically comprises:
    using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is not equal to the true class of the original image.
  10. The method for generating adversarial images according to claim 8, characterized in that step S2 specifically comprises:
    using the momentum term of each round, with the quotient of the noise threshold and the number of iteration rounds as the step size, iterating on the original image for that number of rounds to obtain the adversarial image, such that after the adversarial image is input into the first neural network model, the output of the first neural network model is a preset first class;
    wherein the first class differs from the true class of the original image.
CN201711487948.4A 2017-12-30 2017-12-30 A method for generating adversarial images Pending CN108257116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711487948.4A CN108257116A (en) 2017-12-30 2017-12-30 A method for generating adversarial images


Publications (1)

Publication Number Publication Date
CN108257116A 2018-07-06

Family

ID=62725307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711487948.4A Pending CN108257116A (en) 2017-12-30 2017-12-30 A method for generating adversarial images

Country Status (1)

Country Link
CN (1) CN108257116A (en)

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034632A (en) * 2018-08-03 2018-12-18 哈尔滨工程大学 A kind of deep learning model safety methods of risk assessment based on to resisting sample
CN109190760A (en) * 2018-08-06 2019-01-11 北京市商汤科技开发有限公司 Neural network training method and device and environmental treatment method and device
CN109492582A (en) * 2018-11-09 2019-03-19 杭州安恒信息技术股份有限公司 A kind of image recognition attack method based on algorithm confrontation sexual assault
CN109523018A (en) * 2019-01-08 2019-03-26 重庆邮电大学 A kind of picture classification method based on depth migration study
CN109599109A (en) * 2018-12-26 2019-04-09 浙江大学 For the confrontation audio generation method and system of whitepack scene
CN109902723A (en) * 2019-01-31 2019-06-18 北京市商汤科技开发有限公司 Image processing method and device
CN109948663A (en) * 2019-02-27 2019-06-28 天津大学 A kind of confrontation attack method of the adaptive step based on model extraction
CN109992931A (en) * 2019-02-27 2019-07-09 天津大学 A kind of transportable non-black box attack countercheck based on noise compression
CN110020593A (en) * 2019-02-03 2019-07-16 清华大学 Information processing method and device, medium and calculating equipment
CN110021049A (en) * 2019-03-29 2019-07-16 武汉大学 A kind of highly concealed type antagonism image attack method based on space constraint towards deep neural network
CN110222578A (en) * 2019-05-08 2019-09-10 腾讯科技(深圳)有限公司 The method and apparatus of confrontation test picture talk system
CN110222831A (en) * 2019-06-13 2019-09-10 百度在线网络技术(北京)有限公司 Robustness appraisal procedure, device and the storage medium of deep learning model
CN110245598A (en) * 2019-06-06 2019-09-17 北京瑞莱智慧科技有限公司 It fights sample generating method, device, medium and calculates equipment
CN110276377A (en) * 2019-05-17 2019-09-24 杭州电子科技大学 A kind of confrontation sample generating method based on Bayes's optimization
CN110379418A (en) * 2019-06-28 2019-10-25 西安交通大学 A kind of voice confrontation sample generating method
CN110633570A (en) * 2019-07-24 2019-12-31 浙江工业大学 Black box attack defense method for malicious software assembly format detection model
CN110851835A (en) * 2019-09-23 2020-02-28 平安科技(深圳)有限公司 Image model detection method and device, electronic equipment and storage medium
CN110941824A (en) * 2019-12-12 2020-03-31 支付宝(杭州)信息技术有限公司 Method and system for enhancing anti-attack capability of model based on confrontation sample
CN111104982A (en) * 2019-12-20 2020-05-05 电子科技大学 Label-independent cross-task confrontation sample generation method
CN111177757A (en) * 2019-12-27 2020-05-19 支付宝(杭州)信息技术有限公司 Processing method and device for protecting privacy information in picture
CN111340180A (en) * 2020-02-10 2020-06-26 中国人民解放军国防科技大学 Countermeasure sample generation method and device for designated label, electronic equipment and medium
CN111476228A (en) * 2020-04-07 2020-07-31 海南阿凡题科技有限公司 White-box confrontation sample generation method for scene character recognition model
CN111651762A (en) * 2020-04-21 2020-09-11 浙江大学 Convolutional neural network-based PE (provider edge) malicious software detection method
CN111932646A (en) * 2020-07-16 2020-11-13 电子科技大学 Image processing method for resisting attack
CN109376556B (en) * 2018-12-17 2020-12-18 华中科技大学 Attack method for EEG brain-computer interface based on convolutional neural network
CN112329931A (en) * 2021-01-04 2021-02-05 北京智源人工智能研究院 Countermeasure sample generation method and device based on proxy model
CN112488172A (en) * 2020-11-25 2021-03-12 北京有竹居网络技术有限公司 Method, device, readable medium and electronic equipment for resisting attack
CN112750067A (en) * 2019-10-29 2021-05-04 爱思开海力士有限公司 Image processing system and training method thereof
CN113302605A (en) * 2019-01-16 2021-08-24 谷歌有限责任公司 Robust and data efficient black box optimization
CN113313132A (en) * 2021-07-30 2021-08-27 中国科学院自动化研究所 Determination method and device for confrontation sample image, electronic equipment and storage medium
CN113469330A (en) * 2021-06-25 2021-10-01 中国人民解放军陆军工程大学 Method for enhancing sample mobility resistance by bipolar network corrosion
CN113487545A (en) * 2021-06-24 2021-10-08 广州玖的数码科技有限公司 Method for generating disturbance image facing to attitude estimation depth neural network
CN113537494A (en) * 2021-07-23 2021-10-22 江南大学 Image countermeasure sample generation method based on black box scene
CN113591975A (en) * 2021-07-29 2021-11-02 中国人民解放军战略支援部队信息工程大学 Countermeasure sample generation method and system based on Adam algorithm
CN114005168A (en) * 2021-12-31 2022-02-01 北京瑞莱智慧科技有限公司 Physical world confrontation sample generation method and device, electronic equipment and storage medium
US11288408B2 (en) 2019-10-14 2022-03-29 International Business Machines Corporation Providing adversarial protection for electronic screen displays
CN114363509A (en) * 2021-12-07 2022-04-15 浙江大学 Triggerable countermeasure patch generation method based on sound wave triggering
CN114359672A (en) * 2022-01-06 2022-04-15 云南大学 Adam-based iterative rapid gradient descent anti-attack method
CN116543268A (en) * 2023-07-04 2023-08-04 西南石油大学 Channel enhancement joint transformation-based countermeasure sample generation method and terminal


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103590A (en) * 2017-03-22 2017-08-29 华南理工大学 A kind of image for resisting generation network based on depth convolution reflects minimizing technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yinpeng Dong et al., "Discovering Adversarial Examples with Momentum", arXiv *
Yinpeng Dong et al., "Boosting Adversarial Attacks with Momentum", arXiv *

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034632A (en) * 2018-08-03 2018-12-18 哈尔滨工程大学 A kind of deep learning model safety methods of risk assessment based on to resisting sample
CN109190760B (en) * 2018-08-06 2021-11-30 北京市商汤科技开发有限公司 Neural network training method and device and environment processing method and device
CN109190760A (en) * 2018-08-06 2019-01-11 北京市商汤科技开发有限公司 Neural network training method and device and environmental treatment method and device
CN109492582A (en) * 2018-11-09 2019-03-19 杭州安恒信息技术股份有限公司 A kind of image recognition attack method based on algorithm confrontation sexual assault
CN109492582B (en) * 2018-11-09 2022-02-11 杭州安恒信息技术股份有限公司 Image recognition attack method based on algorithm adversarial attack
CN109376556B (en) * 2018-12-17 2020-12-18 华中科技大学 Attack method for EEG brain-computer interface based on convolutional neural network
CN109599109A (en) * 2018-12-26 2019-04-09 浙江大学 Adversarial audio generation method and system for white-box scenes
CN109599109B (en) * 2018-12-26 2022-03-25 浙江大学 Adversarial audio generation method and system for white-box scenes
CN109523018A (en) * 2019-01-08 2019-03-26 重庆邮电大学 Image classification method based on deep transfer learning
CN109523018B (en) * 2019-01-08 2022-10-18 重庆邮电大学 Image classification method based on deep transfer learning
CN113302605A (en) * 2019-01-16 2021-08-24 谷歌有限责任公司 Robust and data-efficient black-box optimization
CN109902723A (en) * 2019-01-31 2019-06-18 北京市商汤科技开发有限公司 Image processing method and device
CN110020593A (en) * 2019-02-03 2019-07-16 清华大学 Information processing method and device, medium and computing equipment
CN110020593B (en) * 2019-02-03 2021-04-13 清华大学 Information processing method and device, medium and computing equipment
CN109948663B (en) * 2019-02-27 2022-03-15 天津大学 Step-size adaptive adversarial attack method based on model extraction
CN109992931B (en) * 2019-02-27 2023-05-30 天津大学 Noise compression-based transferable non-black-box adversarial attack method
CN109948663A (en) * 2019-02-27 2019-06-28 天津大学 Step-size adaptive adversarial attack method based on model extraction
CN109992931A (en) * 2019-02-27 2019-07-09 天津大学 Transferable non-black-box adversarial attack method based on noise compression
CN110021049B (en) * 2019-03-29 2022-08-30 武汉大学 Deep neural network-oriented high-concealment adversarial image attack method based on spatial constraints
CN110021049A (en) * 2019-03-29 2019-07-16 武汉大学 High-concealment adversarial image attack method for deep neural networks based on spatial constraints
CN110222578A (en) * 2019-05-08 2019-09-10 腾讯科技(深圳)有限公司 Method and apparatus for adversarial testing of an image captioning system
CN110222578B (en) * 2019-05-08 2022-12-27 腾讯科技(深圳)有限公司 Method and apparatus for adversarial testing of an image captioning system
CN110276377A (en) * 2019-05-17 2019-09-24 杭州电子科技大学 Adversarial sample generation method based on Bayesian optimization
CN110276377B (en) * 2019-05-17 2021-04-06 杭州电子科技大学 Adversarial sample generation method based on Bayesian optimization
CN110245598A (en) * 2019-06-06 2019-09-17 北京瑞莱智慧科技有限公司 Adversarial sample generation method, device, medium and computing equipment
CN110222831A (en) * 2019-06-13 2019-09-10 百度在线网络技术(北京)有限公司 Robustness evaluation method, device and storage medium for deep learning models
CN110379418A (en) * 2019-06-28 2019-10-25 西安交通大学 Voice adversarial sample generation method
CN110379418B (en) * 2019-06-28 2021-08-13 西安交通大学 Voice adversarial sample generation method
CN110633570A (en) * 2019-07-24 2019-12-31 浙江工业大学 Black-box attack defense method for malware assembly format detection models
CN110633570B (en) * 2019-07-24 2021-05-11 浙江工业大学 Black-box attack defense method for malware assembly format detection models
CN110851835A (en) * 2019-09-23 2020-02-28 平安科技(深圳)有限公司 Image model detection method and device, electronic equipment and storage medium
US11288408B2 (en) 2019-10-14 2022-03-29 International Business Machines Corporation Providing adversarial protection for electronic screen displays
CN112750067B (en) * 2019-10-29 2024-05-07 爱思开海力士有限公司 Image processing system and training method thereof
CN112750067A (en) * 2019-10-29 2021-05-04 爱思开海力士有限公司 Image processing system and training method thereof
CN110941824B (en) * 2019-12-12 2022-01-28 支付宝(杭州)信息技术有限公司 Method and system for enhancing a model's attack resistance based on adversarial samples
CN110941824A (en) * 2019-12-12 2020-03-31 支付宝(杭州)信息技术有限公司 Method and system for enhancing a model's attack resistance based on adversarial samples
CN111104982B (en) * 2019-12-20 2021-09-24 电子科技大学 Label-independent cross-task adversarial sample generation method
CN111104982A (en) * 2019-12-20 2020-05-05 电子科技大学 Label-independent cross-task adversarial sample generation method
CN111177757A (en) * 2019-12-27 2020-05-19 支付宝(杭州)信息技术有限公司 Processing method and device for protecting privacy information in picture
CN111340180B (en) * 2020-02-10 2021-10-08 中国人民解放军国防科技大学 Adversarial sample generation method and device for a specified label, electronic equipment and medium
CN111340180A (en) * 2020-02-10 2020-06-26 中国人民解放军国防科技大学 Adversarial sample generation method and device for a specified label, electronic equipment and medium
CN111476228A (en) * 2020-04-07 2020-07-31 海南阿凡题科技有限公司 White-box adversarial sample generation method for scene text recognition models
CN111651762A (en) * 2020-04-21 2020-09-11 浙江大学 Convolutional neural network-based PE (Portable Executable) malware detection method
CN111932646B (en) * 2020-07-16 2022-06-21 电子科技大学 Image processing method for adversarial attacks
CN111932646A (en) * 2020-07-16 2020-11-13 电子科技大学 Image processing method for adversarial attacks
CN112488172A (en) * 2020-11-25 2021-03-12 北京有竹居网络技术有限公司 Adversarial attack method, device, readable medium and electronic equipment
CN112329931A (en) * 2021-01-04 2021-02-05 北京智源人工智能研究院 Adversarial sample generation method and device based on a proxy model
CN113487545A (en) * 2021-06-24 2021-10-08 广州玖的数码科技有限公司 Method for generating perturbed images for pose estimation deep neural networks
CN113469330A (en) * 2021-06-25 2021-10-01 中国人民解放军陆军工程大学 Method for enhancing adversarial sample transferability through bipolar network erosion
CN113469330B (en) * 2021-06-25 2022-12-02 中国人民解放军陆军工程大学 Method for enhancing adversarial sample transferability through bipolar network erosion
CN113537494A (en) * 2021-07-23 2021-10-22 江南大学 Image adversarial sample generation method for black-box scenarios
CN113591975A (en) * 2021-07-29 2021-11-02 中国人民解放军战略支援部队信息工程大学 Adversarial sample generation method and system based on the Adam algorithm
CN113313132A (en) * 2021-07-30 2021-08-27 中国科学院自动化研究所 Method and device for determining adversarial sample images, electronic equipment and storage medium
CN114363509A (en) * 2021-12-07 2022-04-15 浙江大学 Triggerable adversarial patch generation method based on sound wave triggering
CN114363509B (en) * 2021-12-07 2022-09-20 浙江大学 Triggerable adversarial patch generation method based on sound wave triggering
CN114005168A (en) * 2021-12-31 2022-02-01 北京瑞莱智慧科技有限公司 Physical-world adversarial sample generation method and device, electronic equipment and storage medium
CN114359672A (en) * 2022-01-06 2022-04-15 云南大学 Adam-based iterative fast gradient descent adversarial attack method
CN114359672B (en) * 2022-01-06 2023-04-07 云南大学 Adam-based iterative fast gradient descent adversarial attack method
CN116543268A (en) * 2023-07-04 2023-08-04 西南石油大学 Adversarial sample generation method and terminal based on channel enhancement and joint transformation
CN116543268B (en) * 2023-07-04 2023-09-15 西南石油大学 Adversarial sample generation method and terminal based on channel enhancement and joint transformation

Similar Documents

Publication Publication Date Title
CN108257116A (en) A kind of method for generating confrontation image
CN108875807B (en) Image description method based on multiple attention and multiple scales
Qian et al. Learning and transferring representations for image steganalysis using convolutional neural network
CN105512289B (en) Image search method based on deep learning and hashing
CN108230278B (en) Image raindrop removal method based on generative adversarial network
CN109948663A (en) Step-size adaptive adversarial attack method based on model extraction
CN105279554B (en) Training method and device for deep neural network based on hash coding layer
CN110490227B (en) Feature transformation-based few-shot image classification method
CN106326288A (en) Image search method and apparatus
CN110362997B (en) Malicious URL (Uniform Resource Locator) oversampling method based on generative adversarial network
CN108765512B (en) Adversarial image generation method based on multi-level features
CN104517274B (en) Face portrait synthesis method based on greedy search
CN108121975A (en) Face recognition method combining original data and generated data
CN107729311A (en) Chinese text feature extraction method fusing text tone
CN106339753A (en) Method for effectively enhancing robustness of convolutional neural network
CN113627543B (en) Adversarial attack detection method
CN109978021A (en) Two-stream video generation method based on different feature spaces of text
CN109960755B (en) User privacy protection method based on dynamic iterative fast gradient
CN109815496A (en) Carrier-generative text steganography method and device based on capacity-adaptive shrinking mechanism
CN111047054A (en) Adversarial sample defense method based on two-stage adversarial knowledge transfer
CN111222583B (en) Image steganalysis method based on adversarial training and critical path extraction
CN109740057A (en) Enhanced neural network and information recommendation method based on knowledge extraction
CN112597993A (en) Adversarial defense model training method based on patch detection
Yang et al. Adversarial attacks on brain-inspired hyperdimensional computing-based classifiers
Tripathi et al. Real time object detection using CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180706