CN108446765A - Multi-model composite defense method against adversarial attacks for deep learning - Google Patents
Multi-model composite defense method against adversarial attacks for deep learning Download PDF Info
- Publication number
- CN108446765A (application CN201810141253.9A / CN201810141253A)
- Authority
- CN
- China
- Prior art keywords
- model
- attack
- sample
- deep learning
- category
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/96—Management of image or video recognition tasks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
A multi-model composite defense method against adversarial attacks for deep learning, comprising the following steps: 1) perform unified modeling of gradient-based attacks and propose the ρ-loss model; 2) following the design of the unified model, for adversarial attacks on the target model f_pre(x), divide the basic expression forms of the attack into two classes according to the results of adversarial example generation; 3) analyze the parametric evolution of the model, and optimize the parameters of the adversarial example generation model ρ-loss and the search step size of the perturbation solving model; 4) to cope with the opacity of black-box attacks, design an experiment based on the AdaBoost concept: integrate multiple substitute models of different types that realize the same task, and, by training a generator to attack the integrated model with strong defense performance, design a multi-model composite defense method with stronger defense capability, proposing a multi-model cooperative attack-detection scheme with optimally allocated weights. The invention offers higher security and effectively defends deep learning models against adversarial attacks.
Description
Technical field
The invention belongs to the security field of machine learning methods in artificial intelligence. In view of the threat that adversarial examples pose to deep learning methods in current machine learning, a multi-model composite defense method is proposed that effectively improves their security.
Background technology
Relying on their excellent learning performance, deep neural network models are widely used in the real world, including computer vision, natural language processing, and bioinformatics analysis. In the field of computer vision in particular, deep learning models automatically recognize faces and complete automatic image-understanding tasks such as road-sign recognition for autonomous driving; deep learning is therefore one of the key technologies behind the widespread success of face recognition and autonomous driving.
As the application scope of deep learning keeps expanding, the fragility it exhibits when facing adversarial examples urgently needs to be addressed. Several explanations of the essence of adversarial examples have been proposed: the linear-model hypothesis, highly nonlinear structure, subspace-crossing theory, boundary interference, prediction uncertainty, and evolutionary uncertainty. Based on these different understandings of the essence of adversarial attacks, researchers have proposed many defense methods. According to the effect achieved, adversarial defenses can be divided into complete defense and detection only. "Complete defense" means that, when facing adversarial examples, the target model is fully robust to the influence of the perturbation and correctly predicts the original class of the sample; the purpose of "detection only" is to find potentially dangerous adversarial examples and exclude them from the processing scope.
From the above introduction to adversarial example attacks and attack defenses, it can be seen that attack and defense for deep learning models are like a soldier's spear and shield: the ultimate purpose of an attack is not to make a deep-learning-based system collapse, but to find and repair the vulnerabilities present in the current system and improve the defense performance of the deep learning system.
An adversarial attack is a novel attack pattern that uses generated adversarial examples to attack a deep-learning-based target model and disorder its function. Many methods for generating adversarial examples currently exist, but an essential understanding of existing methods and an effective evaluation of their relative merits are lacking.
Invention content
To overcome the shortcomings of existing adversarial defense approaches, whose security is relatively low and which cannot defend deep learning models against adversarial attacks, the present invention provides a multi-model composite defense method with higher security that effectively defends deep learning models against adversarial attacks.
The technical solution adopted by the present invention to solve the technical problems is:
A multi-model composite defense method against adversarial attacks for deep learning, the method comprising the following steps:
1) Perform unified modeling of gradient-based attacks and propose the ρ-loss model; the process is as follows:
1.1) Unify all gradient-based adversarial example generation approaches into the optimization model ρ-loss, defined as:
arg min λ1·||ρ||_p + λ2·Loss(x_adv, f_pre(x_adv))  s.t.  ρ = x_nor − x_adv  (1)
In formula (1), ρ denotes the perturbation between the adversarial example x_adv and the normal sample x_nor; f_pre(·) denotes the prediction output of the deep learning model; ||·||_p denotes the p-norm of the perturbation; Loss(·) denotes the loss function; λ1 and λ2 are scale parameters used to balance the orders of magnitude of the perturbation norm and the loss function, with value range [10^-1, 10]; their signs are switched according to the optimization objective;
1.2) Analyze whether the final effect of the generated adversarial example is effective, i.e. whether the perturbation ρ between it and the normal sample makes the deep learning model realize the attacker's desired output f_pre(x_adv);
1.3) For solving the adversarial example x_adv, the unified perturbation solving model ρ-iter is expressed as:
x_adv^(t+1) = x_adv^(t) + (ε/T)·f_ρ(κ_dir, ∇_x Loss(θ, x_adv^(t), ŷ_adv)),  t = 0, 1, …, T−1  (2)
In formula (2), ε denotes the feasible interval of the perturbation search; T denotes the number of iterations of the perturbation calculation; f_ρ(·,·) denotes the search direction of the perturbation; θ denotes the parameters trained by the deep learning model on the input samples; ŷ_adv denotes the predicted class of the adversarial example x_adv; Loss(·,·) denotes the loss function of the neural network model; ∇_x Loss(θ, x_adv, ŷ_adv) denotes the gradient direction of the loss function with respect to x; κ_dir denotes the correction factor of the gradient direction, used to achieve a higher-performance search;
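As an illustration only, the iterative perturbation solver of formula (2) can be sketched for a tiny linear-softmax model; everything here (the model, variable names, and the choice of sign(·) as the search direction) is our assumption for illustration, not part of the patent:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_grad_x(W, x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x
    # for a linear softmax model f_pre(x) = softmax(W @ x).
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return W.T @ (p - onehot)

def rho_iter(W, x_nor, y_nor, eps=0.5, T=10, mu=0.0):
    # Sketch of formula (2): x_{t+1} = x_t + (eps/T) * sign(kappa_dir + grad),
    # with the momentum-style correction kappa_dir = mu * g_t.
    x_adv = x_nor.copy()
    g_t = np.zeros_like(x_nor)
    for _ in range(T):
        grad = ce_grad_x(W, x_adv, y_nor)
        g_t = mu * g_t + grad                             # mu = 0: plain iterative method
        x_adv = x_adv + (eps / T) * np.sign(g_t)          # ascend the loss (untargeted)
        x_adv = np.clip(x_adv, x_nor - eps, x_nor + eps)  # stay in the feasible interval
    return x_adv
```

With T = 1 and mu = 0 this reduces to a one-step attack; T > 1 gives a simple iterative method with step ε/T; mu > 0 adds momentum memory, mirroring the parameter special cases discussed in step 3).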
2) Following the design of the unified model, for adversarial attacks on the target model f_pre(x), divide the basic expression forms of the attack into two classes according to the results of adversarial example generation;
2.1) First class I_att: search the solution space R^m for an adversarial example x_adv ∈ X^m similar to the normal sample x_nor; inputting it into the target model yields the prediction result ŷ_adv, realizing ŷ_adv ≠ ŷ_nor; at the same time define the probability function Pr(·);
The optimization objective on the prediction result is expressed as max Pr(ŷ_adv ≠ ŷ_nor), i.e. maximizing the probability that the predicted class of the adversarial example differs from that of the normal sample. The optimization objective here is minimal perturbation and maximal loss, i.e. λ1 > 0, λ2 < 0, realizing ŷ_adv ≠ ŷ_nor. The output class ŷ_adv of the adversarial example has no specific directionality, so this is called an untargeted attack;
2.2) Second class II_att: search the solution space R^m for an adversarial example x_adv totally different from the normal sample x_nor; inputting it into the target model yields the prediction result ŷ_adv, realizing a clear prediction result with high confidence;
For an untargeted attack, a clear prediction result with high confidence is interpreted as the probability that x_adv is predicted as class ŷ_adv being far larger than the probability of being predicted as any other class; the mathematical expression is Pr(ŷ_adv) >> Pr(y ≠ ŷ_adv);
3) Analyze the parametric evolution of the model; optimize the parameters of the adversarial example generation model ρ-loss and the search step size of the perturbation solving model;
4) To cope with the opacity of black-box attacks, design an experiment based on the AdaBoost concept: integrate multiple substitute models of different types that realize the same task; by training a generator to attack the integrated model with strong defense performance, design a multi-model composite defense method with stronger defense capability, and propose a multi-model cooperative attack-detection scheme with optimally allocated weights.
Further, in step 1), the loss function of the deep-learning-based target model f_pre(·) is defined as the cross entropy, which measures how close the predicted class ŷ_nor of a sample x_nor under the target model is to the sample's true class y_nor:
Loss(x_nor) = −Σ_k y_nor,k · log f_pre(x_nor)_k  (3)
For the deep-learning-based target model f_pre(x_nor): its function is to predict the output class ŷ_nor from the input sample x_nor, and to make the predicted output class ŷ_nor identical to the true class y_nor of the input sample, i.e. ŷ_nor = y_nor.
Further, in step 2.1), for the attacker, if the desired output is y* and ŷ_adv = y* is realized, this is a targeted attack. Under a targeted attack, the loss function of the deep learning model can be expressed as Loss(x_adv, y*), and the optimization objective of the loss function becomes maximizing the probability that the predicted class ŷ_adv of the adversarial example is identical to the attacker's desired output class y*. The optimization objective here is minimal perturbation and minimal loss, i.e. λ1 > 0, λ2 > 0.
In step 2.2), when the deep learning model only allows samples predicted as class y* to pass, the attacker will consciously raise the probability Pr(ŷ_adv = y*) that the adversarial example is predicted as y*, carrying out a targeted attack. The optimization task of the loss function becomes maximizing the probability that the predicted class ŷ_adv of the adversarial example is identical to the class y* that the target model allows to pass. The optimization objective here is maximal perturbation and minimal loss, i.e. λ1 < 0, λ2 > 0.
The parameter optimization procedure of the ρ-loss model in step 3) is as follows:
3.1) When λ1 = 1, λ2 = 0 and p = 2, the ρ-loss model describes exactly the first deception of machine vision tasks based on deep learning models by adversarial examples;
3.2) Convert the optimization task into searching, under a minimal c > 0, for the minimal perturbation ρ that satisfies the prediction expectation f_pre(I_c + ρ) = l; when λ1 = c > 0, λ2 = 1 and p = 1, the ρ-loss model describes the L-BFGS method, which searches for an approximate solution under box constraints;
3.3) Optimize the step-size search direction of the ρ-iter model:
3.3.1) When the number of iterations T = 1 and the gradient direction correction parameter κ_dir = 0, the search step becomes ε, and the ρ-iter model describes a one-step perturbation solution;
When f_ρ(·) is defined as the search direction represented by the sign function sign(·), the ρ-iter model describes the Fast Gradient Sign Method, obtained by exploiting the "linear" property of deep neural network models in high-dimensional space;
3.3.2) When f_ρ is defined as the gradient direction with unit length under the p'-norm, the ρ-iter model describes the Fast Gradient L_p' method; in particular, when p' = 0, by limiting the zero norm, the perturbation can be calculated at the pixel level, and the ρ-iter model then describes the Jacobian-based saliency map attack;
3.3.3) When T > 1 and κ_dir = 0, the ρ-iter model extends from the one-step method to a simple iterative method, with the search step of each iteration defined as α = ε/T; when T > 1 and the direction adjustment parameter κ_dir = μ·g_t denotes momentum-memory correction with decay rate μ, the ρ-iter model upgrades from the simple iterative method to the momentum iterative method;
3.3.4) When the number of perturbed pixels is one, the ρ-iter model is exactly the single-pixel attack.
In step 4), the generation method comprises the following steps:
4.1) The unified model generates attacks under a single model, and the attack effect of the basic attack model is evaluated; for a normal sample x_nor, different adversarial generation methods are applied, yielding adversarial examples x_adv^1, x_adv^2, …;
4.2) Based on the AdaBoost concept, the defense of multiple deep neural networks is carried out: the defense capability of the integrated model when facing adversarial examples is adjusted by assigning weights ω_i^j to the target models, where i = 1, 2, …, n indexes the individual models in the integrated model and j = 1, 2, …, g indexes the weight group, i.e. the number of integrated models with different weight combinations. Based on the AdaBoost method, the quality of attack methods is compared by evaluating the attack effect, under different attack modes, on integrated models composed with different weights;
4.3) First, the optimal weights ω_i* of the integrated model under the basic attack model are determined automatically, forming the integrated model TMs_baseline, combined from n target models, with the strongest defense capability against that attack; then, the adversarial examples generated by the unified model are applied to the TMs_baseline with the strongest defense capability, verifying the attack transfer capability of AG-GAN relative to the attacking ability of the basic attack model; in addition, the weights of the integrated model TMs_baseline adapt to changes in defense capability via the cross-entropy loss, the adaptive formula of the weights ω_i after normalization being:
ω_i ← ω_i · exp(−Loss_CE(Θ_i, x_adv, l_nor)) / Σ_{k=1}^{n} ω_k · exp(−Loss_CE(Θ_k, x_adv, l_nor))  (4)
In formula (4), the initial value of ω_i is set to 1/n, Loss_CE(Θ_i, ·) denotes the cross-entropy loss function of the i-th (i = 1, 2, …, n) model in the integrated model TMs_baseline, and Θ denotes the parameters of the calculated function. When the normal sample x_nor used to generate the adversarial example x_adv belongs to class l_nor, if the prediction result after the target model TM_i equals l_nor, the model's defense capability against this attack is strong, and the corresponding influence weight ω_i of that target model increases accordingly; if the prediction result for x_adv is not l_nor, the model failed to defend against this attack, and the corresponding model influence weight decreases.
In the present invention, starting from the essence of adversarial example generation, a parameterized adversarial example generation method is designed, and the adversarial examples it generates are taken as the object of defense; this patent then proposes a multi-model composite defense method which, through cooperative discrimination among multiple deep learning models, realizes the optimal weight allocation of the deep learning models based on evolutionary computation, thereby improving the multi-model defense capability against adversarial attacks. Finally, this patent applies the method to multiple fields of image recognition and verifies its effectiveness.
The beneficial effects of the present invention are mainly: higher security, and effective defense of deep learning models against adversarial attacks.
Description of the drawings
Fig. 1 is the schematic block diagram of the ρ-loss model.
Fig. 2 is the performance analysis chart of the attack model.
Specific implementation mode
The invention will be further described below in conjunction with the accompanying drawings.
Referring to Fig. 1, a multi-model composite defense method against adversarial attacks for deep learning comprises the following steps:
1) Perform unified modeling of gradient-based attacks and propose the ρ-loss model, analyzing in greater depth the principle of gradient-based adversarial attacks; the process is as follows:
1.1) Unify all gradient-based adversarial example generation approaches into the optimization model ρ-loss, defined as:
arg min λ1·||ρ||_p + λ2·Loss(x_adv, f_pre(x_adv))  s.t.  ρ = x_nor − x_adv  (1)
In formula (1), ρ denotes the perturbation between the adversarial example x_adv and the normal sample x_nor; f_pre(·) denotes the prediction output of the deep learning model; ||·||_p denotes the p-norm of the perturbation; Loss(·) denotes the loss function; λ1 and λ2 are scale parameters used to balance the orders of magnitude of the perturbation norm and the loss function, with value range [10^-1, 10]; their signs are switched according to the optimization objective;
1.2) Analyze whether the final effect of the generated adversarial example is effective, i.e. whether the perturbation ρ between it and the normal sample makes the deep learning model realize the attacker's desired output f_pre(x_adv). The schematic block diagram of the ρ-loss model is shown in Fig. 1;
1.3) For solving the adversarial example x_adv, the unified perturbation solving model ρ-iter can be expressed as:
x_adv^(t+1) = x_adv^(t) + (ε/T)·f_ρ(κ_dir, ∇_x Loss(θ, x_adv^(t), ŷ_adv)),  t = 0, 1, …, T−1  (2)
In formula (2), ε denotes the feasible interval of the perturbation search; T denotes the number of iterations of the perturbation calculation; f_ρ(·,·) denotes the search direction of the perturbation; θ denotes the parameters trained by the deep learning model on the input samples; ŷ_adv denotes the predicted class of the adversarial example x_adv; Loss(·,·) denotes the loss function of the neural network model; ∇_x Loss(θ, x_adv, ŷ_adv) denotes the gradient direction of the loss function with respect to x; κ_dir denotes the correction factor of the gradient direction, used to achieve a higher-performance search.
For convenience of subsequent explanation, this patent defines the loss function of the deep-learning-based target model f_pre(·) as the cross entropy, which measures how close the predicted class ŷ_nor of a sample x_nor under the target model is to the sample's true class y_nor:
Loss(x_nor) = −Σ_k y_nor,k · log f_pre(x_nor)_k  (3)
It can be seen that the closer the predicted class is to the true class, the closer the value of the loss function is to zero.
For the deep-learning-based target model f_pre(x_nor): its function is to predict the output class ŷ_nor from the input sample x_nor, and to make the predicted output class ŷ_nor identical to the true class y_nor of the input sample, i.e. ŷ_nor = y_nor.
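The property that the loss approaches zero as the prediction approaches the true class can be checked with a minimal sketch (ours, not from the patent; with a one-hot true label, the cross entropy of formula (3) reduces to the negative log-probability of the true class):

```python
import numpy as np

def cross_entropy(pred_probs, true_class):
    # Cross entropy of formula (3) for a single sample with a one-hot true label:
    # only the true-class term survives, giving -log p_true.
    return -np.log(pred_probs[true_class])
```

A confident, correct prediction such as [0.99, 0.01] gives a loss near zero, while an uncertain one such as [0.6, 0.4] gives a larger loss.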
2) Following the design of the unified model, for adversarial attacks on the target model f_pre(x), the basic expression forms of the attack can be divided into two classes according to the results of adversarial example generation;
2.1) First class I_att: search the solution space R^m for an adversarial example x_adv ∈ X^m similar to the normal sample x_nor; inputting it into the target model yields the prediction result ŷ_adv, realizing ŷ_adv ≠ ŷ_nor; at the same time define the probability function Pr(·);
The optimization objective on the prediction result can be expressed as max Pr(ŷ_adv ≠ ŷ_nor), i.e. maximizing the probability that the predicted class of the adversarial example differs from that of the normal sample. The optimization objective here is minimal perturbation and maximal loss, i.e. λ1 > 0, λ2 < 0, realizing ŷ_adv ≠ ŷ_nor. The output class ŷ_adv of the adversarial example has no specific directionality, so this is called an untargeted attack;
Further, for the attacker, if the desired output is y* and ŷ_adv = y* is realized, this is a targeted attack. Under a targeted attack, the loss function of the deep learning model can be expressed as Loss(x_adv, y*). The optimization objective of the loss function becomes maximizing the probability that the predicted class ŷ_adv of the adversarial example is identical to the attacker's desired output class y*. The optimization objective here is minimal perturbation and minimal loss, i.e. λ1 > 0, λ2 > 0;
2.2) Second class II_att: search the solution space R^m for an adversarial example x_adv totally different from the normal sample x_nor; inputting it into the target model yields the prediction result ŷ_adv, realizing a clear prediction result with high confidence.
For an untargeted attack, a clear prediction result with high confidence is interpreted as the probability that x_adv is predicted as class ŷ_adv being far larger than the probability of being predicted as any other class; the mathematical expression is Pr(ŷ_adv) >> Pr(y ≠ ŷ_adv).
Further, when the deep learning model only allows samples predicted as class y* to pass, the attacker will consciously raise the probability Pr(ŷ_adv = y*) that the adversarial example is predicted as y*, carrying out a targeted attack. The optimization task of the loss function becomes maximizing the probability that the predicted class ŷ_adv of the adversarial example is identical to the class y* that the target model allows to pass. The optimization objective here is maximal perturbation and minimal loss, i.e. λ1 < 0, λ2 > 0;
3) Analyze the parametric evolution of the model, showing that current attack methods are special cases of this model when its parameters take particular values. The proposed unified model can therefore not only generate the attack samples of existing methods, but also obtain diverse candidate attack samples when the parameters take different values.
Current research mainly concerns the parameter optimization of the adversarial example generation model ρ-loss and the search step-size optimization of the perturbation solving model.
Parameter optimization of the ρ-loss model:
3.1) When λ1 = 1, λ2 = 0 and p = 2, the ρ-loss model describes exactly the first deception of machine vision tasks based on deep learning models by adversarial examples.
3.2) Convert the optimization task into searching, under a minimal c > 0, for the minimal perturbation ρ that satisfies the prediction expectation f_pre(I_c + ρ) = l; when λ1 = c > 0, λ2 = 1 and p = 1, the ρ-loss model describes the L-BFGS method, which searches for an approximate solution under box constraints.
3.3) Optimize the step-size search direction of the ρ-iter model:
3.3.1) When the number of iterations T = 1 and the gradient direction correction parameter κ_dir = 0, the search step becomes ε, and the ρ-iter model describes a one-step perturbation solution;
Further, when f_ρ(·) is defined as the search direction represented by the sign function sign(·), the ρ-iter model describes the Fast Gradient Sign Method (FGSM), obtained by exploiting the "linear" property of deep neural network models in high-dimensional space.
3.3.2) When f_ρ is defined as the gradient direction with unit length under the p'-norm, the ρ-iter model describes the Fast Gradient L_p' Method (FGM); in particular, when p' = 0, by limiting the zero norm, the perturbation can be calculated at the pixel level, and the ρ-iter model then describes the Jacobian-based Saliency Map Attack (JSMA);
3.3.3) When T > 1 and κ_dir = 0, the ρ-iter model extends from the one-step method to a simple iterative method, with the search step of each iteration defined as α = ε/T; when T > 1 and the direction adjustment parameter κ_dir = μ·g_t denotes momentum-memory correction with decay rate μ, the ρ-iter model upgrades from the simple iterative method to the momentum iterative method (MI-FGSM).
3.3.4) When the number of perturbed pixels is one, the ρ-iter model is exactly the single-pixel attack;
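The special cases in steps 3.3.1) and 3.3.2) differ only in how the search direction is normalized; a minimal sketch of that direction function (our own illustration; the name f_rho and the p = ∞ convention for the sign direction are assumptions):

```python
import numpy as np

def f_rho(grad, p=np.inf):
    # Search direction of the rho-iter model:
    # p = inf  -> sign(grad), the FGSM direction (step 3.3.1);
    # finite p -> gradient rescaled to unit length under the p-norm (FGM, step 3.3.2).
    grad = np.asarray(grad, dtype=float)
    if np.isinf(p):
        return np.sign(grad)
    n = np.linalg.norm(grad, ord=p)
    return grad / n if n > 0 else grad
```

Either direction is then scaled by the step size α = ε/T inside the iteration of formula (2).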
4) To cope with the opacity of black-box attacks, an experiment is designed based on the AdaBoost concept: multiple substitute models of different types that realize the same task are integrated, and a generator is trained to attack the integrated model with strong defense performance, achieving the purpose of a black-box attack. A multi-model composite defense method with stronger defense capability is designed, and a multi-model cooperative attack-detection scheme with optimally allocated weights is proposed, ensuring strong defense capability against adversarial attacks. The performance analysis of the attack model is shown in Fig. 2 and can be divided into three parts. The generation method comprises the following steps:
4.1) The unified model generates attacks under a single model, and the attack effect of the basic attack model is evaluated. For a normal sample x_nor, different adversarial generation methods are applied, yielding adversarial examples x_adv^1, x_adv^2, ….
4.2) Based on the AdaBoost concept, the defense of multiple deep neural networks is carried out: the defense capability of the integrated model when facing adversarial examples is adjusted by assigning weights ω_i^j to the target models, where i = 1, 2, …, n indexes the individual models in the integrated model and j = 1, 2, …, g indexes the weight group, i.e. the number of integrated models with different weight combinations. Based on the AdaBoost method, the quality of attack methods is compared by evaluating the attack effect, under different attack modes, on integrated models composed with different weights.
4.3) First, the optimal weights ω_i* of the integrated model under the basic attack model are determined automatically, forming the integrated model TMs_baseline, combined from n target models, with the strongest defense capability against that attack. Then, the adversarial examples generated by the unified model are applied to the TMs_baseline with the strongest defense capability, verifying the attack transfer capability of AG-GAN relative to the attacking ability of the basic attack model. In addition, the weights of the integrated model TMs_baseline adapt to changes in defense capability via the cross-entropy loss; the adaptive formula of the weights ω_i after normalization is:
ω_i ← ω_i · exp(−Loss_CE(Θ_i, x_adv, l_nor)) / Σ_{k=1}^{n} ω_k · exp(−Loss_CE(Θ_k, x_adv, l_nor))  (4)
In formula (4), the initial value of ω_i is set to 1/n, Loss_CE(Θ_i, ·) denotes the cross-entropy loss function of the i-th (i = 1, 2, …, n) model in the integrated model TMs_baseline, and Θ denotes the parameters of the calculated function. When the normal sample x_nor used to generate the adversarial example x_adv belongs to class l_nor, if the prediction result after the target model TM_i equals l_nor, the model's defense capability against this attack is strong, and the corresponding influence weight ω_i of that target model increases accordingly; if the prediction result for x_adv is not l_nor, the model failed to defend against this attack, and the corresponding model influence weight decreases.
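A minimal sketch of the weight adaptation of step 4.3) (the exponential form of the update and all names here are our illustrative assumptions; the qualitative behavior matches the text: models that keep predicting the original class gain weight, fooled models lose weight):

```python
import numpy as np

def update_weights(weights, ce_losses):
    # Adapt ensemble weights from per-model cross-entropy losses on an
    # adversarial example: low loss (model still predicts the original class
    # l_nor with confidence) increases the weight; high loss decreases it.
    w = np.asarray(weights) * np.exp(-np.asarray(ce_losses))
    return w / w.sum()  # normalize so the weights sum to one

def ensemble_predict(weights, model_probs):
    # Weighted combination of the integrated model's per-model probabilities.
    combined = sum(w * p for w, p in zip(weights, np.asarray(model_probs, dtype=float)))
    return int(np.argmax(combined))
```

Starting from the uniform initialization ω_i = 1/n, repeated updates shift influence toward the models that defend successfully against the current batch of adversarial examples.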
Claims (6)
1. a kind of multi-model composite defense method for fighting sexual assault towards deep learning, it is characterised in that:The method includes
Following steps:
1) unified Modeling is carried out to the attack based on gradient and proposes ρ-loss models, process is as follows:
1.1) optimization ρ-loss models are unified into resisting sample generating mode based on gradient by all, be defined as follows:
argmin λ1||ρ||p+λ2Loss(xadv,fpre(xadv)) s.t. ρ=xnor-xadv (1)
In formula (1), ρ is indicated to resisting sample xadvWith normal sample xnorBetween existing disturbance;fpre() indicates deep learning model
Prediction output;||·||pIndicate the p norms of disturbance;Loss () indicates loss function;λ1And λ2It is scale parameter, uses
In the order of magnitude of balance disturbance norm and loss function, value range is [10-1, 10], and positive and negative change is carried out according to optimization aim
It changes;
1.2) analysis generate it is whether effective to the final effect of resisting sample, by itself between normal sample existing disturbance ρ, make
The desired output f of deep learning model realization attackerpre(xadv);
1.3) it is directed to resisting sample xadvSolution, disturbance solving model ρ-iter that will be unitized are expressed as:
In formula (2), ε indicates the feasible solution section of disturbance search;T indicates iterations when disturbance calculates;fρ() indicates
The direction of search of disturbance;θ indicates the parameter that deep learning model is trained about input sample;It indicates to resisting sample
xadvPrediction category;Loss (,) indicates the loss function of neural network model;
Indicate loss function about after x derivationsThe gradient direction at place;κdirThe correction factor for indicating gradient direction, to reach higher
The search effect of performance;
2) According to the unified-model design, adversarial attacks on the target model f_pre(x) are divided into two basic classes according to the adversarial-example generation result:
2.1) First class I_att: search the solution space R^m for an adversarial example x_adv ∈ X^m that is similar to the normal sample x_nor; feeding it to the target model yields a prediction ŷ_adv that differs from the prediction for the normal sample. Defining the probability function Pr(·), the optimization objective of the prediction result is to maximize the probability that the predicted label of the adversarial example differs from that of the normal sample; the objective is therefore a minimal perturbation together with a maximal loss, i.e. λ1 > 0, λ2 < 0. Since the output label ŷ_adv of the adversarial example has no specific directivity, this is called an untargeted attack;
2.2) Second class II_att: search the solution space R^m for an adversarial example that is entirely different from the normal sample x_nor; the prediction obtained after inputting it to the target model is a definite result with high confidence. Relative to the untargeted attack, the definite high-confidence prediction means that the probability of x_adv being predicted as the attacker's chosen class is far greater than the probability of it being predicted as any other class;
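The two attack classes above differ only in the signs of the coefficients λ1 and λ2 in a combined objective of the form λ1·||ρ||_p + λ2·Loss. A hypothetical sketch, with all names invented for illustration (the exact objective in the patent's formulas is not reproduced here):

```python
import numpy as np

def attack_objective(rho, loss_val, lam1, lam2, p=2):
    """Combined objective lam1 * ||rho||_p + lam2 * Loss, as described for
    the two attack classes: untargeted attacks use lam1 > 0, lam2 < 0
    (minimal perturbation, maximal loss on the true label); targeted
    attacks use lam1 > 0, lam2 > 0 (minimal perturbation, minimal loss
    on the attacker's desired label)."""
    return lam1 * np.linalg.norm(rho, ord=p) + lam2 * loss_val

rho = np.array([0.1, -0.1])
# Untargeted (lam2 < 0): a larger loss on the true label lowers the objective.
unt_obj_big = attack_objective(rho, loss_val=2.0, lam1=1.0, lam2=-1.0)
unt_obj_small = attack_objective(rho, loss_val=0.5, lam1=1.0, lam2=-1.0)
# Targeted (lam2 > 0): a smaller loss on the desired label lowers the objective.
tgt_obj_big = attack_objective(rho, loss_val=2.0, lam1=1.0, lam2=1.0)
tgt_obj_small = attack_objective(rho, loss_val=0.5, lam1=1.0, lam2=1.0)
```

The sign flips alone turn the same minimization problem from "push the prediction anywhere else" into "pull the prediction toward one chosen class".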
3) Analyze the parameter evolution of the models: optimize the parameters of the adversarial-example generation model (the ρ-loss model) and the search step size of the perturbation-solving model (the ρ-iter model);
4) To cope with the opacity of black-box attacks, design experiments based on the adaboost concept: integrate multiple alternative models of different types that carry out the same task, and, by training a generator to attack this integrated model of strong defensive performance, design a multi-model composite defense method with stronger defense capability, proposing a multi-model cooperative attack-detection scheme with optimal weight allocation.
2. The multi-model composite defense method against deep-learning adversarial attacks according to claim 1, characterized in that: in step 1), the loss function of the deep-learning-based target model f_pre(·) is defined as the cross entropy, which measures how close the predicted label ŷ_nor of a sample x_nor under the target model is to the sample's own true label y_nor. The function of the deep-learning-based target model f_pre(x_nor) is to predict the output label ŷ_nor from the input sample x_nor, such that the predicted output label equals the true label of the input sample, i.e. ŷ_nor = y_nor.
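The cross-entropy measure named in claim 2 can be written out directly. A minimal sketch, in which the function name and the use of a class index for the true label are illustrative assumptions:

```python
import numpy as np

def cross_entropy(probs, true_label):
    """Cross entropy between the model's predicted distribution `probs`
    and the one-hot true label: -log p(true class). It shrinks toward 0
    as the prediction approaches the true label y_nor."""
    return -np.log(probs[true_label])

# Prediction concentrated on the true label -> small loss.
close = cross_entropy(np.array([0.9, 0.05, 0.05]), true_label=0)
# Prediction spread away from the true label -> large loss.
far = cross_entropy(np.array([0.1, 0.45, 0.45]), true_label=0)
```

This is the quantity the ρ-loss and ρ-iter models push in one direction or the other, depending on whether the attack is targeted or untargeted.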
3. The multi-model composite defense method against deep-learning adversarial attacks according to claim 1 or 2, characterized in that: in step 2.1), if the attacker has a specific desired output label and achieves a prediction equal to it, the attack is a targeted attack. Under a targeted attack, the optimization objective of the loss function of the deep learning model becomes maximizing the probability that the predicted label of the adversarial example is identical to the attacker's desired output label; the objective is then a minimal perturbation together with a minimal loss, i.e. λ1 > 0, λ2 > 0.
4. The multi-model composite defense method against deep-learning adversarial attacks according to claim 1 or 2, characterized in that: in step 2.2), when the deep learning model only allows samples predicted as a given class to pass, the attacker deliberately raises the probability that the adversarial example is predicted as that class, thereby carrying out a targeted attack. The optimization task of the loss function becomes maximizing the probability that the predicted label of the adversarial example is identical to the label with which the target model allows samples to pass; the objective is then a maximal perturbation together with a minimal loss, i.e. λ1 < 0, λ2 > 0.
5. The multi-model composite defense method against deep-learning adversarial attacks according to claim 1 or 2, characterized in that: in step 3), the parameter optimization of the ρ-loss model proceeds as follows:
1) When λ1 = 1, λ2 = 0 and p = 2, the ρ-loss model describes exactly the first deception of deep-learning-based machine vision tasks by adversarial examples;
2) Converting the optimization task into searching for the smallest c > 0 under which the minimal perturbation ρ satisfies the prediction expectation f_pre(I_c + ρ) = l; when λ1 = c > 0, λ2 = 1 and p = 1, the ρ-loss model describes the L-BFGS method, which performs approximate solution by a box-constrained search;
3) The step-size search direction of the ρ-iter model is optimized:
3.1) When the number of iterations T = 1 and the gradient-direction correction parameter κ_dir = 0, the search step becomes ε, and the ρ-iter model describes a perturbation solution based on the single-step method. When f_ρ(·) is defined as the search direction given by the sign function sign(·), the ρ-iter model describes the fast gradient sign method, obtained by exploiting the "linear" behavior of deep neural network models in high-dimensional space;
3.2) When f_ρ(·) is defined as the gradient direction of unit length under the p' norm, the ρ-iter model describes the fast gradient L_p' method; in particular, when p' = 0, restricting the zero norm allows the perturbation to be computed at the level of individual pixels, and the ρ-iter model then describes the Jacobian-based saliency map attack;
3.3) When T > 1 and κ_dir = 0, the ρ-iter model extends from the single-step method to the basic iterative method, with the per-iteration search step defined as α = ε/T; when T > 1 and the direction-adjustment parameter κ_dir = μ g_t denotes momentum-memory correction with decay rate μ, the ρ-iter model upgrades from the basic iterative method to the momentum iterative method;
3.4) When only one pixel is perturbed, the ρ-iter model is exactly the one-pixel attack.
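Step 3.3 above distinguishes the basic iterative method (κ_dir = 0, α = ε/T) from the momentum iterative variant (κ_dir = μ g_t). The following NumPy sketch of the momentum accumulation is a hypothetical illustration on a toy gradient function; the normalization, decay rate, and all names are assumptions, not the patent's formulas:

```python
import numpy as np

def momentum_iter(x, grad_fn, eps=0.2, T=10, mu=0.9):
    """Momentum iterative perturbation sketch: the accumulated gradient g_t
    with decay rate mu corrects the search direction (kappa_dir = mu * g_t);
    each step has size alpha = eps / T and the result is clipped to the
    eps-ball around the original input."""
    alpha = eps / T
    g_t = np.zeros_like(x, dtype=float)
    x_adv = x.astype(float).copy()
    for _ in range(T):
        grad = grad_fn(x_adv)
        # Momentum memory: decay the old direction, add the L1-normalized new gradient.
        g_t = mu * g_t + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g_t), x - eps, x + eps)
    return x_adv

# Toy loss 0.5 * ||x - target||^2 with input gradient (x - target).
target = np.array([1.0, -1.0])
x_adv = momentum_iter(np.zeros(2), lambda x: x - target)
```

Setting mu = 0 recovers the basic iterative method; the momentum term stabilizes the search direction across iterations.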
6. The multi-model composite defense method against deep-learning adversarial attacks according to claim 1 or 2, characterized in that: in step 4), the generation method comprises the following steps:
1) The unified model generates attacks under a single model and evaluates the attack effect of the basic attack models: for a normal sample x_nor, different adversarial-example generation methods are applied to obtain the corresponding adversarial examples;
2) Based on the adaboost concept, a defense with multiple deep neural networks is carried out: the defense capability of the integrated model against adversarial examples is adjusted through the weights ω_i^j assigned to the target models, where i = 1, 2, ..., n is the index of a single model within the integrated model and j = 1, 2, ..., g is the index of the weight group, i.e. of the integrated model with a particular weight combination. Following the adaboost method, the attack effects on the integrated models composed with different weight combinations are evaluated under different attack modes, and the quality of the attack methods is compared;
3) First, the optimal weights of the integrated model under the basic attack models are determined automatically, forming the integrated model TMs_baseline, composed of the n target models, with the strongest defense capability against those attacks. Then the adversarial examples generated by the unified model are applied to TMs_baseline, verifying the attack-transfer capability of AG-GAN relative to the basic attack models. In addition, the weights of the integrated model TMs_baseline adapt by using the cross entropy to capture changes in defense capability; the adaptive formula for the normalized weights ω_i is given by formula (4), in which the initial value of ω_i is set to 1/n, the cross-entropy loss function of the i-th (i = 1, 2, ..., n) model in TMs_baseline is used, and Θ denotes the parameters of the computed function. When an adversarial example x_adv is generated from a normal sample x_nor belonging to class l_nor: if the predicted label after passing through the target model TM_i equals l_nor, the model's defense capability against this attack is strong, and the corresponding influence weight ω_i of that target model increases accordingly; if the prediction for x_adv is not l_nor, the model has failed to defend against this attack, and the corresponding model's influence weight decreases.
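Formula (4) survives in the original only as an image. The weight update it describes, normalized, cross-entropy-driven, initialized at 1/n, raising the weight of models that still predict the true class, can be sketched roughly as follows; the multiplicative factors and function names are illustrative assumptions, not the patent's exact formula:

```python
import numpy as np

def update_weights(weights, defended):
    """Adaptive reweighting sketch for the ensemble TMs_baseline: models
    that still predict the true class l_nor for x_adv (defended=True) gain
    influence, models that are fooled lose it; the weights are then
    renormalized to sum to 1, as in the normalization of formula (4)."""
    w = np.asarray(weights, dtype=float)
    factor = np.where(defended, 1.5, 0.5)  # reward defenders, penalize fooled models
    w = w * factor
    return w / w.sum()                     # renormalize the influence weights

n = 3
w0 = np.full(n, 1.0 / n)                   # initial value omega_i = 1/n
# Suppose models 0 and 2 still predict l_nor, while model 1 is fooled.
w1 = update_weights(w0, defended=np.array([True, False, True]))
```

Repeated over many adversarial examples, this concentrates the ensemble's voting power on the members that resist the current attack, which is the adaboost-style behavior the claim describes.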
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810141253.9A CN108446765A (en) | 2018-02-11 | 2018-02-11 | Multi-model composite defense method against adversarial attacks on deep learning
Publications (1)
Publication Number | Publication Date |
---|---|
CN108446765A true CN108446765A (en) | 2018-08-24 |
Family
ID=63192376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810141253.9A Pending CN108446765A (en) | 2018-02-11 | 2018-02-11 | Multi-model composite defense method against adversarial attacks on deep learning
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108446765A (en) |
Cited By (94)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109214327A (en) * | 2018-08-29 | 2019-01-15 | 浙江工业大学 | A kind of anti-face identification method based on PSO |
CN109214327B (en) * | 2018-08-29 | 2021-08-03 | 浙江工业大学 | Anti-face recognition method based on PSO |
CN109117482A (en) * | 2018-09-17 | 2019-01-01 | 武汉大学 | A kind of confrontation sample generating method towards the detection of Chinese text emotion tendency |
CN109117482B (en) * | 2018-09-17 | 2021-07-06 | 武汉大学 | Confrontation sample generation method for Chinese text emotion orientation detection |
CN109460814B (en) * | 2018-09-28 | 2020-11-03 | 浙江工业大学 | Deep learning classification method with function of defending against sample attack |
CN109460814A (en) * | 2018-09-28 | 2019-03-12 | 浙江工业大学 | A kind of deep learning classification method for attacking resisting sample function with defence |
CN109492355A (en) * | 2018-11-07 | 2019-03-19 | 中国科学院信息工程研究所 | A kind of software analysis resistant method and system based on deep learning |
CN109492355B (en) * | 2018-11-07 | 2021-09-07 | 中国科学院信息工程研究所 | Software anti-analysis method and system based on deep learning |
CN109543760A (en) * | 2018-11-28 | 2019-03-29 | 上海交通大学 | Confrontation sample testing method based on image filters algorithm |
CN109543760B (en) * | 2018-11-28 | 2021-10-19 | 上海交通大学 | Confrontation sample detection method based on image filter algorithm |
CN109581871A (en) * | 2018-12-03 | 2019-04-05 | 北京工业大学 | The immune industrial control system intrusion detection method to resisting sample |
CN109581871B (en) * | 2018-12-03 | 2022-01-21 | 北京工业大学 | Industrial control system intrusion detection method of immune countermeasure sample |
CN109599109A (en) * | 2018-12-26 | 2019-04-09 | 浙江大学 | For the confrontation audio generation method and system of whitepack scene |
CN109599109B (en) * | 2018-12-26 | 2022-03-25 | 浙江大学 | Confrontation audio generation method and system for white-box scene |
CN110020593A (en) * | 2019-02-03 | 2019-07-16 | 清华大学 | Information processing method and device, medium and calculating equipment |
CN110020593B (en) * | 2019-02-03 | 2021-04-13 | 清华大学 | Information processing method and device, medium and computing equipment |
CN111582295B (en) * | 2019-02-15 | 2023-09-08 | 百度(美国)有限责任公司 | System and method for joint resistance training by combining both spatial and pixel attacks |
CN111582295A (en) * | 2019-02-15 | 2020-08-25 | 百度(美国)有限责任公司 | System and method for joint antagonism training by combining both spatial and pixel attacks |
CN109948658B (en) * | 2019-02-25 | 2021-06-15 | 浙江工业大学 | Feature diagram attention mechanism-oriented anti-attack defense method and application |
CN109948658A (en) * | 2019-02-25 | 2019-06-28 | 浙江工业大学 | The confrontation attack defense method of Feature Oriented figure attention mechanism and application |
CN109948663A (en) * | 2019-02-27 | 2019-06-28 | 天津大学 | A kind of confrontation attack method of the adaptive step based on model extraction |
CN109948663B (en) * | 2019-02-27 | 2022-03-15 | 天津大学 | Step-length self-adaptive attack resisting method based on model extraction |
CN109886248A (en) * | 2019-03-08 | 2019-06-14 | 南方科技大学 | Image generation method and device, storage medium and electronic equipment |
CN110070115A (en) * | 2019-04-04 | 2019-07-30 | 广州大学 | A kind of single pixel attack sample generating method, device, equipment and storage medium |
CN110276377A (en) * | 2019-05-17 | 2019-09-24 | 杭州电子科技大学 | A kind of confrontation sample generating method based on Bayes's optimization |
CN110276377B (en) * | 2019-05-17 | 2021-04-06 | 杭州电子科技大学 | Confrontation sample generation method based on Bayesian optimization |
CN110175611A (en) * | 2019-05-24 | 2019-08-27 | 浙江工业大学 | Defence method and device towards Vehicle License Plate Recognition System black box physical attacks model |
CN110163163A (en) * | 2019-05-24 | 2019-08-23 | 浙江工业大学 | A kind of defence method and defence installation for the limited attack of individual face inquiry times |
CN110163163B (en) * | 2019-05-24 | 2020-12-01 | 浙江工业大学 | Defense method and defense device for single face query frequency limited attack |
CN112016377A (en) * | 2019-05-30 | 2020-12-01 | 百度(美国)有限责任公司 | System and method for resistively robust object detection |
CN112016377B (en) * | 2019-05-30 | 2023-11-24 | 百度(美国)有限责任公司 | System and method for robust object detection |
CN110264505B (en) * | 2019-06-05 | 2021-07-30 | 北京达佳互联信息技术有限公司 | Monocular depth estimation method and device, electronic equipment and storage medium |
CN110264505A (en) * | 2019-06-05 | 2019-09-20 | 北京达佳互联信息技术有限公司 | A kind of monocular depth estimation method, device, electronic equipment and storage medium |
CN110378389A (en) * | 2019-06-24 | 2019-10-25 | 苏州浪潮智能科技有限公司 | A kind of Adaboost classifier calculated machine creating device |
CN110633570A (en) * | 2019-07-24 | 2019-12-31 | 浙江工业大学 | Black box attack defense method for malicious software assembly format detection model |
CN110633570B (en) * | 2019-07-24 | 2021-05-11 | 浙江工业大学 | Black box attack defense method for malicious software assembly format detection model |
CN110619292A (en) * | 2019-08-31 | 2019-12-27 | 浙江工业大学 | Countermeasure defense method based on binary particle swarm channel optimization |
CN110619292B (en) * | 2019-08-31 | 2021-05-11 | 浙江工业大学 | Countermeasure defense method based on binary particle swarm channel optimization |
CN110728297B (en) * | 2019-09-04 | 2021-08-06 | 电子科技大学 | Low-cost antagonistic network attack sample generation method based on GAN |
CN110728297A (en) * | 2019-09-04 | 2020-01-24 | 电子科技大学 | Low-cost antagonistic network attack sample generation method based on GAN |
CN110768971A (en) * | 2019-10-16 | 2020-02-07 | 伍军 | Confrontation sample rapid early warning method and system suitable for artificial intelligence system |
CN110751291A (en) * | 2019-10-29 | 2020-02-04 | 支付宝(杭州)信息技术有限公司 | Method and device for realizing multi-party combined training neural network of security defense |
CN110941794B (en) * | 2019-11-27 | 2023-08-22 | 浙江工业大学 | Challenge attack defense method based on general inverse disturbance defense matrix |
CN110941794A (en) * | 2019-11-27 | 2020-03-31 | 浙江工业大学 | Anti-attack defense method based on universal inverse disturbance defense matrix |
CN110968866A (en) * | 2019-11-27 | 2020-04-07 | 浙江工业大学 | Defense method for resisting attack for deep reinforcement learning model |
CN110889117A (en) * | 2019-11-28 | 2020-03-17 | 支付宝(杭州)信息技术有限公司 | Method and device for defending model attack |
CN111104982A (en) * | 2019-12-20 | 2020-05-05 | 电子科技大学 | Label-independent cross-task confrontation sample generation method |
CN111104982B (en) * | 2019-12-20 | 2021-09-24 | 电子科技大学 | Label-independent cross-task confrontation sample generation method |
CN111310802B (en) * | 2020-01-20 | 2021-09-17 | 星汉智能科技股份有限公司 | Anti-attack defense training method based on generation of anti-network |
CN111310802A (en) * | 2020-01-20 | 2020-06-19 | 星汉智能科技股份有限公司 | Anti-attack defense training method based on generation of anti-network |
CN111340180A (en) * | 2020-02-10 | 2020-06-26 | 中国人民解放军国防科技大学 | Countermeasure sample generation method and device for designated label, electronic equipment and medium |
CN111310836A (en) * | 2020-02-20 | 2020-06-19 | 浙江工业大学 | Method and device for defending voiceprint recognition integrated model based on spectrogram |
CN111310836B (en) * | 2020-02-20 | 2023-08-18 | 浙江工业大学 | Voiceprint recognition integrated model defending method and defending device based on spectrogram |
US11921819B2 (en) | 2020-02-25 | 2024-03-05 | Zhejiang University Of Technology | Defense method and an application against adversarial examples based on feature remapping |
CN111401407B (en) * | 2020-02-25 | 2021-05-14 | 浙江工业大学 | Countermeasure sample defense method based on feature remapping and application |
WO2021169157A1 (en) * | 2020-02-25 | 2021-09-02 | 浙江工业大学 | Feature remapping-based adversarial sample defense method and application |
CN111401407A (en) * | 2020-02-25 | 2020-07-10 | 浙江工业大学 | Countermeasure sample defense method based on feature remapping and application |
CN111291828B (en) * | 2020-03-03 | 2023-10-27 | 广州大学 | HRRP (high-resolution redundancy protocol) anti-sample black box attack method based on deep learning |
CN111291828A (en) * | 2020-03-03 | 2020-06-16 | 广州大学 | HRRP (high resolution ratio) counterattack method for sample black box based on deep learning |
CN111600835A (en) * | 2020-03-18 | 2020-08-28 | 宁波送变电建设有限公司永耀科技分公司 | Detection and defense method based on FGSM (FGSM) counterattack algorithm |
CN111600835B (en) * | 2020-03-18 | 2022-06-24 | 宁波送变电建设有限公司永耀科技分公司 | Detection and defense method based on FGSM (FGSM) counterattack algorithm |
CN115063790A (en) * | 2020-05-11 | 2022-09-16 | 北京航空航天大学 | Anti-attack method and device based on three-dimensional dynamic interaction scene |
CN111860832A (en) * | 2020-07-01 | 2020-10-30 | 广州大学 | Method for enhancing neural network defense capacity based on federal learning |
CN113761249A (en) * | 2020-08-03 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Method and device for determining picture type |
CN112162515A (en) * | 2020-10-10 | 2021-01-01 | 浙江大学 | Anti-attack method for process monitoring system |
CN112270700B (en) * | 2020-10-30 | 2022-06-28 | 浙江大学 | Attack judgment method capable of interpreting algorithm by using deep neural network |
CN112270700A (en) * | 2020-10-30 | 2021-01-26 | 浙江大学 | Attack judgment method capable of interpreting algorithm by fooling deep neural network |
CN112766430A (en) * | 2021-01-08 | 2021-05-07 | 广州紫为云科技有限公司 | Method, device and storage medium for resisting attack based on black box universal face detection |
CN112766430B (en) * | 2021-01-08 | 2022-01-28 | 广州紫为云科技有限公司 | Method, device and storage medium for resisting attack based on black box universal face detection |
CN112907552B (en) * | 2021-03-09 | 2024-03-01 | 百度在线网络技术(北京)有限公司 | Robustness detection method, device and program product for image processing model |
CN112907552A (en) * | 2021-03-09 | 2021-06-04 | 百度在线网络技术(北京)有限公司 | Robustness detection method, device and program product for image processing model |
CN112989361A (en) * | 2021-04-14 | 2021-06-18 | 华南理工大学 | Model security detection method based on generation countermeasure network |
CN112989361B (en) * | 2021-04-14 | 2023-10-20 | 华南理工大学 | Model security detection method based on generation countermeasure network |
CN112819109A (en) * | 2021-04-19 | 2021-05-18 | 中国工程物理研究院计算机应用研究所 | Video classification system security enhancement method aiming at black box resisting sample attack |
CN112819109B (en) * | 2021-04-19 | 2021-06-18 | 中国工程物理研究院计算机应用研究所 | Video classification system security enhancement method aiming at black box resisting sample attack |
CN113452548B (en) * | 2021-05-08 | 2022-07-19 | 浙江工业大学 | Index evaluation method and system for network node classification and link prediction |
CN113452548A (en) * | 2021-05-08 | 2021-09-28 | 浙江工业大学 | Index evaluation method and system for network node classification and link prediction |
CN113254927B (en) * | 2021-05-28 | 2022-05-17 | 浙江工业大学 | Model processing method and device based on network defense and storage medium |
CN113254927A (en) * | 2021-05-28 | 2021-08-13 | 浙江工业大学 | Model processing method and device based on network defense and storage medium |
CN113688914A (en) * | 2021-08-27 | 2021-11-23 | 西安交通大学 | Practical relative sequence attack resisting method |
CN114627373B (en) * | 2022-02-25 | 2024-07-23 | 北京理工大学 | Method for generating countermeasure sample for remote sensing image target detection model |
CN114627373A (en) * | 2022-02-25 | 2022-06-14 | 北京理工大学 | Countermeasure sample generation method for remote sensing image target detection model |
CN114722812A (en) * | 2022-04-02 | 2022-07-08 | 尚蝉(浙江)科技有限公司 | Method and system for analyzing vulnerability of multi-mode deep learning model |
CN114724014A (en) * | 2022-06-06 | 2022-07-08 | 杭州海康威视数字技术股份有限公司 | Anti-sample attack detection method and device based on deep learning and electronic equipment |
CN115063654A (en) * | 2022-06-08 | 2022-09-16 | 厦门大学 | Black box attack method based on sequence element learning, storage medium and electronic equipment |
CN115062306A (en) * | 2022-06-28 | 2022-09-16 | 中国海洋大学 | Black box anti-attack method for malicious code detection system |
CN115271067B (en) * | 2022-08-25 | 2024-02-23 | 天津大学 | Android anti-sample attack method based on feature relation evaluation |
CN115271067A (en) * | 2022-08-25 | 2022-11-01 | 天津大学 | Android counterattack sample attack method based on characteristic relation evaluation |
CN115481719B (en) * | 2022-09-20 | 2023-09-15 | 宁波大学 | Method for defending against attack based on gradient |
CN115481719A (en) * | 2022-09-20 | 2022-12-16 | 宁波大学 | Method for defending gradient-based attack countermeasure |
CN116304959B (en) * | 2023-05-24 | 2023-08-15 | 山东省计算中心(国家超级计算济南中心) | Method and system for defending against sample attack for industrial control system |
CN116304959A (en) * | 2023-05-24 | 2023-06-23 | 山东省计算中心(国家超级计算济南中心) | Method and system for defending against sample attack for industrial control system |
CN116701910A (en) * | 2023-06-06 | 2023-09-05 | 山东省计算中心(国家超级计算济南中心) | Dual-feature selection-based countermeasure sample generation method and system |
CN116701910B (en) * | 2023-06-06 | 2024-01-05 | 山东省计算中心(国家超级计算济南中心) | Dual-feature selection-based countermeasure sample generation method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108446765A (en) | Multi-model composite defense method against adversarial attacks on deep learning | |
Yavuz et al. | Deep learning for detection of routing attacks in the internet of things | |
Ali et al. | Particle swarm optimization-based feature weighting for improving intelligent phishing website detection | |
CN110866287B (en) | Point attack method for generating countercheck sample based on weight spectrum | |
CN111325324A (en) | Deep learning confrontation sample generation method based on second-order method | |
CN108615048A (en) | Defense method against adversarial attacks on image classifiers based on perturbation evolution | |
CN111047006B (en) | Dual generation network-based anti-attack defense model and application | |
Fu et al. | The robust deep learning–based schemes for intrusion detection in internet of things environments | |
CN112217787B (en) | Method and system for generating mock domain name training data based on ED-GAN | |
CN114066912A (en) | Intelligent countermeasure sample generation method and system based on optimization algorithm and invariance | |
CN113591975A (en) | Countermeasure sample generation method and system based on Adam algorithm | |
Wang et al. | Defending dnn adversarial attacks with pruning and logits augmentation | |
CN112465015A (en) | Adaptive gradient integration adversity attack method oriented to generalized nonnegative matrix factorization algorithm | |
CN113269228B (en) | Method, device and system for training graph network classification model and electronic equipment | |
CN113947579B (en) | Confrontation sample detection method for image target detection neural network | |
Suzuki et al. | Adversarial example generation using evolutionary multi-objective optimization | |
Wu et al. | Genetic algorithm with multiple fitness functions for generating adversarial examples | |
CN110334508A (en) | A kind of host sequence intrusion detection method | |
CN113988312A (en) | Member reasoning privacy attack method and system facing machine learning model | |
CN114494771B (en) | Federal learning image classification method capable of defending back door attack | |
Chen et al. | DAmageNet: a universal adversarial dataset | |
CN115062306A (en) | Black box anti-attack method for malicious code detection system | |
CN113902974A (en) | Air combat threat target identification method based on convolutional neural network | |
CN116192537B (en) | APT attack report event extraction method, system and storage medium | |
CN113449865B (en) | Optimization method for enhancing training artificial intelligence model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180824 |