CN110084002A - Deep neural network attack method, device, medium and calculating equipment - Google Patents

Deep neural network attack method, device, medium and computing device

Info

Publication number
CN110084002A
CN110084002A
Authority
CN
China
Prior art keywords
sample
model
adversarial
original sample
adversarial example
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910329772.2A
Other languages
Chinese (zh)
Inventor
朱军 (Jun Zhu)
董胤蓬 (Yinpeng Dong)
苏航 (Hang Su)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201910329772.2A priority Critical patent/CN110084002A/en
Publication of CN110084002A publication Critical patent/CN110084002A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10 Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Technology Law (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a deep neural network attack method. The method comprises: establishing an attack model, wherein the attack model includes at least transformation information for an original sample and the weights of the loss functions of a recognition model on the pictures obtained by transforming the original sample according to the transformation information; and generating an adversarial example based on the original sample using the attack model. By generating an adversarial example for a group of pictures consisting of one real picture and its translated copies, the method of the invention significantly improves the transferability of the generated adversarial example, greatly reduces its sensitivity to the attacked model, and lowers the cost of generating it. In addition, embodiments of the present invention provide a deep neural network attack device, medium and computing device.

Description

Deep neural network attack method, device, medium and computing device
Technical field
Embodiments of the present invention relate to the field of deep learning, and more specifically to a deep neural network attack method, device, medium and computing device.
Background technique
This section is intended to provide a background or context for the embodiments of the present invention set forth in the claims. The description herein is not admitted to be prior art by inclusion in this section.
As one family of machine learning methods, deep neural networks have attracted wide attention in recent years owing to their remarkable results in numerous areas such as speech recognition, image classification and object detection. However, deep neural network models that achieve very high accuracy on many tasks are highly susceptible to attack in adversarial environments. In an adversarial environment, a deep neural network may receive adversarial examples maliciously constructed from normal samples, such as pictures or voice information. These adversarial examples are easily misclassified by the deep learning model, yet a human observer can hardly perceive any difference between an adversarial example and a normal sample. Because adversarial examples can measure the robustness of different deep-learning-based systems, generating them has become an important research topic. Meanwhile, adversarial examples can also serve as a form of data augmentation for training more robust neural networks.
Scenarios for generating adversarial examples fall broadly into two kinds: white-box attacks and black-box attacks. In a white-box attack, the attacker knows the structure and parameters of the target network and can construct adversarial examples with the single-step Fast Gradient Sign Method, multi-step iterative algorithms, or optimization-based algorithms. Since the constructed adversarial examples have a certain degree of transferability, they can also be used to attack black-box models of unknown structure and parameters, i.e. black-box attacks.
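As a concrete illustration of the single-step Fast Gradient Sign Method mentioned above, the following is a minimal NumPy sketch on a toy logistic-regression "network"; the model, its weights and the loss are hypothetical stand-ins chosen for illustration, not anything specified by the patent:

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-step FGSM: move eps along the sign of the loss gradient,
    the step that maximally increases the loss under an l_inf budget."""
    return x + eps * np.sign(grad)

# Toy "network": logistic regression with loss J = -log p(y=1|x).
w = np.array([0.5, -1.0, 0.25])
x = np.array([0.2, 0.1, -0.3])

p = 1.0 / (1.0 + np.exp(-w @ x))  # predicted probability of the true class 1
grad_x = (p - 1.0) * w            # gradient of -log p w.r.t. x for class 1

x_adv = fgsm(x, grad_x, eps=0.1)
# The perturbation never leaves the l_inf ball of radius eps.
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
```

After the step, the toy model's confidence in the true class drops, which is exactly the effect a white-box attacker wants.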
However, in practice it is very difficult to attack a black-box model, and models equipped with defensive measures are particularly hard to attack successfully. For example, ensemble adversarial training, which adds adversarial examples to the training process, improves the robustness of the trained deep neural network, and existing black-box attack methods hardly succeed against it. The root cause of this phenomenon is that the objective function used in the white-box stage of existing attack methods only considers the current picture, so the generated adversarial example is very sensitive to the attacked model and transfers poorly.
Summary of the invention
Thus, an improved method for generating adversarial examples is highly desirable, so that adversarial examples generated according to the method have stronger transferability.
In this context, embodiments of the present invention are intended to provide a deep neural network attack method, device, medium and computing device.
In a first aspect of embodiments of the present invention, a deep neural network attack method is provided, comprising:
establishing an attack model, wherein the attack model includes at least transformation information for an original sample and the weights of the loss functions of a recognition model on the pictures obtained by transforming the original sample according to the transformation information;
generating an adversarial example based on the original sample using the attack model.
In one embodiment of the invention, the adversarial example satisfies:
its distance from the original sample under the ℓ∞ norm is not greater than a preset threshold ε;
it is misclassified by the recognition model into a class other than that of the original sample.
In another embodiment of the invention, the attack model is constructed on the basis of any adversarial example generation model.
In yet another embodiment of the present invention, the adversarial example generation model may generate adversarial examples based on the gradient of the input original sample.
In yet another embodiment of the present invention, the attack model and the adversarial example generation model may maximize the loss function of the recognition model at the adversarial example, so that the adversarial example is misclassified by the recognition model into a class other than that of the original sample.
In yet another embodiment of the present invention, the adversarial example generation model satisfies x^adv = argmax_x J(x, y), s.t. ‖x^adv − x^ori‖∞ ≤ ε, where x^adv is the adversarial example, x^ori is the original sample, y is the class of the original sample, and J(x^adv, y) is the loss function of the recognition model at the adversarial example; the constraint ‖x^adv − x^ori‖∞ ≤ ε expresses that the distance between the adversarial example and the original sample under the ℓ∞ norm is not greater than the threshold ε.
In yet another embodiment of the present invention, the transformation information includes the distances by which the original sample is transformed in each direction.
In yet another embodiment of the present invention, the transformation is a translation.
In yet another embodiment of the present invention, the translatable range of the original sample is the same in every direction.
In yet another embodiment of the present invention, the attack model is built as x^adv = argmax_x Σ_{i,j} w_ij J(T_ij(x), y), s.t. ‖x^adv − x^ori‖∞ ≤ ε, where T_ij denotes translating the original sample by i pixels along the x axis and j pixels along the y axis, and w_ij denotes the weight of the loss function corresponding to the translated picture.
In yet another embodiment of the present invention, generating an adversarial example based on an original sample using the attack model comprises:
transforming the original sample into corresponding pictures according to the transformation information;
obtaining, based on the transformed pictures, the gradient information used to generate the adversarial example;
generating the adversarial example based on the gradient information used to generate the adversarial example and the original sample.
In yet another embodiment of the present invention, obtaining the gradient information used to generate the adversarial example based on the transformed pictures comprises:
obtaining the gradient information of all the pictures based on the transformed pictures;
weighting the gradient information of all the pictures to obtain the gradient information used to generate the adversarial example, wherein the weights sum to 1.
In yet another embodiment of the present invention, generating an adversarial example based on the original sample using the attack model comprises:
obtaining, based on the original sample and the transformation information and weights, the gradient information used to generate the adversarial example;
generating the adversarial example based on the gradient information used to generate the adversarial example and the original sample.
In yet another embodiment of the present invention, obtaining the gradient information used to generate the adversarial example based on the original sample and the transformation information and weights comprises computing Σ_{i,j} w_ij ∇_x J(T_ij(x), y) |_{x = x̂}, where x is the variable denoting the sample and x̂ is the current sample.
In yet another embodiment of the present invention, i, j ∈ {−k, …, 0, …, k}, where k is the maximum translatable distance in the positive direction along either axis.
In yet another embodiment of the present invention, Σ_{i,j} w_ij ∇_x J(T_ij(x), y) |_{x = x̂} is equivalent to W * ∇_x J(x, y) |_{x = x̂}, where W denotes a convolution kernel of size (2k+1) × (2k+1) with elements W_{i,j} = w_{−i,−j}; that is, a preset convolution kernel is convolved with the gradient of the picture x̂ to obtain the gradient information used to generate the adversarial example.
In yet another embodiment of the present invention, the convolution kernel is set based on the transformation information.
In yet another embodiment of the present invention, the kernel function of the convolution kernel is one of the uniform kernel, the linear kernel and the Gaussian kernel, where the values in the range of the uniform kernel function are all equal.
In yet another embodiment of the present invention:
when the kernel function is the uniform kernel, the elements of the convolution kernel are set to w_ij = 1 / (2k+1)²;
when the kernel function is the linear kernel, the elements are set to w̃_ij = (1 − |i| / (k+1)) · (1 − |j| / (k+1)), normalized as w_ij = w̃_ij / Σ_{i,j} w̃_ij;
when the kernel function is the Gaussian kernel, the elements are set to w̃_ij = exp(−(i² + j²) / (2σ²)), normalized in the same way.
In yet another embodiment of the present invention, the adversarial example generation model uses the Fast Gradient Sign Method (FGSM): x^adv = x^ori + ε · sign(∇_x J(x^ori, y)).
In yet another embodiment of the present invention, based on the gradient information used to generate the adversarial example and the original sample, the adversarial example is generated as: x^adv = x^ori + ε · sign(W * ∇_x J(x^ori, y)).
In a second aspect of embodiments of the present invention, a deep neural network attack device is provided, comprising:
a model building module configured to establish an attack model, wherein the attack model includes at least transformation information for an original sample and the weights of the loss functions of a recognition model on the pictures obtained by transforming the original sample according to the transformation information; and
an adversarial example generation module configured to generate an adversarial example based on the original sample using the attack model.
In a third aspect of embodiments of the present invention, a computer readable storage medium is provided, storing program code which, when executed by a processor, implements the method described in any embodiment of the first aspect.
In a fourth aspect of embodiments of the present invention, a computing device is provided, comprising a processor and a storage medium storing program code which, when executed by the processor, implements the method described in any embodiment of the first aspect.
According to the deep neural network attack method, device, medium and computing device of embodiments of the present invention, an adversarial example can be generated for a group of pictures consisting of one real picture and its translated copies, so that the transferability of the generated adversarial example is significantly improved, its sensitivity to the attacked model is greatly reduced, and the cost of generating it is lowered.
Detailed description of the invention
The above and other objects, features and advantages of exemplary embodiments of the present invention will become easy to understand by reading the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the present invention are shown by way of example and not limitation, in which:
Fig. 1 schematically shows a flow diagram of the deep neural network attack method in an embodiment of the present invention;
Fig. 2 schematically shows adversarial examples generated with the Fast Gradient Sign Method (FGSM) and the translation-invariant Fast Gradient Sign Method (TI-FGSM);
Fig. 3 schematically shows the influence of the choice of convolution kernel on the attack success rate;
Fig. 4 schematically shows the influence of the size of the convolution kernel on the attack success rate;
Fig. 5 schematically shows examples of adversarial examples generated with convolution kernels of different sizes;
Fig. 6 schematically shows the attack success rates of the Fast Gradient Sign Method (FGSM) and the translation-invariant Fast Gradient Sign Method (TI-FGSM) against multiple recognition models;
Fig. 7 schematically shows the attack success rates of the momentum-based Fast Gradient Sign Method (MI-FGSM) and the translation-invariant momentum-based Fast Gradient Sign Method (TI-MI-FGSM) against multiple recognition models;
Fig. 8 schematically shows the attack success rates of the Diverse Input Method (DIM) and the translation-invariant Diverse Input Method (TI-DIM) against multiple recognition models;
Fig. 9 schematically shows the attack success rates against ensembles of multiple models;
Fig. 10 schematically shows a module diagram of the deep neural network attack device according to an embodiment of the present invention;
Fig. 11 schematically shows a schematic diagram of a computer readable storage medium provided according to an embodiment of the present invention;
Fig. 12 schematically shows a schematic diagram of a computing device provided according to an embodiment of the present invention.
In the drawings, identical or corresponding reference numbers denote identical or corresponding parts.
Specific embodiment
The principle and spirit of the invention are described below with reference to several illustrative embodiments. It should be appreciated that these embodiments are provided merely so that those skilled in the art can better understand and thereby implement the present invention, and not to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the disclosure to those skilled in the art.
Those skilled in the art will appreciate that embodiments of the present invention can be implemented as a deep neural network attack system, device, equipment, method or computer program product. Therefore, the present disclosure may be embodied entirely in hardware, entirely in software (including firmware, resident software, microcode, etc.), or in a combination of hardware and software.
According to embodiments of the present invention, a deep neural network attack method, device, medium and computing device are proposed.
Herein, it is to be understood that the terms involved are as follows:
FGSM denotes the Fast Gradient Sign Method;
MI-FGSM denotes the momentum-based Fast Gradient Sign Method;
DIM denotes the Diverse Input Method;
Uniform denotes a convolution kernel whose kernel function is the uniform kernel function;
Linear denotes a convolution kernel whose kernel function is the linear kernel function;
Gaussian denotes a convolution kernel whose kernel function is the Gaussian kernel function.
In addition, any number of elements in the drawings is for example rather than limitation, and any naming is used only for distinction without any limiting meaning.
Below, with reference to several representative embodiments of the invention, the principle and spirit of the present invention are explained in detail.
Illustrative methods
The deep neural network attack method of exemplary embodiments of the present invention is described below with reference to Fig. 1. It should be noted that any application scenarios mentioned are shown only to facilitate understanding of the spirit and principles of the present invention, and embodiments of the present invention are not restricted in this regard; rather, embodiments of the present invention can be applied to any applicable scenario.
In the present embodiment, the deep neural network attack method comprises:
Step S110: establish an attack model, wherein the attack model includes at least transformation information for an original sample and the weights of the loss functions of a recognition model on the pictures obtained by transforming the original sample according to the transformation information.
In the present embodiment, an attack model is first established in this step. The attack model can generate an adversarial example for a group of pictures consisting of one real picture and its geometric transformations (geometric transformations include translation, flipping, rotation, etc.); that is, the attack model includes transformation information for the original sample (the transformation information includes the distances by which the original sample is transformed in each direction) and the weights of the loss functions of the recognition model on the pictures obtained by transforming the original sample according to the transformation information. Like an adversarial example generated according to the prior art, the adversarial example generated by the attack model in the present embodiment can still be told apart correctly by a human observer (to a human observer the difference is very small; see in Fig. 2 the original sample Raw Image and the adversarial examples generated by the Fast Gradient Sign Method FGSM and the translation-invariant Fast Gradient Sign Method TI-FGSM), while being misclassified by the recognition model. Namely, the adversarial example satisfies: its distance from the original sample under the ℓ∞ norm is not greater than a preset threshold ε, and it is misclassified by the recognition model into a class other than that of the original sample. The difference is that the adversarial example generated by the attack model in the present embodiment has stronger transferability and can be misclassified by more recognition models.
Like existing adversarial example generation models, the attack model established according to the present embodiment maximizes the loss function of the recognition model at the adversarial example, so that the adversarial example is misclassified by the recognition model into a class other than that of the original sample. An existing generation model satisfies x^adv = argmax_x J(x, y), s.t. ‖x^adv − x^ori‖∞ ≤ ε, where x^adv is the adversarial example, x^ori is the original sample, y is the class of the original sample, and J(x^adv, y) is the loss function of the recognition model at the adversarial example; the constraint ‖x^adv − x^ori‖∞ ≤ ε expresses that the distance between the adversarial example and the original sample under the ℓ∞ norm is not greater than the threshold ε.
However, the existing adversarial example generation model above only optimizes against the current recognition model on the current input x^ori, so that the generated adversarial example x^adv is overly sensitive to the attacked network (recognition model) f(x) and transfers poorly. To make the adversarial example insensitive to the current model and further enhance its transferability, the present invention constructs the following attack model: x^adv = argmax_x Σ_{i,j} w_ij J(T_ij(x), y), s.t. ‖x^adv − x^ori‖∞ ≤ ε, where T_ij denotes translating the original sample by i pixels along the x axis and j pixels along the y axis, and w_ij denotes the weight of the loss function corresponding to the translated picture.
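The attack model above sums the weighted losses of the recognition model over translated copies of one picture. The following is a minimal sketch of evaluating that objective, where `np.roll` is a wrap-around stand-in for the translation operator T_ij (real attacks typically pad with zeros instead of wrapping) and the loss function is a toy placeholder, not a real recognition model:

```python
import numpy as np

def translate(x, i, j):
    """Wrap-around stand-in for T_ij: shift i pixels along the x axis
    and j pixels along the y axis."""
    return np.roll(np.roll(x, i, axis=1), j, axis=0)

def ti_objective(x, y, loss_fn, weights, k):
    """Evaluate sum_{i,j} w_ij * J(T_ij(x), y) for i, j in {-k, ..., k}."""
    total = 0.0
    for i in range(-k, k + 1):
        for j in range(-k, k + 1):
            total += weights[i + k, j + k] * loss_fn(translate(x, i, j), y)
    return total

k = 1
weights = np.full((2 * k + 1, 2 * k + 1), 1.0 / (2 * k + 1) ** 2)  # uniform weights
x = np.arange(16.0).reshape(4, 4)  # toy 4x4 "picture"
loss = ti_objective(x, None, lambda img, y: img.sum(), weights, k)
# A shift-invariant toy loss plus weights summing to 1 reproduces the
# plain loss of the untranslated picture.
assert np.isclose(loss, x.sum())
```

Maximizing this objective instead of the single-picture loss is what makes the resulting adversarial example less tied to one particular alignment of the input.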
In one embodiment of the present embodiment, the number of pixels (the distance) by which the picture can be translated is the same in every direction, i.e. i, j ∈ {−k, …, 0, …, k}, where k is the maximum translatable distance in the positive direction along either axis.
It is understood that in one embodiment of the present embodiment, the transformation information of the original sample and the weights of the loss functions of the recognition model on the pictures obtained by transforming the original sample according to the transformation information are not constants, but variables that can change with the input. Specifically, the allowable range of the transformation information can be supplied externally, and the weight information can be configured according to the input transformation information. It is understood that the attack model can preset multiple transformation ranges and weight configurations adapted to different application scenarios or picture categories, and dynamically select among them according to the actual application scenario or picture category when generating adversarial examples.
After the attack model is built, step S120 can be executed: generate an adversarial example based on the original sample using the attack model.
Two ways of generating the adversarial example are provided in the present embodiment. Specifically, in one embodiment of the present embodiment, step S120 comprises:
transforming the original sample into corresponding pictures according to the transformation information;
obtaining, based on the transformed pictures, the gradient information used to generate the adversarial example;
generating the adversarial example based on the gradient information used to generate the adversarial example and the original sample.
In the present embodiment, after the model is established, a group of transformed pictures is first obtained from the original sample (the original sample may be a picture sample input by the user) and its transformation information; the gradient information used to generate the adversarial example is then obtained from the transformed pictures. Specifically, in the present embodiment, the gradient information of all the pictures is first obtained from the transformed pictures, and the gradient information of all the pictures is then weighted to obtain the gradient information used to generate the adversarial example, wherein the weights sum to 1. For example, the original sample x^ori yields a group of pictures x1, x2, x3, x4, x5, x6, x7, x8 and x9 after geometric (translation) transformations based on a group of transformation information; the gradients ∇J(x1, y), …, ∇J(x9, y) are then computed one by one. If the loss function weights corresponding to pictures x1, …, x9 are λ1, …, λ9, the gradient used to generate the adversarial example is Σ_{n=1}^{9} λn ∇J(xn, y).
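The weighting step just described can be sketched directly: compute one gradient per translated picture and combine them with weights summing to 1. The quadratic toy loss and its closed-form gradient below are illustrative assumptions standing in for the recognition model's loss:

```python
import numpy as np

def grad_loss(x):
    """Gradient of the toy loss J(x) = 0.5 * sum(x**2), which is simply x."""
    return x

def weighted_gradient(pictures, lambdas):
    """g = sum_n lambda_n * grad J(x_n), with the weights summing to 1."""
    assert np.isclose(sum(lambdas), 1.0)
    return sum(lam * grad_loss(p) for lam, p in zip(lambdas, pictures))

# Nine "translated" copies of a 3x3 picture, equally weighted, mirroring the
# x1 ... x9 example above (row rolls stand in for the translations).
base = np.arange(9.0).reshape(3, 3)
pictures = [np.roll(base, n, axis=0) for n in range(9)]
lambdas = [1.0 / 9] * 9
g = weighted_gradient(pictures, lambdas)
assert g.shape == (3, 3)
```

With equal weights and this linear toy gradient, the result is simply the mean of the per-picture gradients, which is the special case of uniform weighting.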
Next, the adversarial example is generated based on the gradient information used to generate the adversarial example and the original sample. The gradient information obtained according to the present embodiment can be applied to any gradient-based adversarial example generation method. If the adversarial example generation model uses the Fast Gradient Sign Method (FGSM), x^adv = x^ori + ε · sign(∇_x J(x^ori, y)), then, with g denoting the gradient information used to generate the adversarial example, the adversarial example is generated as: x^adv = x^ori + ε · sign(g).
Computing the gradient information used to generate the adversarial example according to the method of the previous embodiment requires computing the gradient information of all the pictures. If the transformation information is i, j ∈ {−k, …, 0, …, k}, with k the maximum translatable distance in the positive direction along either axis, then (2k+1)² pictures must be processed, and if k is large the amount of computation is very great. In addition, the recognition model attacked by the adversarial example usually has translation invariance: when identifying the object in a picture, the recognition model can always identify it no matter where in the picture the object appears. Therefore a translated picture T_ij(x) is hardly different from the untranslated picture x as far as the recognition model is concerned, and neither is its gradient, i.e. ∇_x J(T_ij(x), y) |_{x = x̂} ≈ T_{−i,−j}(∇_x J(x, y) |_{x = x̂}).
Therefore, in one embodiment of the present embodiment, the objective function (attack model) can be differentiated with respect to the input, i.e. g = ∇_x Σ_{i,j} w_ij J(T_ij(x), y) |_{x = x̂} = Σ_{i,j} w_ij ∇_x J(T_ij(x), y) |_{x = x̂} ≈ Σ_{i,j} w_ij T_{−i,−j}(∇_x J(x, y) |_{x = x̂}), where x is the variable denoting the sample and x̂ is the current sample.
From the derivation above it can be seen that, in the present embodiment, it is only necessary to compute the gradient of the loss function of the recognition model at the untranslated image x̂ and then take the weighted average over its translations. This process is equivalent to convolving the gradient with a convolution kernel; that is, in the present embodiment, a preset convolution kernel can be convolved with the gradient of the picture x̂ to obtain the gradient information used to generate the adversarial example, the weights corresponding to the elements of the kernel: g ≈ W * ∇_x J(x, y) |_{x = x̂}, where W denotes a convolution kernel of size (2k+1) × (2k+1) with elements W_{i,j} = w_{−i,−j}.
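The equivalence between the weighted sum of translated gradients and a convolution of the gradient with the kernel W, W_{i,j} = w_{−i,−j}, can be checked numerically. This sketch uses a wrap-around shift as a stand-in for T_ij, so the convolution is circular; a zero-padded translation would need matching boundary handling:

```python
import numpy as np

def translate(g, i, j):
    """Wrap-around stand-in for T_ij (shift i pixels along x, j along y)."""
    return np.roll(np.roll(g, i, axis=1), j, axis=0)

def weighted_translated_gradients(grad, w, k):
    """sum_{i,j} w_ij * T_{-i,-j}(grad): the approximation derived above."""
    out = np.zeros_like(grad)
    for i in range(-k, k + 1):
        for j in range(-k, k + 1):
            out += w[i + k, j + k] * translate(grad, -i, -j)
    return out

def kernel_convolution(grad, w, k):
    """Circular convolution of grad with the kernel W, where W[i,j] = w[-i,-j]."""
    out = np.zeros_like(grad)
    for i in range(-k, k + 1):
        for j in range(-k, k + 1):
            # W[i, j] = w[-i, -j], i.e. index (-i + k, -j + k) into w
            out += w[-i + k, -j + k] * translate(grad, i, j)
    return out

rng = np.random.default_rng(0)
grad = rng.normal(size=(5, 5))       # stand-in for grad J at the current sample
k = 1
w = rng.random((2 * k + 1, 2 * k + 1))
w /= w.sum()                         # weights sum to 1

a = weighted_translated_gradients(grad, w, k)
b = kernel_convolution(grad, w, k)
assert np.allclose(a, b)             # the two formulations agree exactly
```

The agreement is an index substitution (replace i, j by their negatives), which is exactly why one gradient computation plus one small convolution replaces (2k+1)² gradient computations.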
In the present embodiment, the convolution kernel is set based on the transformation information. Specifically, the kernel function of the convolution kernel is one of the uniform kernel, the linear kernel and the Gaussian kernel, where the values in the range of the uniform kernel function are all equal.
In the present embodiment, when the kernel function is the uniform kernel, the elements of the convolution kernel are set to w_ij = 1 / (2k+1)²;
when the kernel function is the linear kernel, the elements are set to w̃_ij = (1 − |i| / (k+1)) · (1 − |j| / (k+1)), normalized so that the weights sum to 1;
when the kernel function is the Gaussian kernel, the elements are set to w̃_ij = exp(−(i² + j²) / (2σ²)), normalized so that the weights sum to 1.
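A sketch of building the three kernel types just described; the normalization convention and the Gaussian bandwidth σ are assumptions where the text leaves them open (σ defaults here to k/√3 purely as an illustrative choice):

```python
import numpy as np

def make_kernel(kind, k, sigma=None):
    """Build the (2k+1) x (2k+1) weight matrix for a uniform, linear or
    Gaussian kernel, normalized so the weights sum to 1."""
    idx = np.arange(-k, k + 1)
    i, j = np.meshgrid(idx, idx, indexing="ij")
    if kind == "uniform":
        w = np.ones((2 * k + 1, 2 * k + 1))
    elif kind == "linear":
        w = (1 - np.abs(i) / (k + 1)) * (1 - np.abs(j) / (k + 1))
    elif kind == "gaussian":
        sigma = sigma if sigma is not None else k / np.sqrt(3)  # assumed default
        w = np.exp(-(i ** 2 + j ** 2) / (2 * sigma ** 2))
    else:
        raise ValueError(kind)
    return w / w.sum()

for kind in ("uniform", "linear", "gaussian"):
    w = make_kernel(kind, k=7)   # 15x15 kernel, i.e. k = 7
    assert w.shape == (15, 15)
    assert np.isclose(w.sum(), 1.0)
    assert w[7, 7] == w.max()    # the center weight is the largest
```

For the linear and Gaussian kernels the weight decays with the translation distance, matching the intuition that the gradient approximation above is less accurate for larger shifts.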
From the above it follows that step S120 comprises: obtaining, based on the original sample and the transformation information and weights, the gradient information used to generate the adversarial example, i.e. convolving the preset convolution kernel with the gradient of the picture x̂ to obtain the gradient information used to generate the adversarial example.
After the gradient information used to generate the adversarial example is obtained, the adversarial example can be generated based on that gradient information and the original sample. In one embodiment of the present embodiment, combined with the Fast Gradient Sign Method (FGSM), the adversarial example is then generated as: x^adv = x^ori + ε · sign(W * ∇_x J(x^ori, y)).
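Putting the pieces together, one translation-invariant FGSM step smooths the gradient with the kernel and then applies the FGSM update. A minimal sketch follows; the circular convolution via `np.roll` stands in for the usual zero-padded convolution, and for the symmetric kernels used in practice convolution and correlation coincide:

```python
import numpy as np

def ti_fgsm_step(x_ori, grad, kernel, eps):
    """One TI-FGSM step: x_adv = clip(x_ori + eps * sign(W * grad)).
    The kernel is applied as a circular convolution for simplicity."""
    k = kernel.shape[0] // 2
    smoothed = np.zeros_like(grad)
    for i in range(-k, k + 1):
        for j in range(-k, k + 1):
            smoothed += kernel[i + k, j + k] * np.roll(
                np.roll(grad, j, axis=0), i, axis=1)
    return np.clip(x_ori + eps * np.sign(smoothed), 0.0, 1.0)

rng = np.random.default_rng(1)
x = rng.random((8, 8))            # toy grayscale picture with values in [0, 1]
grad = rng.normal(size=(8, 8))    # stand-in for grad J(x, y) from a real model
W = np.full((3, 3), 1.0 / 9)      # uniform 3x3 kernel, i.e. k = 1

x_adv = ti_fgsm_step(x, grad, W, eps=0.05)
assert np.max(np.abs(x_adv - x)) <= 0.05 + 1e-12  # inside the l_inf ball
```

In a real attack `grad` would come from backpropagation through the white-box recognition model; the kernel smoothing is the only change relative to plain FGSM.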
In one embodiment of the present embodiment, the attack model built in the present embodiment can also be combined with the momentum-based Fast Gradient Sign Method MI-FGSM to obtain the translation-invariant momentum-based Fast Gradient Sign Method TI-MI-FGSM, or combined with the Diverse Input Method DIM to obtain the translation-invariant Diverse Input Method TI-DIM.
It is easy to understand that different kernels give different attack results, i.e. the performance of the generated adversarial examples differs. Therefore, the inventors designed an experiment that attacks recognition models with adversarial examples generated using convolution kernels of different kernel-function types. Eight recognition models were chosen as research objects in this experiment: Inc-v3ens3, Inc-v3ens4, IncRes-v2ens, HGD, R&P, JPEG, TVM and NIPS-r3, all trained on the large-scale image dataset ImageNet. In addition, four models, Inc-v3, Inc-v4, IncRes-v2 and Res-v2-152, were chosen as white-box models for generating adversarial examples. The present embodiment chooses 1000 pictures from the ImageNet validation set as research objects and measures the success rates of the different attack methods, which reflect their attack performance. The specific experimental results are shown in Fig. 3; they show that the linear kernel function and the Gaussian kernel function perform better, so in one embodiment of the present embodiment the linear kernel function or the Gaussian kernel function may be preferred. In addition, the size of the convolution kernel also influences the result, so the inventors also designed experiments to study this influence; the specific results are shown in Figs. 4 and 5, and they show that the attack success rate is highest when the kernel size is 15*15. Therefore, in one embodiment of the present embodiment, a convolution kernel of size 15*15 whose kernel function is the linear or Gaussian kernel function is preferably used; in this case k is 7, i.e. the maximum translatable distance in the positive direction along either axis is 7.
In addition, Figs. 6, 7 and 8 respectively show the attack success rates against different recognition models of adversarial examples generated by existing gradient-based methods (FGSM, MI-FGSM and DIM) and of adversarial examples generated by those methods combined with the present invention (TI-FGSM, TI-MI-FGSM and TI-DIM). The experimental results show that the attack success rates after combination with the attack model of the present invention are much higher than the original attack success rates.
In addition, Fig. 9 shows the attack success rates against ensembles of multiple models; the experimental results again show that the success rate of an existing attack method combined with the attack model of the present invention is much higher than that of the original method.
The deep neural network attack method of the present embodiment generates an adversarial example for a group of pictures consisting of a real picture and its translations, which markedly reduces the sensitivity of the generated adversarial example to the attacked model and thus markedly improves its transferability, while reducing the cost of generating adversarial examples. In addition, one embodiment of the present embodiment also proposes a simplified gradient computation method, which greatly reduces the amount of gradient computation and saves computing resources.
Exemplary apparatus
After describing the method for exemplary embodiment of the invention, next, exemplary to the present invention with reference to Figure 10 The deep neural network attack device of embodiment is illustrated, and described device includes:
Model building module 210 is configured as establishing challenge model, wherein the challenge model includes at least original sample This information converting and identification model is in the original sample according to the loss function of the transformed picture of the information converting Weight;
To resisting sample generation module 220, it is configured as generating confrontation sample based on original sample using the challenge model This.
In one embodiment of the present embodiment, the adversarial example satisfies the following conditions:
its distance from the original sample under the l∞ norm is not greater than a preset threshold ε;
it is misclassified by the recognition model as not belonging to the class of the original sample.
In one embodiment of the present embodiment, the attack model is built on any adversarial example generation model.
In one embodiment of the present embodiment, the adversarial example generation model generates the adversarial example based on the gradient of the input original sample.
In one embodiment of the present embodiment, the attack model and the adversarial example generation model maximize the loss function of the recognition model on the adversarial example, so that the adversarial example is misclassified by the recognition model as not belonging to the class of the original sample.
In one embodiment of the present embodiment, the adversarial example generation model satisfies

x^adv = argmax_x J(x, y), s.t. ‖x^adv − x^ori‖_∞ ≤ ε,

wherein x^adv is the adversarial example, x^ori is the original sample, y is the class of the original sample, and J(x^adv, y) is the loss function of the recognition model on the adversarial example;
the constraint s.t. ‖x^adv − x^ori‖_∞ ≤ ε means that the adversarial example satisfies: its distance from the original sample under the l∞ norm is not greater than the threshold ε.
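In practice, the l∞ constraint stated above is typically enforced by clipping a candidate adversarial example back into the ε-ball around the original sample. A minimal NumPy sketch, in which the array shapes and the value of ε are illustrative assumptions:

```python
import numpy as np

def project_linf(x_adv, x_ori, eps):
    """Clip x_adv elementwise so that ||x_adv - x_ori||_inf <= eps."""
    return np.clip(x_adv, x_ori - eps, x_ori + eps)

x_ori = np.zeros((4, 4))
x_adv = np.full((4, 4), 0.5)                # candidate violating the constraint
x_proj = project_linf(x_adv, x_ori, eps=0.1)
print(float(np.abs(x_proj - x_ori).max()))  # -> 0.1
```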
In one embodiment of the present embodiment, the transformation information includes the distances by which the original sample is transformed in each direction.
In one embodiment of the present embodiment, the transformation is a translation transformation.
In one embodiment of the present embodiment, the translatable range of the original sample is the same in all directions.
In one embodiment of the present embodiment, the attack model is built as

x^adv = argmax_x Σ_{i,j} w_{ij} J(T_{ij}(x), y), s.t. ‖x^adv − x^ori‖_∞ ≤ ε,

wherein T_{ij} denotes translating the original sample by i pixels along the x axis and by j pixels along the y axis, and w_{ij} denotes the weight of the loss function on the translated picture.
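The weighted objective above can be sketched by summing a loss over translated copies of the sample with weights w_ij. The quadratic "loss" below is a placeholder for the recognition model's loss, and circular shifts (np.roll) stand in for the translation T_ij; only the structure of the weighted sum follows the formula:

```python
import numpy as np

def loss(x, template):
    # placeholder loss: squared distance to a per-class template (illustrative)
    return float(np.sum((x - template) ** 2))

def weighted_translation_loss(x, template, k):
    """Sum w_ij * J(T_ij(x), y) over all shifts |i|, |j| <= k with uniform weights."""
    n = (2 * k + 1) ** 2
    total = 0.0
    for i in range(-k, k + 1):
        for j in range(-k, k + 1):
            # T_ij realized as a circular shift for simplicity
            shifted = np.roll(np.roll(x, i, axis=0), j, axis=1)
            total += (1.0 / n) * loss(shifted, template)
    return total

x = np.arange(16.0).reshape(4, 4)
template = np.zeros((4, 4))
# with this loss and a zero template, circular shifts leave J unchanged,
# so the weighted sum equals J(x, template) = 0^2 + 1^2 + ... + 15^2 = 1240
print(weighted_translation_loss(x, template, k=1))
```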
In one embodiment of the present embodiment, the adversarial example generation module comprises:
a picture transformation unit, configured to transform the original sample into the corresponding pictures according to the transformation information;
a first gradient information acquisition unit, configured to obtain, based on the transformed pictures, the gradient information for generating the adversarial example;
a first adversarial example generation unit, configured to generate the adversarial example based on the gradient information for generating the adversarial example and the original sample.
In one embodiment of the present embodiment, the first gradient information acquisition unit comprises:
a gradient information acquisition subunit, configured to obtain the gradient information of all the pictures based on the transformed pictures;
a gradient information generation subunit, configured to weight the gradient information of all the pictures to obtain the gradient information for generating the adversarial example, wherein the weights sum to 1.
In one embodiment of the present embodiment, the adversarial example generation module comprises:
a second gradient information acquisition unit, configured to obtain, based on the original sample and the transformation information and weights, the gradient information for generating the adversarial example;
a second adversarial example generation unit, configured to generate the adversarial example based on the gradient information for generating the adversarial example and the original sample.
In one embodiment of the present embodiment, the second gradient information acquisition unit obtains the gradient information for generating the adversarial example based on

Σ_{i,j} w_{ij} ∇_x J(T_{ij}(x), y) |_{x = x̂},

wherein x is the variable denoting the sample and x̂ is the current sample.
In one embodiment of the present embodiment, i, j ∈ {−k, …, 0, …, k}, and k is the maximum translatable distance in the positive direction of either axis.
In one embodiment of the present embodiment,

Σ_{i,j} w_{ij} ∇_x J(T_{ij}(x), y) |_{x = x̂}

is equivalent to

W ⊛ ∇_x J(x, y) |_{x = x̂},

wherein W denotes a convolution kernel of size (2k+1) × (2k+1) whose elements satisfy W_{i,j} = w_{−i,−j}; that is, the preset convolution kernel is convolved with the gradient of the picture x̂ to obtain the gradient information for generating the adversarial example.
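The stated equivalence can be checked numerically: under the approximation that translating the input translates the gradient the opposite way, the weighted sum of translated gradients equals a (circular) convolution of the gradient with the flipped-weight kernel W_{i,j} = w_{−i,−j}. A NumPy sketch with arbitrary weights; circular shifts are an illustrative simplification of the boundary handling:

```python
import numpy as np

k = 1
rng = np.random.default_rng(0)
g = rng.standard_normal((5, 5))   # stand-in for the gradient of J at the current sample
w = rng.random((2 * k + 1, 2 * k + 1))
w /= w.sum()                      # weights w_ij, summing to 1

# left side: sum_ij w_ij * T_{-i,-j}(g), the weighted sum of translated gradients
lhs = np.zeros_like(g)
for i in range(-k, k + 1):
    for j in range(-k, k + 1):
        lhs += w[i + k, j + k] * np.roll(np.roll(g, -i, axis=0), -j, axis=1)

# right side: circular convolution of g with W, where W[i, j] = w[-i, -j]
W = w[::-1, ::-1]
rhs = np.zeros_like(g)
for i in range(-k, k + 1):
    for j in range(-k, k + 1):
        rhs += W[i + k, j + k] * np.roll(np.roll(g, i, axis=0), j, axis=1)

print(np.allclose(lhs, rhs))  # -> True
```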
In one embodiment of the present embodiment, the convolution kernel is set based on the transformation information.
In one embodiment of the present embodiment, the convolution kernel is one of a uniform kernel, a linear kernel and a Gaussian kernel, wherein all values of the uniform kernel function are equal.
In one embodiment of the present embodiment:
when the kernel function is the uniform kernel, the elements of the convolution kernel are set to W̃_{i,j} = 1;
when the kernel function is the linear kernel, the elements of the convolution kernel are set to W̃_{i,j} = (1 − |i|/(k+1)) · (1 − |j|/(k+1));
when the kernel function is the Gaussian kernel, the elements of the convolution kernel are set to W̃_{i,j} = exp(−(i² + j²)/(2σ²)), where σ = k/√3;
in each case the kernel is then normalized as W = W̃ / Σ_{i,j} W̃_{i,j} so that its elements sum to 1.
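The three kernels can be generated and normalized as follows. The exact expressions and the default σ = k/√3 are assumptions borrowed from the translation-invariant attack literature by the same inventors, since the passage names the kernels without printing the formulas:

```python
import numpy as np

def make_kernel(kind, k, sigma=None):
    """Build the (2k+1) x (2k+1) weight kernel and normalize it to sum to 1."""
    size = 2 * k + 1
    idx = np.arange(-k, k + 1)
    if kind == "uniform":
        W = np.ones((size, size))
    elif kind == "linear":
        row = 1.0 - np.abs(idx) / (k + 1)
        W = np.outer(row, row)
    elif kind == "gaussian":
        s = sigma if sigma is not None else k / np.sqrt(3)  # assumed default
        row = np.exp(-idx.astype(float) ** 2 / (2 * s ** 2))
        W = np.outer(row, row)
    else:
        raise ValueError(kind)
    return W / W.sum()

for kind in ("uniform", "linear", "gaussian"):
    W = make_kernel(kind, k=7)   # 15x15 kernel, the size preferred above
    print(kind, W.shape, round(float(W.sum()), 6))
```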
In one embodiment of the present embodiment, the adversarial example generation model uses the fast gradient sign method (FGSM): x^adv = x^ori + ε · sign(∇_x J(x^ori, y)).
In one embodiment of the present embodiment, the second adversarial example generation unit is configured to generate the adversarial example by x^adv = x^ori + ε · sign(W ⊛ ∇_x J(x^ori, y)).
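Putting the pieces together, one translation-invariant FGSM update of the form x^adv = x^ori + ε · sign(W ⊛ ∇J) can be sketched as follows. The all-ones "gradient" is an illustrative placeholder for a real model gradient, and circular convolution is used for simplicity:

```python
import numpy as np

def smoothed_sign_grad(grad, W):
    """Convolve the gradient with kernel W (circularly) and take the sign."""
    k = W.shape[0] // 2
    sm = np.zeros_like(grad)
    for i in range(-k, k + 1):
        for j in range(-k, k + 1):
            sm += W[i + k, j + k] * np.roll(np.roll(grad, i, axis=0), j, axis=1)
    return np.sign(sm)

def ti_fgsm_step(x_ori, grad, W, eps):
    """One TI-FGSM step: x_adv = x_ori + eps * sign(W convolved with grad)."""
    return x_ori + eps * smoothed_sign_grad(grad, W)

k = 1
W = np.full((2 * k + 1, 2 * k + 1), 1.0 / (2 * k + 1) ** 2)  # uniform kernel
x_ori = np.zeros((4, 4))
grad = np.ones((4, 4))                       # stand-in gradient of the loss
x_adv = ti_fgsm_step(x_ori, grad, W, eps=0.1)
print(float(np.abs(x_adv - x_ori).max()))    # -> 0.1, the step stays in the eps-ball
```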
Exemplary media
Having described the method and apparatus of the exemplary embodiments of the present invention, the computer-readable storage medium of the exemplary embodiment of the present invention is next described with reference to Figure 11. The computer-readable storage medium shown in Figure 11 is an optical disc 110 on which a computer program (i.e., a program product) is stored; when the computer program is run by a processor, it implements the steps described in the above method embodiments, for example: building an attack model, wherein the attack model includes at least transformation information of an original sample and the weight of the loss function of a recognition model on each picture obtained by transforming the original sample according to the transformation information; and generating an adversarial example based on the original sample using the attack model. The specific implementation of each step is not repeated here.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, and other optical or magnetic storage media, which are not enumerated here one by one.
Exemplary computing device
After the method, apparatus and medium for describing exemplary embodiment of the invention, next, with reference to Figure 12 pairs The calculating equipment of exemplary embodiment of the invention is illustrated, and Figure 12, which is shown, to be suitable for being used to realizing embodiment of the present invention Exemplary computer device 120 block diagram, which can be computer system or server.The meter that Figure 12 is shown Calculating equipment 120 is only an example, should not function to the embodiment of the present invention and use scope bring any restrictions.
As shown in figure 12, calculate equipment 120 component can include but is not limited to: one or more processor or Processing unit 1201, system storage 1202 connect different system components (including system storage 1202 and processing unit 1201) bus 1203.
It calculates equipment 120 and typically comprises a variety of computer system readable media.These media can be it is any can be by Calculate the usable medium that equipment 120 accesses, including volatile and non-volatile media, moveable and immovable medium.
System storage 1202 may include the computer system readable media of form of volatile memory, such as at random Access memory (RAM) 12021 and/or cache memory 12022.Calculate equipment 120 may further include it is other can Movement/immovable, volatile/non-volatile computer system storage medium.Only as an example, ROM12023 can be used In the immovable, non-volatile magnetic media (not shown in Figure 12, commonly referred to as " hard disk drive ") of read-write.Although not existing It is shown in Figure 12, the disc driver for reading and writing to removable non-volatile magnetic disk (such as " floppy disk ") can be provided, with And the CD drive to removable anonvolatile optical disk (such as CD-ROM, DVD-ROM or other optical mediums) read-write.? In the case of these, each driver can be connected by one or more data media interfaces with bus 1203.System storage It may include at least one program product in device 1202, which has one group of (for example, at least one) program module, this A little program modules are configured to perform the function of various embodiments of the present invention.
Program/utility 12025 with one group of (at least one) program module 12024, can store and be for example Unite in memory 1202, and such program module 12024 includes but is not limited to: operating system, one or more apply journey It may include network environment in sequence, other program modules and program data, each of these examples or certain combination It realizes.Program module 12024 usually executes function and/or method in embodiment described in the invention.
Calculating equipment 120 can also be with one or more external equipment 1204 (such as keyboard, sensing equipment, display) Communication.This communication can be carried out by input/output (I/O) interface 1205.Also, net can also be passed through by calculating equipment 120 Network adapter 1206 and one or more network (such as local area network (LAN), wide area network (WAN) and/or public network, such as Internet) communication.As shown in figure 12, network adapter 1206 by bus 1203 and calculates other modules of equipment 120 (such as Processing unit 1201 etc.) communication.It should be understood that can be used in conjunction with calculating equipment 120 other hard although being not shown in Figure 12 Part and/or software module.
The processing unit 1201 executes various functional applications and data processing by running the programs stored in the system memory 1202, for example: building an attack model, wherein the attack model includes at least transformation information of an original sample and the weight of the loss function of a recognition model on each picture obtained by transforming the original sample according to the transformation information; and generating an adversarial example based on the original sample using the attack model. The specific implementation of each step is not repeated here. It should be noted that although several units/modules or sub-units/sub-modules of the deep neural network attack apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied in multiple units/modules.
In addition, although the operations of the method of the present invention are described in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
Although the spirit and principles of the present invention have been described with reference to several specific embodiments, it should be understood that the present invention is not limited to the specific embodiments disclosed, and the division into aspects does not mean that the features in these aspects cannot be combined to advantage; such division is merely for convenience of presentation. The present invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Through the above description, the embodiments of the present invention provide the following schemes, but are not limited thereto:
Scheme 1. A deep neural network attack method, comprising:
building an attack model, wherein the attack model includes at least transformation information of an original sample and the weight of the loss function of a recognition model on each picture obtained by transforming the original sample according to the transformation information;
generating an adversarial example based on the original sample using the attack model.
Scheme 2. The method of scheme 1, wherein the adversarial example satisfies:
its distance from the original sample under the l∞ norm is not greater than a preset threshold ε;
it is misclassified by the recognition model as not belonging to the class of the original sample.
Scheme 3. The method of scheme 2, wherein the attack model is built on any adversarial example generation model.
Scheme 4. The method of scheme 3, wherein the adversarial example generation model generates the adversarial example based on the gradient of the input original sample.
Scheme 5. The method of scheme 4, wherein the attack model and the adversarial example generation model maximize the loss function of the recognition model on the adversarial example, so that the adversarial example is misclassified by the recognition model as not belonging to the class of the original sample.
Scheme 6. The method of any of schemes 1-5, wherein the adversarial example generation model satisfies

x^adv = argmax_x J(x, y), s.t. ‖x^adv − x^ori‖_∞ ≤ ε,

wherein x^adv is the adversarial example, x^ori is the original sample, y is the class of the original sample, and J(x^adv, y) is the loss function of the recognition model on the adversarial example; the constraint s.t. ‖x^adv − x^ori‖_∞ ≤ ε means that the distance between the adversarial example and the original sample under the l∞ norm is not greater than the threshold ε.
Scheme 7. The method of scheme 6, wherein the transformation information includes the distances by which the original sample is transformed in each direction.
Scheme 8. The method of scheme 7, wherein the transformation is a translation transformation.
Scheme 9. The method of scheme 8, wherein the translatable range of the original sample is the same in all directions.
Scheme 10. The method of scheme 8 or 9, wherein the attack model is built as

x^adv = argmax_x Σ_{i,j} w_{ij} J(T_{ij}(x), y), s.t. ‖x^adv − x^ori‖_∞ ≤ ε,

wherein T_{ij} denotes translating the original sample by i pixels along the x axis and by j pixels along the y axis, and w_{ij} denotes the weight of the loss function on the translated picture.
Scheme 11. The method of scheme 8 or 9, wherein generating an adversarial example based on the original sample using the attack model comprises:
transforming the original sample into the corresponding pictures according to the transformation information;
obtaining, based on the transformed pictures, the gradient information for generating the adversarial example;
generating the adversarial example based on the gradient information for generating the adversarial example and the original sample.
Scheme 12. The method of scheme 11, wherein obtaining, based on the transformed pictures, the gradient information for generating the adversarial example comprises:
obtaining the gradient information of all the pictures based on the transformed pictures;
weighting the gradient information of all the pictures to obtain the gradient information for generating the adversarial example, wherein the weights sum to 1.
Scheme 13. The method of scheme 10, wherein generating an adversarial example based on the original sample using the attack model comprises:
obtaining, based on the original sample and the transformation information and weights, the gradient information for generating the adversarial example;
generating the adversarial example based on the gradient information for generating the adversarial example and the original sample.
Scheme 14. The method of scheme 13, wherein the gradient information for generating the adversarial example is obtained based on the original sample and the transformation information and weights as

Σ_{i,j} w_{ij} ∇_x J(T_{ij}(x), y) |_{x = x̂},

wherein x is the variable denoting the sample and x̂ is the current sample.
Scheme 15. The method of scheme 14, wherein i, j ∈ {−k, …, 0, …, k}, and k is the maximum translatable distance in the positive direction of either axis.
Scheme 16. The method of scheme 15, wherein

Σ_{i,j} w_{ij} ∇_x J(T_{ij}(x), y) |_{x = x̂}

is equivalent to

W ⊛ ∇_x J(x, y) |_{x = x̂},

wherein W denotes a convolution kernel of size (2k+1) × (2k+1) whose elements satisfy W_{i,j} = w_{−i,−j}; that is, the preset convolution kernel is convolved with the gradient of the picture x̂ to obtain the gradient information for generating the adversarial example.
Scheme 17. The method of scheme 16, wherein the convolution kernel is set based on the transformation information.
Scheme 18. The method of scheme 17, wherein the kernel function of the convolution kernel is one of a uniform kernel, a linear kernel and a Gaussian kernel, wherein all values of the uniform kernel function are equal.
Scheme 19. The method of scheme 18, wherein:
when the kernel function is the uniform kernel, the elements of the convolution kernel are set to W̃_{i,j} = 1;
when the kernel function is the linear kernel, the elements of the convolution kernel are set to W̃_{i,j} = (1 − |i|/(k+1)) · (1 − |j|/(k+1));
when the kernel function is the Gaussian kernel, the elements of the convolution kernel are set to W̃_{i,j} = exp(−(i² + j²)/(2σ²)), where σ = k/√3;
in each case the kernel is then normalized as W = W̃ / Σ_{i,j} W̃_{i,j}.
Scheme 20. The method of any of schemes 16-19, wherein the adversarial example generation model uses the fast gradient sign method (FGSM): x^adv = x^ori + ε · sign(∇_x J(x^ori, y)).
Scheme 21. The method of scheme 20, wherein the adversarial example is generated based on the gradient information for generating the adversarial example and the original sample as: x^adv = x^ori + ε · sign(W ⊛ ∇_x J(x^ori, y)).
Scheme 22. A deep neural network attack apparatus, comprising:
a model building module, configured to build an attack model, wherein the attack model includes at least transformation information of an original sample and the weight of the loss function of a recognition model on each picture obtained by transforming the original sample according to the transformation information;
an adversarial example generation module, configured to generate an adversarial example based on the original sample using the attack model.
Scheme 23. The apparatus of scheme 22, wherein the adversarial example satisfies the following conditions:
its distance from the original sample under the l∞ norm is not greater than a preset threshold ε;
it is misclassified by the recognition model as not belonging to the class of the original sample.
Scheme 24. The apparatus of scheme 23, wherein the attack model is built on any adversarial example generation model.
Scheme 25. The apparatus of scheme 24, wherein the adversarial example generation model generates the adversarial example based on the gradient of the input original sample.
Scheme 26. The apparatus of scheme 25, wherein the attack model and the adversarial example generation model maximize the loss function of the recognition model on the adversarial example, so that the adversarial example is misclassified by the recognition model as not belonging to the class of the original sample.
Scheme 27. The apparatus of any of schemes 22-26, wherein the adversarial example generation model satisfies

x^adv = argmax_x J(x, y), s.t. ‖x^adv − x^ori‖_∞ ≤ ε,

wherein x^adv is the adversarial example, x^ori is the original sample, y is the class of the original sample, and J(x^adv, y) is the loss function of the recognition model on the adversarial example; the constraint s.t. ‖x^adv − x^ori‖_∞ ≤ ε means that the distance between the adversarial example and the original sample under the l∞ norm is not greater than the threshold ε.
Scheme 28. The apparatus of scheme 27, wherein the transformation information includes the distances by which the original sample is transformed in each direction.
Scheme 29. The apparatus of scheme 28, wherein the transformation is a translation transformation.
Scheme 30. The apparatus of scheme 29, wherein the translatable range of the original sample is the same in all directions.
Scheme 31. The apparatus of scheme 29 or 30, wherein the attack model is built as

x^adv = argmax_x Σ_{i,j} w_{ij} J(T_{ij}(x), y), s.t. ‖x^adv − x^ori‖_∞ ≤ ε,

wherein T_{ij} denotes translating the original sample by i pixels along the x axis and by j pixels along the y axis, and w_{ij} denotes the weight of the loss function on the translated picture.
Scheme 32. The apparatus of scheme 29 or 30, wherein the adversarial example generation module comprises:
a picture transformation unit, configured to transform the original sample into the corresponding pictures according to the transformation information;
a first gradient information acquisition unit, configured to obtain, based on the transformed pictures, the gradient information for generating the adversarial example;
a first adversarial example generation unit, configured to generate the adversarial example based on the gradient information for generating the adversarial example and the original sample.
Scheme 33. The apparatus of scheme 32, wherein the first gradient information acquisition unit comprises:
a gradient information acquisition subunit, configured to obtain the gradient information of all the pictures based on the transformed pictures;
a gradient information generation subunit, configured to weight the gradient information of all the pictures to obtain the gradient information for generating the adversarial example, wherein the weights sum to 1.
Scheme 34. The apparatus of scheme 31, wherein the adversarial example generation module comprises:
a second gradient information acquisition unit, configured to obtain, based on the original sample and the transformation information and weights, the gradient information for generating the adversarial example;
a second adversarial example generation unit, configured to generate the adversarial example based on the gradient information for generating the adversarial example and the original sample.
Scheme 35. The apparatus of scheme 34, wherein the second gradient information acquisition unit obtains the gradient information for generating the adversarial example based on

Σ_{i,j} w_{ij} ∇_x J(T_{ij}(x), y) |_{x = x̂},

wherein x is the variable denoting the sample and x̂ is the current sample.
Scheme 36. The apparatus of scheme 35, wherein i, j ∈ {−k, …, 0, …, k}, and k is the maximum translatable distance in the positive direction of either axis.
Scheme 37. The apparatus of scheme 36, wherein

Σ_{i,j} w_{ij} ∇_x J(T_{ij}(x), y) |_{x = x̂}

is equivalent to

W ⊛ ∇_x J(x, y) |_{x = x̂},

wherein W denotes a convolution kernel of size (2k+1) × (2k+1) whose elements satisfy W_{i,j} = w_{−i,−j}; that is, the preset convolution kernel is convolved with the gradient of the picture x̂ to obtain the gradient information for generating the adversarial example.
Scheme 38. The apparatus of scheme 37, wherein the convolution kernel is set based on the transformation information.
Scheme 39. The apparatus of scheme 38, wherein the convolution kernel is one of a uniform kernel, a linear kernel and a Gaussian kernel, wherein all values of the uniform kernel function are equal.
Scheme 40. The apparatus of scheme 39, wherein:
when the kernel function is the uniform kernel, the elements of the convolution kernel are set to W̃_{i,j} = 1;
when the kernel function is the linear kernel, the elements of the convolution kernel are set to W̃_{i,j} = (1 − |i|/(k+1)) · (1 − |j|/(k+1));
when the kernel function is the Gaussian kernel, the elements of the convolution kernel are set to W̃_{i,j} = exp(−(i² + j²)/(2σ²)), where σ = k/√3;
in each case the kernel is then normalized as W = W̃ / Σ_{i,j} W̃_{i,j}.
Scheme 41. The apparatus of any of schemes 37-40, wherein the adversarial example generation model uses the fast gradient sign method (FGSM): x^adv = x^ori + ε · sign(∇_x J(x^ori, y)).
Scheme 42. The apparatus of scheme 41, wherein the second adversarial example generation unit is configured to generate the adversarial example by x^adv = x^ori + ε · sign(W ⊛ ∇_x J(x^ori, y)).
Scheme 43. A computer-readable storage medium storing program code which, when executed by a processor, implements the method of any one of schemes 1-21.
Scheme 44. A computing device, comprising a processor and a storage medium storing program code which, when executed by the processor, implements the method of any one of schemes 1-21.

Claims (10)

1. A deep neural network attack method, comprising:
building an attack model, wherein the attack model includes at least transformation information of an original sample and the weight of the loss function of a recognition model on each picture obtained by transforming the original sample according to the transformation information;
generating an adversarial example based on the original sample using the attack model.
2. The method of claim 1, wherein the adversarial example satisfies:
its distance from the original sample under the l∞ norm is not greater than a preset threshold ε;
it is misclassified by the recognition model as not belonging to the class of the original sample.
3. The method of claim 2, wherein the attack model is built on any adversarial example generation model.
4. The method of claim 3, wherein the adversarial example generation model generates the adversarial example based on the gradient of the input original sample.
5. A deep neural network attack apparatus, comprising:
a model building module, configured to build an attack model, wherein the attack model includes at least transformation information of an original sample and the weight of the loss function of a recognition model on each picture obtained by transforming the original sample according to the transformation information;
an adversarial example generation module, configured to generate an adversarial example based on the original sample using the attack model.
6. The apparatus of claim 5, wherein the adversarial example satisfies the following conditions:
its distance from the original sample under the l∞ norm is not greater than a preset threshold ε;
it is misclassified by the recognition model as not belonging to the class of the original sample.
7. The apparatus of claim 6, wherein the attack model is built on any adversarial example generation model.
8. The apparatus of claim 7, wherein the adversarial example generation model generates the adversarial example based on the gradient of the input original sample.
9. A computer-readable storage medium storing program code which, when executed by a processor, implements the method of any one of claims 1-4.
10. A computing device, comprising a processor and a storage medium storing program code which, when executed by the processor, implements the method of any one of claims 1-4.
CN201910329772.2A 2019-04-23 2019-04-23 Deep neural network attack method, device, medium and calculating equipment Pending CN110084002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910329772.2A CN110084002A (en) 2019-04-23 2019-04-23 Deep neural network attack method, device, medium and calculating equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910329772.2A CN110084002A (en) 2019-04-23 2019-04-23 Deep neural network attack method, device, medium and calculating equipment

Publications (1)

Publication Number Publication Date
CN110084002A true CN110084002A (en) 2019-08-02

Family

ID=67416165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910329772.2A Pending CN110084002A (en) 2019-04-23 2019-04-23 Deep neural network attack method, device, medium and calculating equipment

Country Status (1)

Country Link
CN (1) CN110084002A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689500A (en) * 2019-09-29 2020-01-14 北京达佳互联信息技术有限公司 Face image processing method and device, electronic equipment and storage medium
CN111625820A (en) * 2020-05-29 2020-09-04 华东师范大学 Federated defense method based on AIoT-oriented security
CN111737691A (en) * 2020-07-24 2020-10-02 支付宝(杭州)信息技术有限公司 Method and device for generating adversarial samples
CN111881034A (en) * 2020-07-23 2020-11-03 深圳慕智科技有限公司 Adversarial sample generation method based on distance
CN112035834A (en) * 2020-08-28 2020-12-04 北京推想科技有限公司 Adversarial training method and device, and application method and device of neural network model
CN112464230A (en) * 2020-11-16 2021-03-09 电子科技大学 Black box attack type defense system and method based on neural network intermediate layer regularization
WO2021042665A1 (en) * 2019-09-04 2021-03-11 笵成科技南京有限公司 DNN-based method for protecting passport against fuzzy attack
CN113066002A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Adversarial sample generation method, neural network training method, training device and equipment
CN113222480A (en) * 2021-06-11 2021-08-06 支付宝(杭州)信息技术有限公司 Training method and device for adversarial sample generation model
CN113378118A (en) * 2020-03-10 2021-09-10 百度在线网络技术(北京)有限公司 Method, apparatus, electronic device, and computer storage medium for processing image data
CN113407939A (en) * 2021-06-17 2021-09-17 电子科技大学 Automatic substitute model selection method for black-box attacks, storage medium and terminal
CN114707661A (en) * 2022-04-13 2022-07-05 支付宝(杭州)信息技术有限公司 Adversarial training method and system
CN115115905A (en) * 2022-06-13 2022-09-27 苏州大学 Highly transferable image adversarial sample generation method based on a generative model

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021042665A1 (en) * 2019-09-04 2021-03-11 笵成科技南京有限公司 DNN-based method for protecting passport against fuzzy attack
CN110689500A (en) * 2019-09-29 2020-01-14 北京达佳互联信息技术有限公司 Face image processing method and device, electronic equipment and storage medium
CN110689500B (en) * 2019-09-29 2022-05-24 北京达佳互联信息技术有限公司 Face image processing method and device, electronic equipment and storage medium
CN113378118A (en) * 2020-03-10 2021-09-10 百度在线网络技术(北京)有限公司 Method, apparatus, electronic device, and computer storage medium for processing image data
CN113378118B (en) * 2020-03-10 2023-08-22 百度在线网络技术(北京)有限公司 Method, apparatus, electronic device, and computer storage medium for processing image data
CN111625820A (en) * 2020-05-29 2020-09-04 华东师范大学 Federated defense method based on AIoT-oriented security
CN111881034A (en) * 2020-07-23 2020-11-03 深圳慕智科技有限公司 Adversarial sample generation method based on distance
CN111737691A (en) * 2020-07-24 2020-10-02 支付宝(杭州)信息技术有限公司 Method and device for generating adversarial samples
CN112035834A (en) * 2020-08-28 2020-12-04 北京推想科技有限公司 Adversarial training method and device, and application method and device of neural network model
CN112464230B (en) * 2020-11-16 2022-05-17 电子科技大学 Black box attack type defense system and method based on neural network intermediate layer regularization
CN112464230A (en) * 2020-11-16 2021-03-09 电子科技大学 Black box attack type defense system and method based on neural network intermediate layer regularization
CN113066002A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Adversarial sample generation method, neural network training method, training device and equipment
CN113222480A (en) * 2021-06-11 2021-08-06 支付宝(杭州)信息技术有限公司 Training method and device for adversarial sample generation model
CN113222480B (en) * 2021-06-11 2023-05-12 支付宝(杭州)信息技术有限公司 Training method and device for adversarial sample generation model
CN113407939A (en) * 2021-06-17 2021-09-17 电子科技大学 Automatic substitute model selection method for black-box attacks, storage medium and terminal
CN114707661A (en) * 2022-04-13 2022-07-05 支付宝(杭州)信息技术有限公司 Adversarial training method and system
CN115115905A (en) * 2022-06-13 2022-09-27 苏州大学 Highly transferable image adversarial sample generation method based on a generative model

Similar Documents

Publication Publication Date Title
CN110084002A (en) Deep neural network attack method, device, medium and computing device
CN109522942B (en) Image classification method and device, terminal equipment and storage medium
CN111598214B (en) Cross-modal retrieval method based on graph convolution neural network
Mallya et al. Learning informative edge maps for indoor scene layout prediction
Goodfellow et al. Multi-digit number recognition from street view imagery using deep convolutional neural networks
CN105303179A (en) Fingerprint identification method and fingerprint identification device
CN109117480A (en) Word prediction technique, device, computer equipment and storage medium
CN103279746B (en) Face recognition method and system based on support vector machine
CN110826609B (en) Double-current feature fusion image identification method based on reinforcement learning
CN105184260A (en) Image characteristic extraction method, pedestrian detection method and device
CN109145083B (en) Candidate answer selecting method based on deep learning
CN110363830B (en) Element image generation method, device and system
CN115860091B (en) Depth feature descriptor learning method based on orthogonal constraint
CN104881684A (en) Objective evaluation method for stereo image quality
CN104680190B (en) Object detection method and device
CN113409157B (en) Cross-social network user alignment method and device
CN110020593A (en) Information processing method and device, medium and computing device
Hou et al. Robust dense registration of partial nonrigid shapes
CN109255377A (en) Instrument recognition method, device, electronic equipment and storage medium
CN117315090A (en) Cross-modal style learning-based image generation method and device
CN115984400A (en) Automatic image generation method and system based on hand-drawn sketch
Dong et al. Scene-oriented hierarchical classification of blurry and noisy images
CN113010687B (en) Exercise label prediction method and device, storage medium and computer equipment
CN111126617B (en) Method, device and equipment for selecting fusion model weight parameters
CN117746075B (en) Zero sample image retrieval method and device based on fine texture features and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20190802