CN112949678A - Method, system, device and storage medium for generating adversarial samples for a deep learning model - Google Patents

Method, system, device and storage medium for generating adversarial samples for a deep learning model

Info

Publication number
CN112949678A
Authority
CN
China
Prior art keywords
disturbance
norm
matrix
deep learning
original image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110049467.5A
Other languages
Chinese (zh)
Other versions
CN112949678B
Inventor
蔺琛皓
朱炯历
沈超
管晓宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202110049467.5A priority Critical patent/CN112949678B/en
Publication of CN112949678A publication Critical patent/CN112949678A/en
Application granted granted Critical
Publication of CN112949678B publication Critical patent/CN112949678B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioethics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Algebra (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of deep learning models and discloses a method, a system, a device and a storage medium for generating adversarial samples for a deep learning model. By acquiring a sensitivity matrix and applying the perturbation on the basis of that matrix, the distribution of perturbed pixels becomes sparse, so that from the point of view of human observation the perturbation is less easily perceived and, from a quantitative point of view, the L2 norm of the adversarial sample is greatly reduced.

Description

Method, system, device and storage medium for generating adversarial samples for a deep learning model
Technical Field
The invention belongs to the field of deep learning models and relates to a method, a system, a device and a storage medium for generating adversarial samples for a deep learning model.
Background
Deep learning outperforms traditional machine learning in many tasks, such as image classification, object detection, speech recognition and natural language processing. With the wide application and development of deep neural networks in various fields, their security problems have attracted more and more attention. An adversarial attack means that an attacker constructs a targeted input that makes a deep learning model produce a misjudgment, i.e., a judgment inconsistent with that of a human. In general, the attacker adds an imperceptible perturbation to a benign sample so that the deep learning model outputs a result inconsistent with the benign input; such a "malignant" sample is called an adversarial sample.
In image classification, the perturbation of an adversarial attack slightly changes the value of each pixel in an image so that the deep learning model misclassifies it. Adversarial attacks are divided into black-box and white-box cases according to how much information the attacker can obtain. In the white-box case the attacker can obtain the internal parameters of the deep learning model, including its structure and weights, whereas the black-box case is closer to practical conditions and the attacker can only obtain the output vector of the model. In a black-box adversarial attack the gradient of the deep learning model is not directly available, so methods that optimize with evolution strategies can perform relatively well. An evolution strategy solves a parameter optimization problem by simulating the principle of biological evolution: new individuals are continuously generated, their fitness is compared and the less fit are eliminated, finally yielding individuals with higher fitness. Since the optimization process of an evolutionary algorithm needs no gradient information, it is not constrained by the black-box condition. In image classification, adversarial attacks are further classified as targeted or non-targeted according to the misjudgment of the deep learning model: if the classification result merely differs from the original class it is a non-targeted attack, and if the model misjudges the image as an arbitrarily chosen target class it is a targeted attack.
When measuring the generation efficiency of adversarial samples in the black-box case, the number of queries to the deep learning model is usually used as the index. When measuring how "imperceptible" an adversarial sample is, the zero norm (the number of perturbed pixels), the L2 norm (the square root of the sum of squared perturbation values over all pixels) and the infinity norm (the maximum perturbation value among all pixels) of the added perturbation are usually used as indices. Under a limited infinity norm, evolutionary algorithms are usually relatively efficient. However, because of redundant perturbation, i.e., pixels that do not need to be perturbed at all, the L2 norm of the adversarial samples generated by an evolutionary algorithm is generally high and the perturbed points are widely distributed, so the perturbation is easily perceived by the human eye. If the L2 norm is simply added to the fitness, it is difficult to control the balance between the original fitness and the L2 norm: either the optimization of the attack slows down, or the L2 norm term has almost no binding force so that the generated adversarial samples still have a high L2 norm; a high L2 norm in turn lowers the stability and safety of a deep learning model trained with these adversarial samples.
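As a concrete illustration of the three metrics above, the following sketch (illustrative Python/NumPy code, not part of the patent text; it assumes both images are floating-point arrays of the same shape) computes the zero, L2 and infinity norms of a perturbation:

```python
import numpy as np

def perturbation_norms(original: np.ndarray, adversarial: np.ndarray):
    """Zero, L2 and infinity norms of the perturbation between two images."""
    delta = adversarial.astype(np.float64) - original.astype(np.float64)
    l0 = np.count_nonzero(delta)        # number of perturbed pixels
    l2 = np.sqrt(np.sum(delta ** 2))    # square root of the sum of squared perturbations
    linf = np.max(np.abs(delta))        # largest per-pixel perturbation
    return l0, l2, linf
```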
Disclosure of Invention
The invention aims to overcome the drawback of existing black-box attack methods that the generated adversarial samples generally have a high L2 norm, or that the L2 norm term exerts almost no constraint, and provides a method, system, device and storage medium for generating adversarial samples for a deep learning model.
To achieve this aim, the invention adopts the following technical solution:
In a first aspect of the present invention, a method for generating adversarial samples for a deep learning model includes the following steps:
S1: acquiring a sensitivity matrix of an original image with respect to a target deep learning model;
S2: constructing a plurality of norm groups from a plurality of preset zero norms and a plurality of preset infinity norms for perturbation, and obtaining a perturbation map corresponding to each norm group from the sensitivity matrix and the norm groups;
S3: querying the target deep learning model with the original image and the perturbation maps corresponding to the norm groups, and obtaining an adversarial zero norm and an adversarial infinity norm from the reduction of the original-class prediction probability of each perturbation map relative to the original image;
S4: constructing a preset number of adversarial perturbation matrices according to the adversarial infinity norm, taking the prediction probability of the attack target class as the fitness and maximizing the fitness as the optimization objective, iteratively optimizing each adversarial perturbation matrix with an evolutionary algorithm according to the original image, the sensitivity matrix and the adversarial zero norm, and performing S5 after each iteration;
S5: when at least one target adversarial perturbation matrix exists among the adversarial perturbation matrices after the current iteration, perturbing the original image with the target adversarial perturbation matrix to obtain and output an adversarial sample, and ending the iteration; otherwise, returning to S4.
The method for generating adversarial samples for a deep learning model is further improved as follows:
The specific method of S1 is:
acquiring a deep learning model with the same classification task as the target deep learning model as a reference model;
selecting the pixels of the original image one by one and performing the following steps for each: perturbing the current pixel by a preset amount to obtain a perturbed image, inputting the perturbed image into the reference model to obtain the change, relative to the reference model's output before perturbation, of the original-class prediction probability, and taking this change as the sensitivity value of the current pixel;
arranging the sensitivity values of all pixels according to their positions in the original image to obtain the sensitivity matrix of the original image with respect to the target deep learning model.
The specific method of S2 is:
selecting the zero norms one by one so as to traverse the plurality of preset zero norms for perturbation, and combining each selected zero norm with each of the plurality of infinity norms one by one, to obtain a plurality of norm groups;
presetting an initial perturbation matrix in which the number of parameters equals the number of pixels of the original image, the parameters are arranged according to the pixel positions, the absolute value of every parameter is 1, and the sign of each parameter agrees with the sign of the sensitivity value at the same position in the sensitivity matrix; selecting the norm groups one by one so as to traverse them, multiplying every parameter of the initial perturbation matrix by half of the infinity norm of the current norm group to obtain the perturbation matrix corresponding to each norm group, and, taking the zero norm of the current norm group, setting that many of the largest sensitivity values of the sensitivity matrix to 1 and the rest to 0, to obtain the mask matrix corresponding to each perturbation matrix;
multiplying the perturbation matrix corresponding to each norm group by its mask matrix and superimposing it on the pixels of the original image, to obtain the perturbation map corresponding to each norm group.
The specific method of S3 is:
inputting the original image and the perturbation maps corresponding to the norm groups into the target deep learning model to obtain the original-class prediction probability of the original image and of each perturbation map, and obtaining the reduction of that probability for each perturbation map relative to the original image;
summing, over all norm groups that share the same zero norm, the reductions of the original-class prediction probability, to obtain a plurality of first sums, and selecting the zero norm corresponding to the first sum with the largest change of slope as the adversarial zero norm;
summing, over all norm groups that share the same infinity norm, the reductions of the original-class prediction probability, to obtain a plurality of second sums, and selecting the infinity norm corresponding to the second sum with the largest change of slope as the adversarial infinity norm.
The evolutionary algorithm in S4 is an adaptive differential evolution strategy or a linearly spanned covariance matrix adaptation evolution strategy.
When the evolutionary algorithm is the adaptive differential evolution strategy, the specific method of S4 is:
S401: drawing values at random from [−adversarial infinity norm, +adversarial infinity norm] as the parameters of the adversarial perturbation matrices and constructing a preset number of adversarial perturbation matrices, each parameter of an adversarial perturbation matrix corresponding one-to-one to a pixel of the original image;
S402: mutating the preset number of adversarial perturbation matrices according to the adaptive scale factor and the mutation formula to obtain a preset number of mutated adversarial perturbation matrices, and taking the original and the mutated adversarial perturbation matrices together as individuals; obtaining the mask matrix corresponding to each individual from the adversarial zero norm and the sensitivity matrix; multiplying each individual by its mask matrix and superimposing it on the pixels of the original image to obtain the perturbation map corresponding to each individual; inputting each perturbation map into the target deep learning model to obtain its fitness, and selecting the preset number of individuals with the highest fitness as the iteratively optimized adversarial perturbation matrices;
S403: updating the adversarial perturbation matrices with the iteratively optimized ones, iterating S402, and performing S5 after each iteration.
When the evolutionary algorithm is the linearly spanned covariance matrix adaptation evolution strategy, the specific method of S4 is:
S411: randomly generating k n-dimensional vectors as the basis of the spanned space; randomly generating a preset number of k-dimensional vectors from a preset k-dimensional Gaussian distribution, multiplying them as weights with the basis to obtain a preset number of n-dimensional sample vectors, and clipping the n-dimensional sample vectors with the adversarial infinity norm to obtain the adversarial perturbation matrices;
S412: obtaining the mask matrix corresponding to each adversarial perturbation matrix from the adversarial zero norm and the sensitivity matrix; multiplying each adversarial perturbation matrix by its mask matrix and superimposing it on the pixels of the original image to obtain the corresponding perturbation map; inputting each perturbation map into the target deep learning model to obtain its fitness; selecting a set number of adversarial perturbation matrices in descending order of fitness and updating the k-dimensional Gaussian distribution with them; randomly generating a preset number of optimized k-dimensional vectors from the updated distribution, multiplying them as weights with the basis to obtain a preset number of optimized n-dimensional sample vectors, and clipping these with the adversarial infinity norm to obtain the iteratively optimized adversarial perturbation matrices;
S413: updating the adversarial perturbation matrices with the iteratively optimized ones, iterating S412, and performing S5 after each iteration.
In a second aspect of the invention, a system for generating adversarial samples for a deep learning model comprises a sensitivity matrix acquisition module, a perturbation map acquisition module, an optimization module, a parameter determination module and an output module;
the sensitivity matrix acquisition module is used to acquire the sensitivity matrix of the original image with respect to the target deep learning model;
the perturbation map acquisition module is used to construct a plurality of norm groups from a plurality of preset zero norms and infinity norms for perturbation, and to obtain the perturbation map corresponding to each norm group from the sensitivity matrix and the norm groups;
the parameter determination module is used to query the target deep learning model with the original image and the perturbation maps corresponding to the norm groups, and to obtain the adversarial zero norm and the adversarial infinity norm from the reduction of the original-class prediction probability of each perturbation map relative to the original image;
the optimization module is used to construct a preset number of adversarial perturbation matrices according to the adversarial infinity norm, take the prediction probability of the attack target class as the fitness, take maximizing the fitness as the optimization objective, iteratively optimize each adversarial perturbation matrix with an evolutionary algorithm according to the original image, the sensitivity matrix and the adversarial zero norm, and trigger the output module after each iteration;
the output module is used to perturb the original image with a target adversarial perturbation matrix when at least one exists among the adversarial perturbation matrices after the current iteration, to obtain and output an adversarial sample and end the iteration, and otherwise to trigger the optimization module.
In a third aspect of the present invention, a computer device includes a memory, a processor and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the steps of the above method for generating adversarial samples for a deep learning model.
In a fourth aspect of the present invention, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the above method for generating adversarial samples for a deep learning model.
Compared with the prior art, the invention has the following beneficial effects:
according to the method for generating the countermeasure sample of the deep learning model, the disturbance area of the original image is limited through the sensitive matrix of the original image based on the target deep learning model, so that the distribution of disturbance points becomes sparse, the countermeasure disturbance is less prone to be perceived, and the two norms of the countermeasure disturbance are greatly reduced from the quantization perspective. Meanwhile, reasonable 'probing' is carried out by combining the sensitive matrix, combinations of different zero norms and infinite norms are tried, the zero norm and the infinite norms are automatically and dynamically adjusted for each time of resisting attack, and the efficiency of resisting attack and the success rate of limiting the number of times of inquiry are improved. The whole process is of a plug-in type, wherein the iterative optimization strategy part can be replaced by different evolutionary algorithms for optimization by combining with actual application situations, and the method has the characteristics of convenience and quickness. Then, the target deep learning model is trained through the generated confrontation sample with the low two-norm, so that the prediction accuracy of the target deep learning model can be further improved, and the robustness of the target deep learning model is improved.
Furthermore, by means of the characteristic that the sensitivity matrixes among the learning models with different depths are highly similar, the target deep learning model is not directly queried, the sensitivity matrixes are obtained through the reference model, and the query times are reduced.
Drawings
FIG. 1 is a flow chart of the method for generating adversarial samples for a deep learning model according to the present invention;
FIG. 2 is a schematic diagram of the method for generating adversarial samples for a deep learning model according to the present invention;
FIG. 3 is a schematic block diagram of the evolutionary algorithm of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1 and 2, the invention provides a method for generating a confrontation sample of a deep learning model, which is based on a sensitivity matrix and designs a low-disturbance evolutionary algorithm black box confrontation attack plug-in type framework to realize generation of the confrontation sample of the deep learning model.
S1: and acquiring a sensitive matrix of the original image based on the target deep learning model.
Specifically, a deep learning model with the same classification task as the target deep learning model is obtained as the reference model M_ref. The pixels of the original image are selected one by one and the following steps are carried out for each:
the current pixel is perturbed by a preset amount to obtain a perturbed image; the perturbed image is input into the reference model to obtain the change, relative to the reference model's output before perturbation, of the original-class prediction probability, and this change is taken as the sensitivity value of the current pixel. The sensitivity values of all pixels are then arranged according to their positions in the original image, giving the sensitivity matrix of the original image with respect to the target deep learning model; the sensitivity matrix thus consists of the sensitivity value of each pixel.
In particular, the sensitivity of a pixel with respect to the reference model M_ref is defined as the change of the original-class prediction probability output by M_ref caused by a perturbation of a fixed size on that pixel. This procedure estimates the partial derivative of the output with respect to that pixel dimension, and that value serves as the change of the original-class prediction probability output by the reference model before and after the current pixel is perturbed; the larger the absolute value of the partial derivative, the more sensitive the pixel.
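A minimal sketch of the S1 sensitivity estimation is given below (illustrative Python/NumPy code, not part of the patent; the wrapper reference_model_prob, the perturbation step eps and the assumption of a float image in [0, 1] are all illustrative choices). It perturbs each pixel by a fixed amount and records the resulting drop in the reference model's original-class probability:

```python
import numpy as np

def sensitivity_matrix(image: np.ndarray, original_class: int,
                       reference_model_prob, eps: float = 8 / 255) -> np.ndarray:
    """Estimate per-pixel sensitivity of the reference model's original-class probability.

    reference_model_prob(img, cls) -> float is an assumed wrapper returning the
    reference model's predicted probability of class `cls` for image `img`.
    """
    base_prob = reference_model_prob(image, original_class)
    sens = np.zeros(image.shape, dtype=np.float64)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:                                   # one query per pixel (slow but simple)
        idx = it.multi_index
        probe = image.copy()
        probe[idx] = np.clip(probe[idx] + eps, 0.0, 1.0)   # perturb a single pixel
        sens[idx] = base_prob - reference_model_prob(probe, original_class)
    return sens   # sign indicates the useful direction, |value| the sensitivity
```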
S2: construct a plurality of norm groups from a plurality of preset zero norms and infinity norms for perturbation, and obtain the perturbation map corresponding to each norm group from the sensitivity matrix and the norm groups.
The sensitivity matrix generated in S1 is used to attack the target deep learning model; because the sensitivity matrices of different deep learning models are highly similar, the sensitivity matrix obtained from the reference model can be used directly. Specifically, given a plurality of preset zero norms for perturbation {l0^(1), l0^(2), ..., l0^(m)} and a plurality of infinity norms {l∞^(1), l∞^(2), ..., l∞^(n)}, the zero norms are selected one by one so as to traverse them, and each selected zero norm is combined with every infinity norm in turn, giving the norm groups {(l0^(i), l∞^(j))}.
An initial perturbation matrix is preset in which the number of parameters equals the number of pixels of the original image, the parameters are arranged according to the pixel positions, the absolute value of every parameter is 1, and the sign of each parameter agrees with the sign of the sensitivity value at the same position in the sensitivity matrix. The norm groups are selected one by one so as to traverse them: every parameter of the initial perturbation matrix is multiplied by half of the infinity norm of the current group, l∞^(j)/2, to obtain the perturbation matrix corresponding to that norm group; then, taking the zero norm l0^(i) of the current group, the l0^(i) largest sensitivity values of the sensitivity matrix are set to 1 and the rest to 0, giving the mask matrix corresponding to each perturbation matrix. Finally, the perturbation matrix corresponding to each norm group is multiplied by its mask matrix and superimposed on the pixels of the original image, yielding the perturbation map corresponding to each norm group.
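The S2 construction can be sketched as follows (illustrative Python/NumPy code, not from the patent; it assumes the perturbation map is the original image plus the masked perturbation, that pixels are ranked by sensitivity magnitude, and that pixel values lie in [0, 1]):

```python
import numpy as np

def perturbation_maps(image, sens, zero_norms, inf_norms):
    """Build one perturbation map per (l0, l_inf) norm group from the sensitivity matrix."""
    maps = {}
    order = np.argsort(np.abs(sens), axis=None)[::-1]   # pixels by decreasing |sensitivity|
    for l0 in zero_norms:
        mask = np.zeros(sens.size)
        mask[order[:l0]] = 1.0                  # keep only the l0 most sensitive pixels
        mask = mask.reshape(sens.shape)
        for linf in inf_norms:
            delta = np.sign(sens) * (linf / 2.0)   # sign follows the sensitivity matrix
            maps[(l0, linf)] = np.clip(image + mask * delta, 0.0, 1.0)
    return maps
```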
S3: query the target deep learning model with the original image and the perturbation maps corresponding to the norm groups, and obtain the adversarial zero norm l0 and the adversarial infinity norm l∞ from the reduction of the original-class prediction probability of each perturbation map relative to the original image.
Specifically, the original image and the perturbation map corresponding to each norm group are input into the target deep learning model to obtain the original-class prediction probability of the original image and of each perturbation map, and the reduction p_{i,j} of that probability for the perturbation map of norm group (l0^(i), l∞^(j)) relative to the original image is obtained. For each zero norm, the reductions over all norm groups sharing that zero norm are summed, giving the first sums {Σ_j p_{i,j}}; applying the elbow rule, the zero norm whose first sum shows the largest change of slope is selected as the adversarial zero norm. Likewise, for each infinity norm, the reductions over all norm groups sharing that infinity norm are summed, giving the second sums {Σ_i p_{i,j}}; applying the elbow rule, the infinity norm whose second sum shows the largest change of slope is selected as the adversarial infinity norm.
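A sketch of the S3 selection is given below (illustrative Python/NumPy code; the concrete elbow criterion, the largest change of slope between consecutive sums, is an assumption consistent with the description):

```python
import numpy as np

def pick_by_elbow(sums: np.ndarray, candidates: list):
    """Pick the candidate norm at which the curve of summed probability drops bends most."""
    slopes = np.diff(sums)             # slope between consecutive candidates
    bend = np.abs(np.diff(slopes))     # change of slope
    return candidates[int(np.argmax(bend)) + 1]

def adversarial_norms(drops: dict, zero_norms, inf_norms):
    """drops[(l0, linf)] is the drop of the original-class probability for that norm group."""
    first_sums = np.array([sum(drops[(l0, li)] for li in inf_norms) for l0 in zero_norms])
    second_sums = np.array([sum(drops[(l0, li)] for l0 in zero_norms) for li in inf_norms])
    return pick_by_elbow(first_sums, zero_norms), pick_by_elbow(second_sums, inf_norms)
```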
S4: construct a preset number of adversarial perturbation matrices according to the adversarial infinity norm; taking the prediction probability of the attack target class as the fitness and maximizing the fitness as the optimization objective, iteratively optimize each adversarial perturbation matrix with an evolutionary algorithm according to the original image, the sensitivity matrix and the adversarial zero norm, and perform S5 after each iteration.
In a non-targeted attack the fitness is the negative of the original-class probability in the output vector of the target deep learning model; in a targeted attack it is the probability of the target class in that output vector. Referring to FIG. 3, which illustrates the principle of the evolutionary algorithm, the population is initialized according to human experience, mutation and crossover are then performed, selection is carried out with maximal fitness as the optimization objective, and whether the attack objective has been reached is judged from the selection result; if so, the corresponding adversarial sample is output, otherwise iteration continues within the iteration limit. The evolutionary algorithm may be an adaptive differential evolution strategy or a linearly spanned covariance matrix adaptation evolution strategy.
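The fitness used in S4 can be written compactly as below (illustrative Python sketch; probs is assumed to be the target model's output probability vector for one candidate perturbation):

```python
from typing import Optional

def fitness(probs, original_class: int, target_class: Optional[int] = None) -> float:
    """Fitness of one candidate from the target model's output probabilities.

    Non-targeted attack: maximize the negative original-class probability.
    Targeted attack: maximize the target-class probability.
    """
    if target_class is None:
        return -float(probs[original_class])
    return float(probs[target_class])
```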
When the evolutionary algorithm is the adaptive differential evolution strategy, the specific method of S4 is as follows:
S401: values are drawn at random from [−adversarial infinity norm, +adversarial infinity norm] as the parameters of the adversarial perturbation matrices and a preset number of adversarial perturbation matrices is constructed, the preset number generally being 20; each parameter of an adversarial perturbation matrix corresponds one-to-one to a pixel of the original image.
S402: the preset number of adversarial perturbation matrices is mutated according to the adaptive scale factor and the mutation formula to obtain a preset number of mutated adversarial perturbation matrices, and the original and mutated adversarial perturbation matrices are taken together as the individuals; the mask matrix corresponding to each individual is obtained from the adversarial zero norm and the sensitivity matrix; each individual, multiplied by its mask matrix, is superimposed on the pixels of the original image to obtain the perturbation map corresponding to that individual; each perturbation map is input into the target deep learning model to obtain its fitness, and the preset number of individuals with the highest fitness are selected as the iteratively optimized adversarial perturbation matrices.
Specifically, the mutation step of the original differential evolution strategy is
v^(i) = x_{r1}^(i) + F · (x_{r2}^(i) − x_{r3}^(i)),
where x^(i) denotes an individual of the i-th generation population, r1, r2 and r3 are randomly selected indices, and F is a fixed scale factor. In this embodiment, considering that the scale factor F (the mutation step size) can appropriately be larger at the early stage of evolution (optimization), F is replaced by a form that varies with i: the adaptive scale factor AF (adaptive factor) starts at the initial value F2 and decays towards the lower limit F1 as the generation index i grows, where α is a factor adjusting the rate of change of AF and s is a scaling factor of the iteration number i. Using this improved scale factor in the differential evolution strategy accelerates its convergence.
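One S402 generation with the adaptive scale factor can be sketched as below (illustrative Python/NumPy code; the exponential-decay form of AF and the default constants are assumptions, since the patent only states that AF decreases from F2 towards F1 at a rate controlled by α and s; clipping mutants to the adversarial l∞ bound and omitting crossover are also simplifications):

```python
import numpy as np

def adaptive_factor(i, f1=0.3, f2=0.9, alpha=0.1, s=100.0):
    """Assumed decaying scale factor: starts at F2 and approaches the lower limit F1."""
    return f1 + (f2 - f1) * np.exp(-alpha * i / s)

def de_iteration(pop, i, fitness_fn, linf, rng):
    """One generation of adaptive differential evolution over perturbation matrices."""
    n = len(pop)
    af = adaptive_factor(i)
    mutants = []
    for _ in range(n):
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        v = pop[r1] + af * (pop[r2] - pop[r3])       # DE/rand/1 mutation
        mutants.append(np.clip(v, -linf, linf))      # keep within the adversarial l_inf bound
    candidates = pop + mutants                       # parents and mutants together as individuals
    scores = [fitness_fn(c) for c in candidates]     # fitness_fn applies mask, adds to image, queries model
    best = np.argsort(scores)[::-1][:n]              # keep the n fittest individuals
    return [candidates[k] for k in best]
```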
S403: the adversarial perturbation matrices are updated with the iteratively optimized ones, S402 is iterated, and S5 is performed after each iteration.
When the evolutionary algorithm is the linearly spanned covariance matrix adaptation evolution strategy, then, in order to avoid the high time complexity caused by frequently computing covariance matrices in a very high-dimensional optimization problem, the n-dimensional optimization space (n = w × h) is reduced to the space linearly spanned by k n-dimensional vectors, with k far smaller than n; that is, the output perturbation is set to be a weighted sum of the k n-dimensional vectors and each weight is optimized with the covariance matrix adaptation evolution strategy, so the time complexity of computing the covariance matrix drops from O(n^2) to O(k^2), a great reduction. On this basis the covariance matrix adaptation evolution strategy already achieves a good effect, and the specific method of S4 is as follows:
S411: k n-dimensional vectors are randomly generated as the basis of the spanned space; a preset number of k-dimensional vectors is randomly generated from a preset k-dimensional Gaussian distribution and, used as weights, multiplied with the basis to obtain a preset number of n-dimensional sample vectors, which are clipped with the adversarial infinity norm to obtain the adversarial perturbation matrices.
S412: the mask matrix corresponding to each adversarial perturbation matrix is obtained from the adversarial zero norm and the sensitivity matrix; each adversarial perturbation matrix, multiplied by its mask matrix, is superimposed on the pixels of the original image to obtain the corresponding perturbation map; each perturbation map is input into the target deep learning model to obtain its fitness; a set number of adversarial perturbation matrices are selected in descending order of fitness and the k-dimensional Gaussian distribution is updated with them; a preset number of optimized k-dimensional vectors is randomly generated from the updated distribution and, used as weights, multiplied with the basis to obtain a preset number of optimized n-dimensional sample vectors, which are clipped with the adversarial infinity norm to obtain the iteratively optimized adversarial perturbation matrices.
S413: the adversarial perturbation matrices are updated with the iteratively optimized ones, S412 is iterated, and S5 is performed after each iteration.
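A highly simplified sketch of this linearly spanned idea is given below (illustrative Python/NumPy code; it replaces the full covariance matrix adaptation machinery with a plain re-estimation of the k-dimensional Gaussian from the fittest samples, and the values of k, the population size, the elite count and the l∞ bound are assumptions; fitness_fn is assumed to apply the mask, add the perturbation to the image and query the target model):

```python
import numpy as np

def linear_span_es(image, fitness_fn, k=32, pop_size=20, elite=5,
                   linf=0.1, iters=100, rng=None):
    """Optimize k basis weights instead of the n = image.size pixel values directly."""
    rng = rng or np.random.default_rng()
    n = image.size
    basis = rng.standard_normal((k, n))        # k fixed n-dimensional basis vectors
    mean, cov = np.zeros(k), np.eye(k)         # k-dimensional Gaussian over the weights
    for _ in range(iters):
        weights = rng.multivariate_normal(mean, cov, size=pop_size)
        deltas = np.clip(weights @ basis, -linf, linf)            # l_inf-clipped perturbations
        scores = np.array([fitness_fn(d.reshape(image.shape)) for d in deltas])
        top = np.argsort(scores)[::-1][:elite]                    # fittest individuals
        mean = weights[top].mean(axis=0)                          # re-estimate the Gaussian
        cov = np.cov(weights[top], rowvar=False) + 1e-6 * np.eye(k)
    return np.clip(weights[top[0]] @ basis, -linf, linf).reshape(image.shape)
```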
S5: when at least one target adversarial perturbation matrix exists among the adversarial perturbation matrices after the current iteration, the original image is perturbed with the target adversarial perturbation matrix to obtain and output an adversarial sample, and the iteration ends; otherwise, return to S4.
A target adversarial perturbation matrix is one for which the original image perturbed by it drives the output of the target deep learning model to the attack objective, for example: the output of the target deep learning model for the perturbed original image differs from its output for the original image, or the output for the perturbed original image equals a preset output result.
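The S5 success check can be expressed as below (illustrative Python sketch; it assumes the predicted classes of the original and perturbed images have already been obtained from the target model):

```python
def attack_succeeded(pred_original: int, pred_perturbed: int, target_class=None) -> bool:
    """Non-targeted: any class change counts; targeted: the preset target class must be hit."""
    if target_class is None:
        return pred_perturbed != pred_original
    return pred_perturbed == target_class
```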
In summary, in the method for generating adversarial samples for a deep learning model, the sensitivity map of the reference model is transferred and a threshold is applied to restrict the perturbed region to the pixels of high sensitivity, so the distribution of perturbed pixels becomes sparse and the adversarial perturbation is less easily perceived. Meanwhile, reasonable "probing" combined with the sensitivity map tries combinations of different infinity norms l∞ and zero norms l0 and automatically and dynamically adjusts l∞ and l0 for each adversarial attack, improving the attack efficiency and the success rate under a limited number of queries. The whole procedure is plug-in style: the evolution strategy part can be replaced by different evolution strategies (such as the adaptive differential evolution algorithm or the linearly spanned covariance matrix adaptation evolution strategy) according to the practical application, which makes the method convenient and fast. Evolutionary algorithms are also naturally parallel: different individuals of the same iteration can be evaluated in parallel, so the iterative process is more efficient. Training the target deep learning model with the generated low-L2-norm adversarial samples can further improve its prediction accuracy and robustness.
Experiments show that, with this method for generating adversarial samples for a deep learning model, even when different reference models such as VGG16 or DenseNet121 are used, a perturbation with a smaller L2 norm than that of the original algorithm is obtained while using fewer than 20% of the pixels, which shows that the L2 norm of the generated adversarial samples is greatly reduced compared with adversarial samples generated by conventional methods.
The following are apparatus embodiments of the present invention, which may be used to carry out the method embodiments of the present invention. For details not described in the apparatus embodiments, please refer to the method embodiments of the present invention.
In another embodiment of the present invention, a system for generating adversarial samples for a deep learning model is provided, which can be used to implement the above method for generating adversarial samples for a deep learning model.
The sensitivity matrix acquisition module is used to acquire the sensitivity matrix of the original image with respect to the target deep learning model; the perturbation map acquisition module is used to construct a plurality of norm groups from a plurality of preset zero norms and infinity norms for perturbation and to obtain the perturbation map corresponding to each norm group from the sensitivity matrix and the norm groups; the parameter determination module is used to query the target deep learning model with the original image and the perturbation maps corresponding to the norm groups and to obtain the adversarial zero norm and the adversarial infinity norm from the reduction of the original-class prediction probability of each perturbation map relative to the original image; the optimization module is used to construct a preset number of adversarial perturbation matrices according to the adversarial infinity norm, take the prediction probability of the attack target class as the fitness, take maximizing the fitness as the optimization objective, iteratively optimize each adversarial perturbation matrix with an evolutionary algorithm according to the original image, the sensitivity matrix and the adversarial zero norm, and trigger the output module after each iteration; the output module is used to perturb the original image with a target adversarial perturbation matrix when at least one exists among the adversarial perturbation matrices after the current iteration, to obtain and output an adversarial sample and end the iteration, and otherwise to trigger the optimization module.
In yet another embodiment of the present invention, a computer device is provided that includes a processor and a memory for storing a computer program comprising program instructions, the processor being configured to execute the program instructions stored in the computer storage medium. The processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; it is the computing and control core of the terminal and is specifically adapted to load and execute one or more instructions in a computer storage medium to implement the corresponding method flow or function. The processor provided by this embodiment of the invention can be used to perform the operations of the method for generating adversarial samples for a deep learning model.
In yet another embodiment of the present invention, a storage medium is provided, specifically a computer-readable storage medium (memory), which is a memory device in a computer device used to store programs and data. It will be understood that the computer-readable storage medium here can include both a built-in storage medium of the computer device and, of course, an extended storage medium supported by the computer device. The computer-readable storage medium provides a storage space that stores the operating system of the terminal. Also, one or more instructions, which may be one or more computer programs (including program code), are stored in this storage space and are adapted to be loaded and executed by the processor. It should be noted that the computer-readable storage medium may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The one or more instructions stored in the computer-readable storage medium can be loaded and executed by the processor to implement the corresponding steps of the method for generating adversarial samples for a deep learning model in the above embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (10)

1. A method for generating adversarial samples for a deep learning model, characterized by comprising the following steps:
S1: acquiring a sensitivity matrix of an original image with respect to a target deep learning model;
S2: constructing a plurality of norm groups from a plurality of preset zero norms and a plurality of preset infinity norms for perturbation, and obtaining a perturbation map corresponding to each norm group from the sensitivity matrix and the norm groups;
S3: querying the target deep learning model with the original image and the perturbation maps corresponding to the norm groups, and obtaining an adversarial zero norm and an adversarial infinity norm from the reduction of the original-class prediction probability of each perturbation map relative to the original image;
S4: constructing a preset number of adversarial perturbation matrices according to the adversarial infinity norm, taking the prediction probability of the attack target class as the fitness and maximizing the fitness as the optimization objective, iteratively optimizing each adversarial perturbation matrix with an evolutionary algorithm according to the original image, the sensitivity matrix and the adversarial zero norm, and performing S5 after each iteration;
S5: when at least one target adversarial perturbation matrix exists among the adversarial perturbation matrices after the current iteration, perturbing the original image with the target adversarial perturbation matrix to obtain and output an adversarial sample, and ending the iteration; otherwise, returning to S4.
2. The method for generating adversarial samples for a deep learning model according to claim 1, wherein the specific method of S1 is:
acquiring a deep learning model with the same classification task as the target deep learning model as a reference model;
selecting the pixels of the original image one by one and performing the following steps for each: perturbing the current pixel by a preset amount to obtain a perturbed image, inputting the perturbed image into the reference model to obtain the change, relative to the reference model's output before perturbation, of the original-class prediction probability, and taking this change as the sensitivity value of the current pixel;
arranging the sensitivity values of all pixels according to their positions in the original image to obtain the sensitivity matrix of the original image with respect to the target deep learning model.
3. The method for generating adversarial samples for a deep learning model according to claim 2, wherein the specific method of S2 is:
selecting the zero norms one by one so as to traverse the plurality of preset zero norms for perturbation, and combining each selected zero norm with each of the plurality of infinity norms one by one, to obtain a plurality of norm groups;
presetting an initial perturbation matrix in which the number of parameters equals the number of pixels of the original image, the parameters are arranged according to the pixel positions, the absolute value of every parameter is 1, and the sign of each parameter agrees with the sign of the sensitivity value at the same position in the sensitivity matrix; selecting the norm groups one by one so as to traverse them, multiplying every parameter of the initial perturbation matrix by half of the infinity norm of the current norm group to obtain the perturbation matrix corresponding to each norm group, and, taking the zero norm of the current norm group, setting that many of the largest sensitivity values of the sensitivity matrix to 1 and the rest to 0, to obtain the mask matrix corresponding to each perturbation matrix;
multiplying the perturbation matrix corresponding to each norm group by its mask matrix and superimposing it on the pixels of the original image, to obtain the perturbation map corresponding to each norm group.
4. The method for generating adversarial samples for a deep learning model according to claim 1, wherein the specific method of S3 is:
inputting the original image and the perturbation maps corresponding to the norm groups into the target deep learning model to obtain the original-class prediction probability of the original image and of each perturbation map, and obtaining the reduction of that probability for each perturbation map relative to the original image;
summing, over all norm groups that share the same zero norm, the reductions of the original-class prediction probability, to obtain a plurality of first sums, and selecting the zero norm corresponding to the first sum with the largest change of slope as the adversarial zero norm;
summing, over all norm groups that share the same infinity norm, the reductions of the original-class prediction probability, to obtain a plurality of second sums, and selecting the infinity norm corresponding to the second sum with the largest change of slope as the adversarial infinity norm.
5. The method for generating adversarial samples for a deep learning model according to claim 1, wherein the evolutionary algorithm in S4 is an adaptive differential evolution strategy or a linearly spanned covariance matrix adaptation evolution strategy.
6. The method for generating the confrontation sample of the deep learning model as claimed in claim 5, wherein when the evolutionary algorithm is the adaptive differential evolution strategy, the specific method of S4 is as follows:
S401: randomly taking values in [-anti-infinite norm, anti-infinite norm] as the parameters of an anti-disturbance matrix, and constructing a preset number of anti-disturbance matrices, wherein the parameters of each anti-disturbance matrix correspond one by one to the pixel points of the original image;
S402: mutating the preset number of anti-disturbance matrices according to an adaptive scale factor and a mutation formula to obtain a preset number of mutated anti-disturbance matrices, and taking the anti-disturbance matrices and the mutated anti-disturbance matrices as individuals; acquiring the mask matrix corresponding to each individual according to the anti-zero norm and the sensitive matrix; superposing each individual on the pixel points of the original image and multiplying by the mask matrix corresponding to that individual to obtain the disturbance map corresponding to each individual; inputting the disturbance map corresponding to each individual into the target deep learning model to obtain the fitness of that disturbance map, and selecting a preset number of individuals in descending order of fitness as the iteratively optimized anti-disturbance matrices;
S403: updating the anti-disturbance matrices with the iteratively optimized anti-disturbance matrices, iterating S402, and performing S5 after each iteration.
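The following sketch runs one S402-style iteration with a standard DE/rand/1 mutation; the fixed scale stands in for the claim's adaptive scale factor and mutation formula, and predict_proba is an assumed black-box query returning class probabilities.

    import numpy as np

    def de_iteration(pop, image, mask, eps, target_class, predict_proba,
                     scale=0.5, rng=None):
        """One S402-style iteration: mutate, evaluate, keep the fittest.

        pop has shape (pop_size,) + image.shape with entries in [-eps, eps].
        """
        rng = rng or np.random.default_rng()
        pop_size = len(pop)
        mutants = []
        for _ in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
            mutants.append(np.clip(a + scale * (b - c), -eps, eps))
        individuals = np.concatenate([pop, np.stack(mutants)])   # parents and mutants
        fitness = np.array([
            predict_proba(np.clip(image + d * mask, 0.0, 1.0))[target_class]
            for d in individuals
        ])
        keep = np.argsort(fitness)[::-1][:pop_size]               # descending fitness
        return individuals[keep], fitness[keep]

An S401 initialization could then be pop = rng.uniform(-eps, eps, size=(pop_size,) + image.shape), with the S5 success check applied to the returned fitness after every call.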
7. The method for generating the confrontation sample of the deep learning model according to claim 5, wherein when the evolutionary algorithm is the linearly-stretched covariance matrix adaptation evolution strategy, the specific method of S4 is as follows:
S411: randomly generating k n-dimensional vectors as the basis of a spanned subspace; randomly generating a preset number of k-dimensional vectors from a preset k-dimensional Gaussian distribution, taking the k-dimensional vectors as weights and multiplying them by the basis to obtain a preset number of n-dimensional sample vectors, and limiting the n-dimensional sample vectors by the anti-infinite norm to obtain the anti-disturbance matrices;
S412: acquiring the mask matrix corresponding to each anti-disturbance matrix according to the anti-zero norm and the sensitive matrix; superposing each anti-disturbance matrix on the pixel points of the original image and multiplying by the mask matrix corresponding to that anti-disturbance matrix to obtain the disturbance map corresponding to each anti-disturbance matrix; inputting the disturbance map corresponding to each anti-disturbance matrix into the target deep learning model to obtain the fitness of that disturbance map; selecting a set number of anti-disturbance matrices in descending order of fitness, and updating the k-dimensional Gaussian distribution according to the selected anti-disturbance matrices; randomly generating a preset number of optimized k-dimensional vectors from the updated k-dimensional Gaussian distribution, taking them as weights and multiplying by the basis to obtain a preset number of optimized n-dimensional sample vectors, and limiting the optimized n-dimensional sample vectors by the anti-infinite norm to obtain the iteratively optimized anti-disturbance matrices;
S413: updating the anti-disturbance matrices with the iteratively optimized anti-disturbance matrices, iterating S412, and performing S5 after each iteration.
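A compact sketch of the S411-S413 loop; the plain mean/covariance refit stands in for the full covariance matrix adaptation update, and predict_proba, success_prob and the population sizes are assumptions introduced for illustration.

    import numpy as np

    def stretched_es_attack(image, mask, eps, target_class, predict_proba,
                            k=8, pop_size=30, elite=10, iters=50,
                            success_prob=0.5, rng=None):
        """Subspace evolution-strategy loop in the spirit of S411-S413 (illustrative)."""
        rng = rng or np.random.default_rng()
        n = image.size
        basis = rng.standard_normal((k, n))       # S411: k random n-dimensional basis vectors
        mean, cov = np.zeros(k), np.eye(k)
        for _ in range(iters):
            weights = rng.multivariate_normal(mean, cov, size=pop_size)    # k-dim samples
            deltas = np.clip(weights @ basis, -eps, eps).reshape((pop_size,) + image.shape)
            fitness = np.array([
                predict_proba(np.clip(image + d * mask, 0.0, 1.0))[target_class]
                for d in deltas
            ])
            best = np.argsort(fitness)[::-1][:elite]               # descending fitness
            if fitness[best[0]] >= success_prob:                   # S5-style success check
                return np.clip(image + deltas[best[0]] * mask, 0.0, 1.0)
            mean = weights[best].mean(axis=0)                      # S412: refit the k-dim Gaussian
            cov = np.cov(weights[best], rowvar=False) + 1e-6 * np.eye(k)
        return None

Searching in a low-dimensional subspace keeps the number of model queries manageable while still covering the whole image once the weights are stretched through the random basis.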
8. A system for generating confrontation samples of a deep learning model is characterized by comprising a sensitive matrix acquisition module, a disturbance map acquisition module, an optimization module, a parameter determination module and an output module;
the sensitive matrix acquisition module is used for acquiring a sensitive matrix of the original image based on the target deep learning model;
the disturbance map acquisition module is used for constructing a plurality of norm groups according to a plurality of preset zero norms and a plurality of preset infinite norms for disturbance, and obtaining the disturbance map corresponding to each norm group according to the sensitive matrix and the plurality of norm groups;
the parameter determination module is used for querying the target deep learning model with the original image and the disturbance maps corresponding to the norm groups, and obtaining the anti-zero norm and the anti-infinite norm according to the reduction values of the original-class prediction probability of the disturbance maps corresponding to the norm groups relative to that of the original image;
the optimization module is used for constructing a preset number of anti-disturbance matrices according to the anti-infinite norm, taking the prediction probability of the attack target class as the fitness and the maximization of the fitness as the optimization target, iteratively optimizing each anti-disturbance matrix through an evolutionary algorithm according to the original image, the sensitive matrix and the anti-zero norm, and triggering the output module after each iteration;
the output module is used for, when at least one target anti-disturbance matrix exists among the anti-disturbance matrices after the current iterative optimization, disturbing the original image through the target anti-disturbance matrix to obtain and output a confrontation sample, whereupon the iteration ends; otherwise, triggering the optimization module.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method for generating confrontation samples of a deep learning model according to any one of claims 1 to 7.
10. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method for generating confrontation samples of a deep learning model according to any one of claims 1 to 7.
CN202110049467.5A 2021-01-14 2021-01-14 Deep learning model countermeasure sample generation method, system, equipment and storage medium Active CN112949678B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110049467.5A CN112949678B (en) 2021-01-14 2021-01-14 Deep learning model countermeasure sample generation method, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112949678A true CN112949678A (en) 2021-06-11
CN112949678B CN112949678B (en) 2023-05-02

Family

ID=76235230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110049467.5A Active CN112949678B (en) 2021-01-14 2021-01-14 Deep learning model countermeasure sample generation method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112949678B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200175176A1 (en) * 2018-11-30 2020-06-04 Robert Bosch Gmbh Measuring the vulnerability of ai modules to spoofing attempts
CN109948658A (en) * 2019-02-25 2019-06-28 浙江工业大学 The confrontation attack defense method of Feature Oriented figure attention mechanism and application
WO2020192849A1 (en) * 2019-03-28 2020-10-01 Conti Temic Microelectronic Gmbh Automatic identification and classification of adversarial attacks
CN110163093A (en) * 2019-04-15 2019-08-23 浙江工业大学 A kind of guideboard identification confrontation defence method based on genetic algorithm
CN110941794A (en) * 2019-11-27 2020-03-31 浙江工业大学 Anti-attack defense method based on universal inverse disturbance defense matrix
CN111461177A (en) * 2020-03-09 2020-07-28 北京邮电大学 Image identification method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CIHANG XIE et al.: "Adversarial Examples for Semantic Segmentation and Object Detection", 《arXiv - Computer Vision and Pattern Recognition》 *
XIAOYONG YUAN et al.: "Adversarial Examples: Attacks and Defenses for Deep Learning", 《arXiv - Machine Learning》 *
LIU Heng et al.: "Universal Adversarial Perturbation Generation Method Based on Generative Adversarial Networks", 《Information Network Security》 *
GUO Peng et al.: "Adaptive Selection Method for the Gradient Clipping Threshold of Differentially Private GAN", 《Journal of Network and Information Security》 *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113656813A (en) * 2021-07-30 2021-11-16 深圳清华大学研究院 Image processing method, system, equipment and storage medium based on anti-attack
CN113656813B (en) * 2021-07-30 2023-05-23 深圳清华大学研究院 Image processing method, system, equipment and storage medium based on attack resistance
CN113343025A (en) * 2021-08-05 2021-09-03 中南大学 Sparse attack resisting method based on weighted gradient Hash activation thermodynamic diagram
CN113343025B (en) * 2021-08-05 2021-11-02 中南大学 Sparse attack resisting method based on weighted gradient Hash activation thermodynamic diagram
CN113740903A (en) * 2021-08-27 2021-12-03 西安交通大学 Data and intelligent optimization dual-drive deep learning seismic wave impedance inversion method
CN113780123A (en) * 2021-08-27 2021-12-10 广州大学 Countermeasure sample generation method, system, computer device and storage medium
CN113780123B (en) * 2021-08-27 2023-08-08 广州大学 Method, system, computer device and storage medium for generating countermeasure sample
CN114139631A (en) * 2021-12-03 2022-03-04 华北电力大学 Multi-target training object-oriented selectable ash box confrontation sample generation method
CN114764616B (en) * 2022-04-01 2023-03-24 中国工程物理研究院计算机应用研究所 Countermeasure sample generation method and system based on trigger condition
CN114764616A (en) * 2022-04-01 2022-07-19 中国工程物理研究院计算机应用研究所 Countermeasure sample generation method and system based on trigger condition
CN115063654A (en) * 2022-06-08 2022-09-16 厦门大学 Black box attack method based on sequence element learning, storage medium and electronic equipment
CN115019102A (en) * 2022-06-17 2022-09-06 华中科技大学 Construction method and application of confrontation sample generation model
CN115019102B (en) * 2022-06-17 2024-09-10 华中科技大学 Construction method and application of countermeasure sample generation model
CN114882323A (en) * 2022-07-08 2022-08-09 第六镜科技(北京)集团有限责任公司 Confrontation sample generation method and device, electronic equipment and storage medium
CN114943641B (en) * 2022-07-26 2022-10-28 北京航空航天大学 Method and device for generating confrontation texture image based on model sharing structure
CN114943641A (en) * 2022-07-26 2022-08-26 北京航空航天大学 Method and device for generating anti-texture image based on model sharing structure
CN116030312A (en) * 2023-03-30 2023-04-28 中国工商银行股份有限公司 Model evaluation method, device, computer equipment and storage medium
CN116030312B (en) * 2023-03-30 2023-06-16 中国工商银行股份有限公司 Model evaluation method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112949678B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN112949678A (en) Method, system, equipment and storage medium for generating confrontation sample of deep learning model
Goceri Analysis of deep networks with residual blocks and different activation functions: classification of skin diseases
CN108491765B (en) Vegetable image classification and identification method and system
Sabour et al. Matrix capsules with EM routing
CN110633745B (en) Image classification training method and device based on artificial intelligence and storage medium
Zeng et al. CNN model design of gesture recognition based on tensorflow framework
US20210224647A1 (en) Model training apparatus and method
Dozono et al. Convolutional self organizing map
Zhang et al. Evolving neural network classifiers and feature subset using artificial fish swarm
Wang et al. Efficient yolo: A lightweight model for embedded deep learning object detection
CN115601583A (en) Deep convolution network target identification method of double-channel attention mechanism
Balakrishnan et al. Meticulous fuzzy convolution C means for optimized big data analytics: adaptation towards deep learning
CN115063847A (en) Training method and device for facial image acquisition model
CN105096304A (en) Image characteristic estimation method and device
CN116992941A (en) Convolutional neural network pruning method and device based on feature similarity and feature compensation
CN109858543B (en) Image memorability prediction method based on low-rank sparse representation and relationship inference
CN113554104B (en) Image classification method based on deep learning model
CN112529637B (en) Service demand dynamic prediction method and system based on context awareness
CN110826726B (en) Target processing method, target processing device, target processing apparatus, and medium
Goh et al. Learning invariant color features with sparse topographic restricted Boltzmann machines
Sharma et al. LightNet: A Lightweight Neural Network for Image Classification
KR101763259B1 (en) Electronic apparatus for categorizing data and method thereof
CN113011446A (en) Intelligent target identification method based on multi-source heterogeneous data learning
CN113449817B (en) Image classification implicit model acceleration training method based on phantom gradient
Yu et al. A finger vein recognition method based on PCA-RBF Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant