CN114036503B - Migration attack method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114036503B
CN114036503B (application CN202111265538.1A)
Authority
CN
China
Prior art keywords
layer
network model
original image
preset network
output
Prior art date
Legal status
Active
Application number
CN202111265538.1A
Other languages
Chinese (zh)
Other versions
CN114036503A (en)
Inventor
唐可可
乔佳诚
娄添瑞
李树栋
顾钊铨
李默涵
仇晶
韩伟红
田志宏
殷丽华
Current Assignee
Guangzhou University
Original Assignee
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou University
Priority to CN202111265538.1A
Publication of CN114036503A
Application granted
Publication of CN114036503B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/55 - Detecting local intrusion or implementing counter-measures
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a migration attack method and device, an electronic device, and a storage medium. The migration attack method comprises: matching each layer output of an original image propagated in a preset network model, one-to-one, with each layer output of the corresponding initial adversarial sample propagated in the same model, and acquiring the 2-norm of each layer's output difference; calculating the Lyapunov exponent of the initial adversarial sample from the 2-norms of the layer output differences; determining an objective function corresponding to the initial adversarial sample based on the Lyapunov exponent and the loss function of the preset network model; and back-propagating the preset network model based on the objective function to obtain the adversarial sample corresponding to the original image. The application can stably generate adversarial samples with high transferability.

Description

Migration attack method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a migration attack method, a migration attack device, an electronic device, and a storage medium.
Background
Artificial intelligence is one of today's most active research fields, with deep applications in many directions; deep learning in particular has become one of the most widely used technologies, playing an important role in computer vision, natural language processing, data retrieval, and other fields. At the same time, however, the security problems of artificial intelligence systems are increasingly exposed. In many application scenarios, AI systems, and deep learning systems especially, are vulnerable to carefully designed small perturbations: superimposing slight noise on a normal sample can cause a neural network to produce an erroneous result, while the human eye cannot detect the perturbation. This poses a serious threat to the security of artificial intelligence today.
With the deepening study of deep learning technology, many adversarial attack methods have been proposed. Adversarial attacks can be mainly classified into white-box attacks and black-box attacks:
1. White-box attack: the attacker fully knows the internal structure and trained parameter values of the target model, possibly also its feature set, training method, and training data, and launches the adversarial attack with this knowledge.
2. Black-box attack: without knowing the target model's internal structure, training parameters, or algorithms, the attacker analyzes the model's input-output behavior to design and construct adversarial samples and carry out the attack. In practical application scenarios, the security threat posed by black-box attacks is more serious.
Among black-box attacks, one class is the migration attack. Because a black-box model's structure and parameters are unknown, it can instead be attacked with adversarial samples generated on some white-box model; an attack method that uses adversarial samples generated by a white-box model to attack a black-box model is called a migration attack.
Existing migration attack methods mainly include attacks based on intermediate-layer outputs, attacks based on data augmentation of the original data, attacks based on GAN-generator ideas, and the like. Although these methods can generate adversarial samples with strong transferability, the generation process is highly random, so such samples cannot be generated stably.
Disclosure of Invention
The application provides a migration attack method and device, an electronic device, and a storage medium, which can stably generate adversarial samples with strong transferability.
In order to achieve the above purpose, the present application adopts the following technical scheme:
in a first aspect, the present application provides a migration attack method, including:
S1, matching each layer output of an original image propagated in a preset network model, one-to-one, with each layer output of the corresponding initial adversarial sample propagated in the preset network model, and acquiring the 2-norm of each layer's output difference;
S2, calculating the Lyapunov exponent of the initial adversarial sample from the 2-norms of the layer output differences;
S3, determining an objective function corresponding to the initial adversarial sample based on the Lyapunov exponent and the loss function of the preset network model;
S4, back-propagating the preset network model based on the objective function to obtain an adversarial sample corresponding to the original image;
S5, taking that adversarial sample as the new initial adversarial sample, and repeating steps S1-S5 until the adversarial sample meets a preset condition.
According to one implementable manner of the first aspect of the present application, the preset network model has D+1 network layers, the l-th layer has N_l neurons, and the weight of the l-th layer is W_l, an N_l × N_{l-1} matrix. The l-th layer output propagated in the preset network model is given by:
h_l = W_l x_{l-1} + b_l;
x_l = σ(h_l);
where x_{l-1} denotes the input vector of the l-th layer, h_l the output vector of the l-th layer, b_l the bias vector of the l-th layer, σ the activation function, and x_l the input vector of layer l+1.
According to one implementable manner of the first aspect of the present application, the 2-norm of the difference between the l-th layer outputs of the original image and of its initial adversarial sample propagated in the preset network model can be expressed as:
d_l = ||h_l - h″_l||;
d_l = ||(W_l x_{l-1} + b_l) - (W_l x″_{l-1} + b_l)|| = ||W_l (x_{l-1} - x″_{l-1})||;
where h_l denotes the l-th layer output vector of the original image propagated in the preset network model, h″_l the l-th layer output vector of the initial adversarial sample corresponding to the original image, x_{l-1} the l-th layer input vector of the original image, and x″_{l-1} the l-th layer input vector of the initial adversarial sample.
According to one implementable manner of the first aspect of the application, the Lyapunov exponent of the initial adversarial sample can be expressed as:
λ = (1/D) · Σ_{i=1}^{D} ln(d_i / d_0);
where d_0 denotes the 2-norm of the difference between the original image and the initial adversarial sample, and d_i the 2-norm of the difference of their i-th layer outputs in the propagation of the preset network model.
According to one implementable manner of the first aspect of the application, the objective function can be expressed as:
F(X′) = J(X′, y_true) - θ · λ(X′);
where J denotes the loss function, X the original image, X′ the adversarial sample, y_true the true label value, λ the Lyapunov exponent, and θ a hyper-parameter.
According to one implementable manner of the first aspect of the present application, the back propagation may be realized by a gradient update process of the form:
x′_{n+1} = x′_n + α · sign(∇F(x′_n)), with x′_0 = x;
where x′_n denotes the adversarial sample obtained at the n-th gradient update, x′_{n+1} the adversarial sample obtained at the (n+1)-th gradient update, α a step size, x the initial adversarial sample, and x′_0 the initial gradient-update sample.
In a second aspect, the present application provides a migration attack device, comprising:
an output difference acquisition module, configured to match each layer output of an original image propagated in a preset network model, one-to-one, with each layer output of the corresponding initial adversarial sample propagated in the preset network model, and to acquire the 2-norm of each layer's output difference;
a parameter calculation module, configured to calculate the Lyapunov exponent of the initial adversarial sample from the 2-norms of the layer output differences;
an objective function determination module, configured to determine an objective function corresponding to the initial adversarial sample based on the Lyapunov exponent and the loss function of the preset network model;
an adversarial sample generation module, configured to back-propagate the preset network model based on the objective function to obtain an adversarial sample corresponding to the original image; and
a judging module, configured to judge whether the adversarial sample meets a preset condition.
In a third aspect, the present application provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the migration attack method of any one of the embodiments described above.
In a fourth aspect, a computer-readable storage medium stores a computer program adapted to be loaded and executed by a processor, so as to cause a computer device having the processor to perform the method of any one of the embodiments above.
Compared with the prior art, the present application introduces the Lyapunov exponent into the adversarial sample generation process and, by limiting the Lyapunov exponent of the adversarial sample, reduces the sample's complexity, so that adversarial samples with strong transferability can be generated stably.
Drawings
FIG. 1 is a flow chart of a migration attack method according to a preferred embodiment of the present application;
FIG. 2 is a block diagram of a migration attack device according to a preferred embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
Fig. 1 is a schematic flow chart of a preferred embodiment of a migration attack method provided by the present application.
As shown in fig. 1, the method includes:
S1, matching each layer output of an original image propagated in a preset network model, one-to-one, with each layer output of the corresponding initial adversarial sample propagated in the preset network model, and acquiring the 2-norm of each layer's output difference.
The original image is a sample image used to produce an adversarial sample; the initial adversarial sample is generated from the original image, for example by MI-FGSM or a similar method. The generation method of the initial adversarial sample is not limited here.
Specifically, the preset network model is a network model established in advance according to the target network model and able to realize the same image classification function. The target network model is the model an attacker wants to attack in a black-box attack; its internal structure, training parameters, and algorithms are unknown to the attacker, but the attacker can obtain samples and labels from it and use them to train the preset network model. Since the internal structure, training parameters, and algorithms of the trained preset network model are known, inputting the original image yields its output at every layer, and inputting the corresponding initial adversarial sample likewise yields its output at every layer. Each layer output obtained with the original image as input is matched one-to-one with the corresponding layer output obtained with the initial adversarial sample as input; the propagation through each network layer is regarded as the change at each time step, and the 2-norm of each layer's output difference is calculated.
In an embodiment, the preset network model has D+1 network layers, the l-th layer has N_l neurons, and the weight of the l-th layer is W_l, an N_l × N_{l-1} matrix. The l-th layer output propagated in the preset network model is given by:
h_l = W_l x_{l-1} + b_l;
x_l = σ(h_l);
where x_{l-1} denotes the input vector of the l-th layer, h_l the output vector of the l-th layer, b_l the bias vector of the l-th layer, σ the activation function, and x_l the input vector of layer l+1.
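As an illustrative sketch only (not the patented implementation), the layer-by-layer propagation h_l = W_l x_{l-1} + b_l can be mimicked with a toy fully connected network in plain Python; ReLU is assumed here as the activation σ, and all names are hypothetical:

```python
def forward_layers(x, weights, biases):
    """Propagate x through each layer, recording every pre-activation
    output h_l = W_l x_{l-1} + b_l; x_l = relu(h_l) feeds the next layer."""
    outputs = []
    for W, b in zip(weights, biases):
        # matrix-vector product plus bias, in plain Python lists
        h = [sum(w * xj for w, xj in zip(row, x)) + bi
             for row, bi in zip(W, b)]
        outputs.append(h)
        x = [max(0.0, v) for v in h]  # ReLU stands in for sigma
    return outputs

# tiny 2-layer example: identity-like first layer, summing second layer
outs = forward_layers([2.0, 3.0],
                      weights=[[[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0]]],
                      biases=[[1.0, -1.0], [0.0]])
```

The recorded per-layer outputs are exactly what steps S1-S2 below consume, once for the original image and once for the initial adversarial sample.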
S2, calculating the Lyapunov exponent of the initial adversarial sample from the 2-norms of the layer output differences.
In this embodiment, when the Lyapunov exponent is applied to the neural network field, the propagation of the original image and of the initial adversarial sample through each layer of the preset network model is regarded as the change at each time step, and the 2-norm of each layer's output difference is calculated as the distance. The Lyapunov exponent measures the degree of chaos of a sample, which is positively correlated with the degree to which the sample overfits the source model: once a sample is strongly overfitted, its elements vary irregularly with the parameters of the source model, making the sample more chaotic. A sample that strongly overfits the source model has low transferability; therefore, the lower the Lyapunov exponent of a sample, the more transferable it is.
In one embodiment, the 2-norm of the difference between the l-th layer outputs of the original image and of its initial adversarial sample propagated in the preset network model is expressed as:
d_l = ||h_l - h″_l||;
d_l = ||(W_l x_{l-1} + b_l) - (W_l x″_{l-1} + b_l)|| = ||W_l (x_{l-1} - x″_{l-1})||;
where h_l denotes the l-th layer output vector of the original image propagated in the preset network model, h″_l the l-th layer output vector of the initial adversarial sample corresponding to the original image, x_{l-1} the l-th layer input vector of the original image, and x″_{l-1} the l-th layer input vector of the initial adversarial sample.
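The per-layer 2-norm d_l = ||h_l - h″_l|| can be computed directly from two recorded output sequences; a minimal sketch with hypothetical names:

```python
import math

def layer_distances(outputs_clean, outputs_adv):
    """d_l = ||h_l - h''_l||: Euclidean (2-norm) gap of each layer's output."""
    return [math.sqrt(sum((a - b) ** 2 for a, b in zip(h, h2)))
            for h, h2 in zip(outputs_clean, outputs_adv)]

# two layers: first differs by (0, 2), second by (3,)
d = layer_distances([[1.0, 2.0], [0.0]], [[1.0, 0.0], [3.0]])
```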
In one embodiment, the Lyapunov exponent of the initial adversarial sample may be expressed as:
λ = (1/D) · Σ_{i=1}^{D} ln(d_i / d_0);
where d_0 denotes the 2-norm of the difference between the original image and the initial adversarial sample, and d_i the 2-norm of the difference of their i-th layer outputs in the propagation of the preset network model.
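A minimal sketch of the exponent as an average log expansion ratio over the D layer distances, assuming the form λ = (1/D) Σ ln(d_i/d_0) (function name is hypothetical):

```python
import math

def lyapunov_exponent(d0, layer_dists):
    """lambda = (1/D) * sum_i ln(d_i / d_0): average log expansion of the
    input perturbation d_0 across the D layer-output distances d_i."""
    D = len(layer_dists)
    return sum(math.log(di / d0) for di in layer_dists) / D

# two layers that expand the input gap by factors e and e^2 -> lambda = 1.5
lam = lyapunov_exponent(1.0, [math.e, math.e ** 2])
```

A positive value means the layer outputs diverge on average faster than the input gap, i.e. a more chaotic, less transferable sample.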
S3, determining an objective function corresponding to the initial adversarial sample based on the Lyapunov exponent and the loss function of the preset network model.
In one embodiment, the objective function may be expressed as:
F(X′) = J(X′, y_true) - θ · λ(X′);
where J denotes the loss function, X the original image, X′ the adversarial sample, y_true the true label value, λ the Lyapunov exponent, and θ a hyper-parameter.
Specifically, the loss value calculated by the loss function is the distance between the predicted value and the true label value: the greater this distance, the greater the loss value and the lower the prediction accuracy. The loss-value term controls the aggressiveness of the adversarial sample (the greater the loss value, the stronger the attack), while the Lyapunov exponent term controls the sample's transferability (the lower the exponent, the stronger the transferability). The hyper-parameter θ balances the two terms. In actual use, a user can tune θ according to the observed attack effect: for example, when the success rate of the finally generated adversarial sample on the transfer network model is low, θ can be decreased, and when its success rate on the target network model is low, θ can be increased, so that the finally generated adversarial sample is strongly aggressive while retaining high transferability.
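The trade-off the paragraph describes can be sketched as a scalar objective, assuming the additive form J - θ·λ (helper names are hypothetical):

```python
def attack_objective(loss_value, lyap, theta):
    """F = J - theta * lambda: a larger loss rewards attack strength,
    while theta penalizes a high (less transferable) Lyapunov exponent."""
    return loss_value - theta * lyap

# raising theta shifts weight from aggressiveness toward transferability:
# the same sample scores lower once its chaos is penalized more heavily
f_low = attack_objective(2.0, 0.5, theta=0.1)
f_high = attack_objective(2.0, 0.5, theta=2.0)
```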
S4, back-propagating the preset network model based on the objective function to obtain an adversarial sample corresponding to the original image.
In one embodiment, the back propagation may be realized by a gradient update process of the form:
x′_{n+1} = x′_n + α · sign(∇F(x′_n)), with x′_0 = x;
where x′_n denotes the adversarial sample obtained at the n-th gradient update, x′_{n+1} the adversarial sample obtained at the (n+1)-th gradient update, α a step size, x the initial adversarial sample, and x′_0 the initial gradient-update sample.
The initial iteration sample for back propagation may be the initial adversarial sample; that is, the initial adversarial sample serves as the initial gradient-update sample for back propagation. After the initial gradient-update sample is determined, an iterative fast-gradient-sign (FGSM-style) update based on the objective function is applied: the sample is gradient-updated through back propagation of the preset network model to obtain the adversarial sample corresponding to the original image.
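One FGSM-style sign-gradient step can be sketched as follows; the ε-ball clipping around the original image and the step size α are standard for iterative FGSM but are assumptions here, not quoted from the patent, and all names are hypothetical:

```python
def clip(v, lo, hi):
    return max(lo, min(hi, v))

def fgsm_step(x_adv, x_orig, grad, alpha, eps):
    """x'_{n+1} = clip(x'_n + alpha * sign(grad)): one gradient-ascent step
    on the objective, kept inside an eps-ball around x_orig and in [0, 1]."""
    sign = lambda g: (g > 0) - (g < 0)
    stepped = [xi + alpha * sign(g) for xi, g in zip(x_adv, grad)]
    return [clip(clip(s, xo - eps, xo + eps), 0.0, 1.0)
            for s, xo in zip(stepped, x_orig)]

# a step of 0.1 gets clipped back to the eps = 0.05 ball around the original
x_next = fgsm_step([0.5, 0.5], [0.5, 0.5], grad=[1.0, -1.0],
                   alpha=0.1, eps=0.05)
```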
In this embodiment, gradient-updating the initial gradient-update sample through back propagation of the preset network model makes the adversarial sample differ from the original image, thereby improving the effectiveness of the adversarial sample.
S5, taking the adversarial sample as the new initial adversarial sample, and repeating steps S1-S5 until the adversarial sample meets a preset condition.
Specifically, after an adversarial sample is generated through back propagation, it is judged whether its Lyapunov exponent has reached a preset threshold, i.e., whether the sample meets the preset condition. When the Lyapunov exponent reaches the preset threshold, the sample meets the condition and is taken as the adversarial sample corresponding to the original image; when it does not, the sample is taken as the initial adversarial sample corresponding to the original image, and steps S1-S5 are repeated until the condition is met.
The preset threshold value can be set according to actual conditions.
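The S1-S5 iteration with its threshold-based stopping condition can be sketched as a generic refinement loop; all names are hypothetical, and the per-round update and exponent computations are abstracted as callables:

```python
def refine_until(sample, update_fn, lyapunov_fn, threshold, max_iters=100):
    """Repeat the S1-S5 cycle: update the adversarial sample until its
    Lyapunov exponent drops to the preset threshold, or a cap is hit."""
    for _ in range(max_iters):
        if lyapunov_fn(sample) <= threshold:
            break  # preset condition met: sample is transferable enough
        sample = update_fn(sample)
    return sample

# toy scalar stand-in: each "update" lowers the exponent by one,
# so refinement stops as soon as the value reaches the threshold
result = refine_until(5, update_fn=lambda s: s - 1,
                      lyapunov_fn=lambda s: s, threshold=2)
```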
In this embodiment, iterating the adversarial sample multiple times limits its Lyapunov exponent and reduces its complexity, so that adversarial samples with high transferability can be generated stably.
Fig. 2 is a block diagram of a migration attack apparatus according to a preferred embodiment of the present application.
As shown in fig. 2, the apparatus includes:
an output difference acquisition module 201, configured to match each layer output of an original image propagated in a preset network model, one-to-one, with each layer output of the corresponding initial adversarial sample propagated in the preset network model, and to acquire the 2-norm of each layer's output difference;
a parameter calculation module 202, configured to calculate the Lyapunov exponent of the initial adversarial sample from the 2-norms of the layer output differences;
an objective function determination module 203, configured to determine an objective function corresponding to the initial adversarial sample based on the Lyapunov exponent and the loss function of the preset network model;
an adversarial sample generation module 204, configured to back-propagate the preset network model based on the objective function to obtain an adversarial sample corresponding to the original image;
a judging module 205, configured to judge whether the adversarial sample meets a preset condition.
In one embodiment, the output difference acquisition module 201 comprises:
a layer output acquisition unit, configured to acquire the l-th layer output propagated in the preset network model.
In one embodiment, the output difference acquisition module 201 comprises:
an output difference representation unit, configured to represent the 2-norm of the difference between the l-th layer outputs of the original image and of its initial adversarial sample propagated in the preset network model.
In one embodiment, the parameter calculation module 202 comprises:
a parameter representation unit, configured to represent the Lyapunov exponent of the initial adversarial sample.
In one embodiment, the objective function determination module 203 comprises:
an objective function representation unit, configured to represent the objective function.
In one embodiment, the adversarial sample generation module 204 comprises:
a gradient update process unit, configured to represent the gradient update process.
In one embodiment, an electronic device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the migration attack method of any one of the embodiments above.
In an embodiment, a computer readable storage medium stores a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the steps of a migration attack method as described above.
While the foregoing is directed to the preferred embodiments of the present application, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the application; such changes and modifications are also intended to fall within the scope of the application.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a ROM (Read-Only Memory), a RAM (Random Access Memory ), or the like.

Claims (5)

1. A migration attack method, comprising the steps of:
S1, matching each layer output of an original image propagated in a preset network model, one-to-one, with each layer output of the corresponding initial adversarial sample propagated in the preset network model, and acquiring the 2-norm of each layer's output difference;
wherein the preset network model has D+1 network layers, the l-th layer has N_l neurons, the weight of the l-th layer of the preset network model is W_l, W_l being an N_l × N_{l-1} matrix, and the l-th layer output propagated in the preset network model is given by:
h_l = W_l x_{l-1} + b_l;
x_l = σ(h_l);
where x_{l-1} denotes the input vector of the l-th layer, h_l the output vector of the l-th layer, b_l the bias vector of the l-th layer, σ the activation function, and x_l the input vector of layer l+1;
the 2-norm of the difference between the l-th layer outputs of the original image and of its initial adversarial sample propagated in the preset network model is expressed as:
d_l = ||h_l - h″_l|| = ||(W_l x_{l-1} + b_l) - (W_l x″_{l-1} + b_l)||;
where h_l denotes the l-th layer output vector of the original image propagated in the preset network model, h″_l the l-th layer output vector of the initial adversarial sample corresponding to the original image, x_{l-1} the l-th layer input vector of the original image, and x″_{l-1} the l-th layer input vector of the initial adversarial sample;
S2, calculating the Lyapunov exponent of the initial adversarial sample from the 2-norms of the layer output differences; the Lyapunov exponent of the initial adversarial sample is expressed as:
λ = (1/D) · Σ_{i=1}^{D} ln(d_i / d_0);
where d_0 denotes the 2-norm of the difference between the original image and the initial adversarial sample, and d_i the 2-norm of the difference of their i-th layer outputs in the propagation of the preset network model;
S3, determining an objective function corresponding to the initial adversarial sample based on the Lyapunov exponent and the loss function of the preset network model; the objective function is expressed as:
F(X′) = J(X′, y_true) - θ · λ(X′);
where J denotes the loss function, X the original image, X′ the adversarial sample, y_true the true label value, and θ a hyper-parameter;
S4, back-propagating the preset network model based on the objective function to obtain an adversarial sample corresponding to the original image;
S5, taking the adversarial sample as a new initial adversarial sample, and repeating steps S1-S5 until the adversarial sample meets a preset condition.
2. The migration attack method according to claim 1, wherein the back propagation is realized by a gradient update process of the form:
x′_{n+1} = x′_n + α · sign(∇F(x′_n)), with x′_0 = x;
where x′_n denotes the adversarial sample obtained at the n-th gradient update, x′_{n+1} the adversarial sample obtained at the (n+1)-th gradient update, α a step size, and x′_0 the initial gradient-update sample.
3. A migration attack apparatus, comprising:
The output difference value acquisition module is used for outputting each layer of output transmitted by an original image in a preset network model and outputting each layer of output transmitted by an initial countermeasure sample corresponding to the original image in the preset network model in a one-to-one correspondence manner, so as to acquire a two-norm of the difference value of each layer of output; the preset network model is provided with a D+1 layer network, and the first layer network is Layer network has/>Neurons of the preset network modelThe layer network is weighted/>Said/>Is one/>Is propagated in the preset network modelThe layer output has the following formula:
in the method, in the process of the invention, Represents the/>Output vector of layer,/>Represents the/>Bias vector of layer, brave represents activation function,/>Represents the/>An input vector of +1 layer;
the two-norm of the difference of the l-th layer outputs of the original image and the initial adversarial sample of the original image propagated through the preset network model is expressed as:

δ_l = ‖ h_l(X) − h_l(X_adv) ‖₂ = ‖ φ(u_l(X)) − φ(u_l(X_adv)) ‖₂

wherein h_l(X) represents the output vector of the l-th layer in the propagation of the original image through the preset network model, h_l(X_adv) represents the output vector of the l-th layer in the propagation of the initial adversarial sample corresponding to the original image through the preset network model, u_l(X) represents the input vector of the l-th layer in the propagation of the original image through the preset network model, and u_l(X_adv) represents the input vector of the l-th layer in the propagation of the initial adversarial sample corresponding to the original image through the preset network model;
The parameter calculation module is used for calculating the Lyapunov exponent of the initial adversarial sample according to the two-norm of the difference of each layer output; the Lyapunov exponent of the initial adversarial sample is expressed as:

λ = (1/D) · Σ_{l=1}^{D} ln( δ_l / δ_(l−1) )

wherein δ_0 = ‖ X − X_adv ‖₂ represents the two-norm of the difference between the original image and the initial adversarial sample, and δ_l represents the two-norm of the difference of the l-th layer outputs of the original image and the initial adversarial sample propagated through the preset network model;
The objective function determining module is configured to determine an objective function corresponding to the initial adversarial sample based on the Lyapunov exponent and the loss function of the preset network model, the objective function being expressed as:

L(X_adv) = J(X_adv, y) + θ · λ

wherein J is used to represent the loss function, X represents the original image, y represents the real label value, and θ is used to represent a hyper-parameter;
The adversarial sample generation module is used for back-propagating through the preset network model based on the objective function to obtain an adversarial sample corresponding to the original image;
The judging module is used for judging whether the adversarial sample meets a preset condition.
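Putting the modules together, here is a self-contained sketch of the whole pipeline on a toy fully connected network. The tanh activation, the cross-entropy loss, the numerical gradient, and the fixed iteration budget are all illustrative assumptions; the patent leaves these choices unspecified.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy network: h_l = tanh(W_l h_{l-1} + b_l), three layers (D = 3)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(2, 4))]
bs = [np.zeros(4), np.zeros(4), np.zeros(2)]

def layer_outputs(x):
    """Output difference acquisition: per-layer output vectors h_1..h_D."""
    h, outs = x, []
    for W, b in zip(Ws, bs):
        h = np.tanh(W @ h + b)
        outs.append(h)
    return outs

def objective(x, x_adv, y, theta):
    """Parameter calculation + objective determination modules."""
    hs, ha = layer_outputs(x), layer_outputs(x_adv)
    d = [np.linalg.norm(x - x_adv) + 1e-12]                # delta_0
    d += [np.linalg.norm(u - v) + 1e-12 for u, v in zip(hs, ha)]
    lam = float(np.mean(np.log(np.array(d[1:]) / np.array(d[:-1]))))
    z = ha[-1]
    p = np.exp(z - z.max()); p /= p.sum()                  # softmax on last layer
    loss = -np.log(p[y] + 1e-12)                           # cross-entropy, true label
    return loss + theta * lam

def attack(x, y, theta=0.1, alpha=0.05, steps=5, fd=1e-4):
    """Adversarial sample generation module, using central differences."""
    x_adv = x + 1e-3 * rng.normal(size=x.shape)            # initial adversarial sample
    for _ in range(steps):                                 # judging module: fixed budget
        g = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            e = np.zeros_like(x_adv); e[i] = fd
            g[i] = (objective(x, x_adv + e, y, theta)
                    - objective(x, x_adv - e, y, theta)) / (2 * fd)
        x_adv = x_adv + alpha * np.sign(g)                 # ascend the objective
    return x_adv

x = np.array([0.2, -0.1, 0.4])
x_adv = attack(x, y=0)
```

With `steps=5` and `alpha=0.05`, each pixel moves at most about 0.25 from the original, so the perturbation stays small while the objective pushes the per-layer differences to grow.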
4. An electronic device, characterized by comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the migration attack method according to any one of claims 1 to 2 when executing the computer program.
5. A computer readable storage medium, characterized in that the computer readable storage medium stores therein a computer program adapted to be loaded and executed by a processor to cause a computer device having the processor to perform the method of any of claims 1 to 2.
CN202111265538.1A 2021-10-28 2021-10-28 Migration attack method and device, electronic equipment and storage medium Active CN114036503B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111265538.1A CN114036503B (en) 2021-10-28 2021-10-28 Migration attack method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111265538.1A CN114036503B (en) 2021-10-28 2021-10-28 Migration attack method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114036503A CN114036503A (en) 2022-02-11
CN114036503B (en) 2024-04-30

Family

ID=80142259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111265538.1A Active CN114036503B (en) 2021-10-28 2021-10-28 Migration attack method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114036503B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948663A (en) * 2019-02-27 2019-06-28 天津大学 A kind of confrontation attack method of the adaptive step based on model extraction
CN111340180A (en) * 2020-02-10 2020-06-26 中国人民解放军国防科技大学 Countermeasure sample generation method and device for designated label, electronic equipment and medium
CN111382837A (en) * 2020-02-05 2020-07-07 鹏城实验室 Countermeasure sample generation method based on depth product quantization
CN111461307A (en) * 2020-04-02 2020-07-28 武汉大学 General disturbance generation method based on generation countermeasure network
CN112488172A (en) * 2020-11-25 2021-03-12 北京有竹居网络技术有限公司 Method, device, readable medium and electronic equipment for resisting attack
CN113066002A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Generation method of countermeasure sample, training method of neural network, training device of neural network and equipment
CN113178255A (en) * 2021-05-18 2021-07-27 西安邮电大学 Anti-attack method of medical diagnosis model based on GAN

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016000035A1 (en) * 2014-06-30 2016-01-07 Evolving Machine Intelligence Pty Ltd A system and method for modelling system behaviour
US11164085B2 (en) * 2019-04-25 2021-11-02 Booz Allen Hamilton Inc. System and method for training a neural network system


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Artificial Intelligence Security: Threats and Countermeasures; Mahdi Ahmadi et al.; Expert Systems with Applications; 2020-05-15; Vol. 146; pp. 1-15 *
Discussion on the Interpretability of SAR Image Target Recognition; Guo Weiwei et al.; Journal of Radars; 2020-06-17; Vol. 9, No. 3; pp. 462-476 *
Air Combat Target Threat Assessment Based on PCA-MPSO-ELM; Xi Zhifei et al.; Acta Aeronautica et Astronautica Sinica; 2020-05-22; Vol. 41, No. 9; pp. 216-231 *
Color Image Encryption Algorithm Based on Combined Chaotic Systems and an Exchange Strategy; Ye Ruisong et al.; Journal of Xuzhou Institute of Technology (Natural Sciences Edition); 2020-09-30; Vol. 35, No. 3; pp. 1-10 *
Adversarial Attacks and Defenses in Deep Learning; Liu Ximeng et al.; Chinese Journal of Network and Information Security; 2020-10-13; Vol. 6, No. 5; pp. 36-53 *


Similar Documents

Publication Publication Date Title
CN110276377B (en) Confrontation sample generation method based on Bayesian optimization
CN113554089B (en) Image classification countermeasure sample defense method and system and data processing terminal
CN113222960B (en) Deep neural network confrontation defense method, system, storage medium and equipment based on feature denoising
CN111753881A (en) Defense method for quantitatively identifying anti-attack based on concept sensitivity
CN112633280B (en) Countermeasure sample generation method and system
CN112580728B (en) Dynamic link prediction model robustness enhancement method based on reinforcement learning
CN115860112B (en) Model inversion method-based countermeasure sample defense method and equipment
CN112085050A (en) Antagonistic attack and defense method and system based on PID controller
CN111047054A (en) Two-stage countermeasure knowledge migration-based countermeasure sample defense method
CN113255526B (en) Momentum-based confrontation sample generation method and system for crowd counting model
Vyas et al. Evaluation of adversarial attacks and detection on transfer learning model
WO2019234156A1 (en) Training spectral inference neural networks using bilevel optimization
CN113935496A (en) Robustness improvement defense method for integrated model
CN116543240B (en) Defending method for machine learning against attacks
CN114036503B (en) Migration attack method and device, electronic equipment and storage medium
CN115719085B (en) Deep neural network model inversion attack defense method and device
CN115481719B (en) Method for defending against attack based on gradient
CN116824334A (en) Model back door attack countermeasure method based on frequency domain feature fusion reconstruction
CN115758337A (en) Back door real-time monitoring method based on timing diagram convolutional network, electronic equipment and medium
CN115238271A (en) AI security detection method based on generative learning
CN112861601A (en) Method for generating confrontation sample and related equipment
CN114139601A (en) Evaluation method and system for artificial intelligence algorithm model of power inspection scene
CN113487506A (en) Countermeasure sample defense method, device and system based on attention denoising
Pittman et al. Stovepiping and Malicious Software: A Critical Review of AGI Containment
CN117197589B (en) Target classification model countermeasure training method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant