CN113449865A - Optimization method for enhancing training artificial intelligence model - Google Patents

Optimization method for enhancing training artificial intelligence model

Info

Publication number
CN113449865A
CN113449865A (application CN202111001506.0A)
Authority
CN
China
Prior art keywords
training
model
sample
attack
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111001506.0A
Other languages
Chinese (zh)
Other versions
CN113449865B (en)
Inventor
周晓辉
袁博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Computing Chip Shenzhen Information Technology Co ltd
Original Assignee
Computing Chip Shenzhen Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Computing Chip Shenzhen Information Technology Co ltd filed Critical Computing Chip Shenzhen Information Technology Co ltd
Priority to CN202111001506.0A priority Critical patent/CN113449865B/en
Publication of CN113449865A publication Critical patent/CN113449865A/en
Application granted granted Critical
Publication of CN113449865B publication Critical patent/CN113449865B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an optimization method for enhanced training of an artificial intelligence model, comprising the following steps in sequence: obtaining an original data set and a pre-training model; generating two adversarial sample sets using a three-level gradient optimization generation method and a transformation algorithm function; generating two mixed attack sample sets; and performing differential training until a recognition model whose defense performance meets the requirements is obtained. By combining the three-level gradient optimization method with a transformation algorithm function, the invention generates two adversarial sample sets with strong attack capability (one serving as a reference, the other as an evolutionary enhancement set) and strengthens the association between the pre-training model and the adversarial samples. The attack capability of the adversarial samples is thereby greatly improved, a defense model with high defense performance can be obtained in the differential training, and the goal of enhanced training of the artificial intelligence model is achieved. The design is reasonable, a model with a high defense level can be obtained efficiently, and the method is suitable for large-scale popularization.

Description

Optimization method for enhancing training artificial intelligence model
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to an optimization method for enhancing training of an artificial intelligence model.
Background
Artificial intelligence refers to intelligence exhibited by machines made by humans, and generally denotes techniques that reproduce human intelligence through ordinary computer programs. Machine learning has been widely applied in recent years, but it still faces many security problems; for example, adversarial samples reduce the safety of artificial intelligence. An adversarial sample is data that can directly change or influence the recognition result of a machine learning model: it is obtained by using an algorithm to apply fine, well-constructed perturbations to the original data, so that an otherwise normal machine learning model misclassifies the input or fails to recognize it. To defend against adversarial-sample attacks, the most direct method is to optimize the model with adversarial training, thereby improving the model's security. Adversarial training improves the robustness of deep learning against adversarial samples. Since the existence of adversarial samples for deep learning network models was demonstrated, adversarial training with adversarial samples has been widely used to ensure the robustness of neural network models against attacks.
At present, many training stages use a single generation algorithm to produce adversarial samples, so the generated adversarial samples are not aggressive enough toward the artificial intelligence model, and the resulting defense model still leaves attacks with a considerable probability of success. Moreover, the low correlation between the pre-training model and the attack model greatly increases the dispersion of the adversarial samples, which adds invalid training workload; the training efficiency therefore needs to be improved.
Disclosure of Invention
To address these technical problems, the invention provides an optimization method for enhanced training of an artificial intelligence model, which has a reasonable design and a simple structure, provides higher training intensity and higher efficiency, and helps obtain a model with stronger defense capability.
To achieve the above object, the technical solution adopted by the present invention is as follows. The optimization method for enhanced training of an artificial intelligence model provided by the present invention includes:
S1, extracting features from the samples of the acquired original data set based on the neural network model to be trained, taking the extracted features as the original data set, and training a recognition model that has not undergone adversarial training, namely the pre-training model;
S2, generating adversarial sample set I using a three-level gradient optimization generation method; taking the pre-training model as the parameter for calculating the transformation probability in a transformation algorithm function, transforming the iteratively generated adversarial samples of the gradient optimization generation method, and generating adversarial sample set II through this transformation;
S3, mixing adversarial sample set I with the original data set to generate mixed attack sample set I, mixing adversarial sample set I, adversarial sample set II, and the original data set to generate mixed attack sample set II, and performing attack training of the recognition model again with the two mixed attack sample sets to obtain two post-attack-training models;
S4, performing differential training on the two post-attack-training models: mixed attack sample set I performs the pre-attack and mixed attack sample set II performs the secondary attack; if the difference function indicating a successful moving target defense is reached, the final defense model, namely the trained recognition model, is obtained; otherwise, adversarial sample set II is regenerated by adjusting the iteration step size of the gradient optimization generation algorithm and the transformation probability of the transformation algorithm in step S2, and steps S3-S4 are executed again.
Preferably, the gradient optimization generation method in S2 includes the following steps:
S2.1, obtaining an adversarial sample x' = x + r by maximizing the loss function J(x + r, y); the perturbation r is sought in the direction in which the gradient of the loss function with respect to x changes most, i.e.

r = ε · sign(∇_x J(x, y)),

where sign(·) denotes the sign function, ∇_x J(x, y) is the gradient of the loss function with respect to the input x, and ε bounds the perturbation magnitude (‖r‖∞ ≤ ε);
S2.2, finding the perturbation iteratively, i.e.

x'_{t+1} = Proj_{x,ε}( x'_t + α · sign(∇_x J(x'_t, y)) ),

where t is the current iteration number, the iteration step size α = ε / T, T is the total number of iterations, and Proj projects each updated adversarial sample back into the ε-neighborhood of x;
S2.3, replacing the iteration in S2.2 with a momentum iteration; the multi-step iteration method based on momentum gradient descent is expressed as

g_{t+1} = u · g_t + ∇_x J(x'_t, y) / ‖∇_x J(x'_t, y)‖_1,  x'_{t+1} = x'_t + α · sign(g_{t+1}),

where g_t is the accumulated gradient at iteration t and u is the momentum decay factor.
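The three levels (single-step, iterative, momentum) described above closely resemble the well-known FGSM, I-FGSM and MI-FGSM attacks. The following sketch illustrates the momentum variant under that assumption; model and loss_fn are generic PyTorch-style placeholders and are not components defined in the original text.

import torch

def momentum_gradient_attack(model, loss_fn, x, y, eps=8/255, T=10, u=1.0):
    """Momentum-based multi-step attack matching steps S2.1-S2.3.

    eps bounds the perturbation, T is the total number of iterations
    (so the step size alpha = eps / T), and u is the momentum decay factor.
    Inputs are assumed to be scaled to [0, 1]."""
    alpha = eps / T
    x = x.detach()
    x_adv = x.clone()
    g = torch.zeros_like(x)
    for _ in range(T):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # accumulate the L1-normalized gradient with momentum (step S2.3);
        # normalizing over the whole tensor is a simplification of per-sample normalization
        g = u * g + grad / grad.abs().sum().clamp_min(1e-12)
        # take a signed step and project back into the eps-neighborhood of x (step S2.2)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()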
Preferably, the transformation algorithm function in S2 transforms each iterated adversarial sample through a random transformation function with transformation probability p, where the probability coefficients of the random transformation satisfy a normalization constraint and G(x), a function of the pre-training model, is the parameter from which the transformation probability is calculated.
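As an illustration of how such a probabilistic transformation can be applied to the iterated adversarial samples, the following sketch assumes a simple random-shift transformation and a hypothetical g_score supplied by the pre-training model (standing in for the G(x) term); the concrete transformation and probability coefficients are not specified in the original text.

import numpy as np

def random_transform(x, rng):
    # hypothetical random transformation: a small random circular shift of the sample vector
    shift = rng.integers(-2, 3)
    return np.roll(x, shift, axis=-1)

def transform_adv_sample(x_adv, g_score, base_prob=0.5, rng=None):
    """Transform an iterated adversarial sample with probability p.

    g_score is assumed to come from the pre-training model and to modulate
    the base transformation probability."""
    rng = rng or np.random.default_rng()
    p = float(np.clip(base_prob * g_score, 0.0, 1.0))  # transformation probability
    if rng.random() < p:
        return random_transform(x_adv, rng)   # transformed with probability p
    return x_adv                              # unchanged with probability 1 - p

Applying transform_adv_sample to every sample of adversarial sample set I would yield adversarial sample set II in this sketch.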
Preferably, the perturbation r in S2.1 satisfies a bi-objective function defined over the perturbed sample x + r0: f1 is the confidence of the perturbed sample in its original category under the trained classification model F, f2 = D(r0) is a distance function measuring the degree of perturbation, and Z(r0) denotes the distribution of all perturbed samples along these two objectives over the whole perturbation range.
Preferably, Z(r0) is optimized as a multi-objective optimization problem with boundary constraints, searching for the minimum perturbation, i.e. minimizing (f1, f2) subject to the bounds on r0.
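A minimal sketch of this selection step, assuming the two objectives f1 (confidence in the original class) and f2 (perturbation distance) defined above and a simple non-dominated (Pareto) filter rather than any particular solver specified in the original text:

def pareto_minimal(samples):
    """Keep perturbed samples that are not dominated in (f1, f2).

    Each entry of samples is (x_perturbed, f1, f2); f1 is the confidence the
    classifier still assigns to the original class, f2 the perturbation distance.
    A sample is kept if no other sample is at least as good on both objectives
    and strictly better on one."""
    kept = []
    for i, (_, f1_i, f2_i) in enumerate(samples):
        dominated = any(
            (f1_j <= f1_i and f2_j <= f2_i) and (f1_j < f1_i or f2_j < f2_i)
            for j, (_, f1_j, f2_j) in enumerate(samples) if j != i
        )
        if not dominated:
            kept.append(samples[i])
    return kept

The kept samples approximate the minimum-perturbation front of Z(r0) and can serve as the representative samples used for the adversarial training.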
preferably, the differential function of moving target defense in S4 is:
Figure 170174DEST_PATH_IMAGE021
wherein, in the step (A),
Figure 391071DEST_PATH_IMAGE022
a proxy model representing the choice of an attacker,
Figure 918260DEST_PATH_IMAGE023
a set of proxy models is represented that,
Figure 591818DEST_PATH_IMAGE024
a system membership model representing the defensive party's selection by the game, N represents a set of system membership models,
Figure 197243DEST_PATH_IMAGE025
to represent
Figure 771182DEST_PATH_IMAGE026
On-generated counter sample attack
Figure 171070DEST_PATH_IMAGE027
The higher the DL value, the better the moving object defense effect.
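One way to realise such a score is sketched below. The aggregation used here (one minus the average cross-model attack success rate) is an assumption for illustration, not the exact formula of the original text, and attack_success_rate is a hypothetical helper.

def moving_target_defense_score(proxy_models, member_models, attack_success_rate):
    """Hedged sketch of a moving-target-defense difference score DL.

    attack_success_rate(m, n) is assumed to return the success rate of adversarial
    samples generated on proxy model m when they are used to attack member model n.
    A higher DL means the member models resist the transferred attacks better."""
    rates = [attack_success_rate(m, n)
             for m in proxy_models
             for n in member_models]
    return 1.0 - sum(rates) / len(rates)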
Compared with the prior art, the invention has the advantages and positive effects that:
1. In the optimization method for enhanced training of an artificial intelligence model, two adversarial sample sets with strong attack capability are generated by combining the three-level gradient optimization method with a transformation algorithm function, one set serving as a reference and the other as an evolutionary enhancement set, and the association between the pre-training model and the adversarial samples is strengthened. The attack capability of the adversarial samples is thereby greatly improved, a defense model with high defense performance can be obtained in the differential training, and the goal of enhanced training of the artificial intelligence model is finally achieved. The design is reasonable, a model with a high defense level can be obtained efficiently, and the method is suitable for large-scale popularization.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and thus the present invention is not limited to the specific embodiments of the present disclosure.
The embodiment of the invention provides an optimization method for enhanced training of an artificial intelligence model, comprising the following steps:
S1, extracting features from the samples of the acquired original data set based on the neural network model to be trained, taking the extracted features as the original data set, and training a recognition model that has not undergone adversarial training, namely the pre-training model;
S2, generating adversarial sample set I using a three-level gradient optimization generation method; taking the pre-training model as the parameter for calculating the transformation probability in a transformation algorithm function, transforming the iteratively generated adversarial samples of the gradient optimization generation method, and generating adversarial sample set II through this transformation;
S3, mixing adversarial sample set I with the original data set to generate mixed attack sample set I, mixing adversarial sample set I, adversarial sample set II, and the original data set to generate mixed attack sample set II, and performing attack training of the recognition model again with the two mixed attack sample sets to obtain two post-attack-training models;
S4, performing differential training on the two post-attack-training models: mixed attack sample set I performs the pre-attack and mixed attack sample set II performs the secondary attack; if the difference function indicating a successful moving target defense is reached, the final defense model, namely the trained recognition model, is obtained; otherwise, adversarial sample set II is regenerated by adjusting the iteration step size of the gradient optimization generation algorithm and the transformation probability of the transformation algorithm in step S2, and steps S3-S4 are executed again until a model that defends successfully is obtained.
By combining the three-level gradient optimization method with a transformation algorithm function, two adversarial sample sets with strong attack capability can be generated, one serving as a reference and the other as an evolutionary enhancement set, while the association between the pre-training model and the adversarial samples is strengthened. The attack capability of the adversarial samples is thereby greatly improved, a defense model with high defense performance can be obtained in the differential training, the goal of enhanced training of the artificial intelligence model is finally achieved, the design is reasonable, and a model with a high defense level can be obtained.
More specifically, the gradient optimization generation method in step S2 of the present invention includes the following steps:
S2.1, obtaining an adversarial sample x' = x + r by maximizing the loss function J(x + r, y); the perturbation r is sought in the direction in which the gradient of the loss function with respect to x changes most, i.e.

r = ε · sign(∇_x J(x, y)),

where sign(·) denotes the sign function, ∇_x J(x, y) is the gradient of the loss function with respect to the input x, and ε bounds the perturbation magnitude (‖r‖∞ ≤ ε);
S2.2, finding the perturbation iteratively, i.e.

x'_{t+1} = Proj_{x,ε}( x'_t + α · sign(∇_x J(x'_t, y)) ),

where t is the current iteration number, the iteration step size α = ε / T, T is the total number of iterations, and Proj projects each updated adversarial sample back into the ε-neighborhood of x;
S2.3, replacing the iteration in S2.2 with a momentum iteration; the multi-step iteration method based on momentum gradient descent is expressed as

g_{t+1} = u · g_t + ∇_x J(x'_t, y) / ‖∇_x J(x'_t, y)‖_1,  x'_{t+1} = x'_t + α · sign(g_{t+1}),

where g_t is the accumulated gradient at iteration t and u is the momentum decay factor.
In order to improve the attack capability of the mixed attack sample set used for the secondary attack, the transformation algorithm function in S2 transforms each iterated adversarial sample through a random transformation function with transformation probability p, where the probability coefficients of the random transformation satisfy a normalization constraint and G(x), a function of the pre-training model, is the parameter from which the transformation probability is calculated. At the same time, the pre-training model is effectively associated with the adversarial samples, which reduces the dispersion of the adversarial samples, reduces the invalid training workload, improves the training efficiency, and further improves the attack capability of the samples.
In order to enhance the robustness of the model and to find samples for which the model's confidence changes rapidly, the perturbation r in S2.1 satisfies a bi-objective function defined over the perturbed sample x + r0: f1 is the confidence of the perturbed sample in its original category under the trained classification model F, f2 = D(r0) is a distance function measuring the degree of perturbation, and Z(r0) denotes the distribution of all perturbed samples along these two objectives over the whole perturbation range.
Further, because the set Z(r0) contains too many samples to enumerate all qualifying perturbed samples before performing the adversarial training of the model, representative samples must be selected from Z(r0) for the adversarial training. To find the minimum perturbation that still causes a large change in the classifier's output, Z(r0) is optimized as a multi-objective optimization problem with boundary constraints, searching for the minimum perturbation, i.e. minimizing (f1, f2) subject to the bounds on r0.
in order to obtain a defense model with a moving defense function, the difference function of moving target defense in the invention S4 is:
Figure 514578DEST_PATH_IMAGE048
wherein the content of the first and second substances,
Figure 832164DEST_PATH_IMAGE049
a proxy model representing the choice of an attacker,
Figure 266688DEST_PATH_IMAGE050
a set of proxy models is represented that,
Figure 854795DEST_PATH_IMAGE051
a system membership model representing the defensive party's selection by the game, N represents a set of system membership models,
Figure 218037DEST_PATH_IMAGE052
to represent
Figure 258805DEST_PATH_IMAGE053
On-generated counter sample attack
Figure 231440DEST_PATH_IMAGE055
The higher the DL value, the better the moving object defense effect.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention to this form. Any person skilled in the art may, without departing from the technical spirit of the present invention, make equivalent modifications or changes to the above embodiment, and any simple modification, equivalent change, or adaptation of the above embodiment made in accordance with the technical spirit of the present invention still falls within the protection scope of the present invention.

Claims (6)

1. An optimization method for enhanced training of an artificial intelligence model, comprising step S1: extracting features from the samples of the acquired original data set based on the neural network model to be trained, taking the extracted features as the original data set, and training a recognition model that has not undergone adversarial training, namely the pre-training model;
the method being characterized by further comprising the following steps:
S2, generating adversarial sample set I using a three-level gradient optimization generation method; taking the pre-training model as the parameter for calculating the transformation probability in a transformation algorithm function, transforming the iteratively generated adversarial samples of the gradient optimization generation method, and generating adversarial sample set II through this transformation;
S3, mixing adversarial sample set I with the original data set to generate mixed attack sample set I, mixing adversarial sample set I, adversarial sample set II, and the original data set to generate mixed attack sample set II, and performing attack training of the recognition model again with the two mixed attack sample sets to obtain two post-attack-training models;
S4, performing differential training on the two post-attack-training models: mixed attack sample set I performs the pre-attack as a reference and mixed attack sample set II performs the secondary attack; if the secondary attack reaches the difference function indicating a successful moving target defense, the final defense model, namely the trained recognition model, is obtained; otherwise, adversarial sample set II is regenerated by adjusting the iteration step size of the gradient optimization generation algorithm and the transformation probability of the transformation algorithm in step S2, and steps S3-S4 are executed again.
2. The optimization method for enhanced training of an artificial intelligence model according to claim 1, wherein the gradient optimization generation method in S2 comprises the following steps:
S2.1, obtaining an adversarial sample x' = x + r by maximizing the loss function J(x + r, y); the perturbation r is sought in the direction in which the gradient of the loss function with respect to x changes most, i.e.

r = ε · sign(∇_x J(x, y)),

where sign(·) denotes the sign function, ∇_x J(x, y) is the gradient of the loss function with respect to the input x, and ε bounds the perturbation magnitude (‖r‖∞ ≤ ε);
S2.2, finding the perturbation iteratively, i.e.

x'_{t+1} = Proj_{x,ε}( x'_t + α · sign(∇_x J(x'_t, y)) ),

where t is the current iteration number, the iteration step size α = ε / T, T is the total number of iterations, and Proj projects each updated adversarial sample back into the ε-neighborhood of x;
S2.3, replacing the iteration in S2.2 with a momentum iteration; the multi-step iteration method based on momentum gradient descent is expressed as

g_{t+1} = u · g_t + ∇_x J(x'_t, y) / ‖∇_x J(x'_t, y)‖_1,  x'_{t+1} = x'_t + α · sign(g_{t+1}),

where g_t is the accumulated gradient at iteration t and u is the momentum decay factor.
3. The optimization method for enhanced training of an artificial intelligence model according to claim 2, wherein the transformation algorithm function in S2 transforms each iterated adversarial sample through a random transformation function with transformation probability p, where the probability coefficients of the random transformation satisfy a normalization constraint and G(x), a function of the pre-training model, is the parameter from which the transformation probability is calculated.
4. The optimization method for enhanced training of an artificial intelligence model according to claim 3, wherein the perturbation r in S2.1 satisfies a bi-objective function defined over the perturbed sample x + r0, where D(r0) is a distance function measuring the degree of perturbation, F is the trained classification model, f1 is the confidence of the perturbed sample in its original category, f2 = D(r0) is the perturbation distance of the sample, and Z(r0) denotes the distribution of all perturbed samples along these two objectives over the whole perturbation range.
5. The optimization method for enhanced training of an artificial intelligence model according to claim 4, wherein Z(r0) is optimized as a multi-objective optimization problem with boundary constraints, searching for the minimum perturbation, i.e. minimizing (f1, f2) subject to the bounds on r0.
6. The optimization method for enhanced training of an artificial intelligence model according to claim 5, wherein the difference function of moving target defense in S4 is a score DL computed over the proxy model m chosen by the attacker from a proxy-model set M and the system member model n selected by the defender through the game from a member-model set N, based on the result of attacking n with the adversarial samples generated on m; the higher the DL value, the better the moving-target defense effect.
CN202111001506.0A 2021-08-30 2021-08-30 Optimization method for enhancing training artificial intelligence model Active CN113449865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111001506.0A CN113449865B (en) 2021-08-30 2021-08-30 Optimization method for enhancing training artificial intelligence model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111001506.0A CN113449865B (en) 2021-08-30 2021-08-30 Optimization method for enhancing training artificial intelligence model

Publications (2)

Publication Number Publication Date
CN113449865A true CN113449865A (en) 2021-09-28
CN113449865B CN113449865B (en) 2021-12-07

Family

ID=77818893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111001506.0A Active CN113449865B (en) 2021-08-30 2021-08-30 Optimization method for enhancing training artificial intelligence model

Country Status (1)

Country Link
CN (1) CN113449865B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114757349A (en) * 2022-04-01 2022-07-15 中国工程物理研究院计算机应用研究所 Model poisoning method and system based on conditional confrontation sample

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766991A (en) * 2019-01-14 2019-05-17 电子科技大学 A kind of artificial intelligence optimization's system and method using antagonistic training
CN110163235A (en) * 2018-10-11 2019-08-23 腾讯科技(深圳)有限公司 Training, image enhancing method, device and the storage medium of image enhancement model
CN112115469A (en) * 2020-09-15 2020-12-22 浙江科技学院 Edge intelligent moving target defense method based on Bayes-Stackelberg game
CN113066002A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Generation method of countermeasure sample, training method of neural network, training device of neural network and equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163235A (en) * 2018-10-11 2019-08-23 腾讯科技(深圳)有限公司 Training, image enhancing method, device and the storage medium of image enhancement model
CN109766991A (en) * 2019-01-14 2019-05-17 电子科技大学 A kind of artificial intelligence optimization's system and method using antagonistic training
CN112115469A (en) * 2020-09-15 2020-12-22 浙江科技学院 Edge intelligent moving target defense method based on Bayes-Stackelberg game
CN113066002A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Generation method of countermeasure sample, training method of neural network, training device of neural network and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王滨 et al.: "Moving Target Defense for Adversarial Sample Attacks", Chinese Journal of Network and Information Security (《网络与信息安全学报》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114757349A (en) * 2022-04-01 2022-07-15 中国工程物理研究院计算机应用研究所 Model poisoning method and system based on conditional confrontation sample
CN114757349B (en) * 2022-04-01 2023-09-19 中国工程物理研究院计算机应用研究所 Model poisoning method and system based on condition countermeasure sample

Also Published As

Publication number Publication date
CN113449865B (en) 2021-12-07

Similar Documents

Publication Publication Date Title
Zhong et al. Towards transferable adversarial attack against deep face recognition
CN108446765A (en) The multi-model composite defense method of sexual assault is fought towards deep learning
CN109766991A (en) A kind of artificial intelligence optimization's system and method using antagonistic training
Kaviani et al. Defense against neural trojan attacks: A survey
Zhao et al. Intrusion detection based on clustering genetic algorithm
CN111047054A (en) Two-stage countermeasure knowledge migration-based countermeasure sample defense method
CN113449865B (en) Optimization method for enhancing training artificial intelligence model
CN111178504B (en) Information processing method and system of robust compression model based on deep neural network
CN113901448A (en) Intrusion detection method based on convolutional neural network and lightweight gradient elevator
CN116665214A (en) Large character set verification code attack defense method based on countermeasure sample
Zhang et al. Network intrusion detection based on active semi-supervised learning
Wang et al. Out-of-distributed semantic pruning for robust semi-supervised learning
Kong et al. Evolutionary multi-label adversarial examples: An effective black-box attack
CN111881989B (en) Hyperspectral image classification method
CN112487933A (en) Radar waveform identification method and system based on automatic deep learning
Xie et al. Improving the transferability of adversarial examples with new iteration framework and input dropout
Zhang et al. A Intrusion Detection Model Based on Convolutional Neural Network and Feature Selection
Du et al. Combating word-level adversarial text with robust adversarial training
CN114584337A (en) Voice attack counterfeiting method based on genetic algorithm
CN113902974A (en) Air combat threat target identification method based on convolutional neural network
CN113641990A (en) Intrusion detection method based on multi-innovation extended Kalman filtering
Li et al. Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout
CN114898168B (en) Black box countermeasure sample generation method based on conditional standard flow model
Zhang et al. An efficient general black-box adversarial attack approach based on multi-objective optimization for high dimensional images
Chengjiang et al. The vectorization research of military map base on micro variation chaos genetic algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant