CN110837637A - Black box attack method for brain-computer interface system

Black box attack method for brain-computer interface system

Info

Publication number: CN110837637A
Application number: CN201910982682.3A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN110837637B
Inventors: 伍冬睿, 蒋雪
Assignee: Huazhong University of Science and Technology
Application filed by: Huazhong University of Science and Technology
Priority/filing date: 2019-10-16
Publication date: 2020-02-25 (CN110837637A); grant published 2022-02-15 (CN110837637B)
Legal status: Granted; Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/561 — Virus type analysis (under G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity; G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms; G06F21/55 Detecting local intrusion or implementing counter-measures; G06F21/56 Computer malware detection or handling, e.g. anti-virus arrangements)
    • G06F21/562 — Static detection (under G06F21/56)
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting (under G06F18/00 Pattern recognition; G06F18/20 Analysing; G06F18/21 Design or setup of recognition systems or techniques)
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (under G06F18/24 Classification techniques)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Virology (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses a black-box attack method for brain-computer interface systems. The method first generates EEG samples with greater information content and diversity through query-synthesis active learning and uses them as training samples for a surrogate model; it then trains the surrogate model on these samples so that it better approximates the target model; finally, it generates adversarial samples on the trained surrogate model and uses them to mount a black-box attack on an EEG-based brain-computer interface system, causing the adversarial samples to be misclassified by the target model. Compared with the traditional black-box attack method based on the Jacobian matrix, the method achieves a better attack effect and reaches the same or better attack performance with fewer queries. The generated adversarial samples contain little noise, are almost indistinguishable from the original EEG signals in both the time and frequency domains, and are therefore hard to detect, greatly improving the efficiency of black-box attacks on brain-computer interface systems.

Description

Black box attack method for brain-computer interface system
Technical Field
The invention belongs to the field of EEG (electroencephalogram)-based brain-computer interface system security, and particularly relates to a black-box attack method for brain-computer interface systems.
Background
The brain-computer interface (BCI) system is a real-time communication system connecting the brain to external electronic devices: it directly collects the physiological electrical signals generated by the human brain and converts them into commands that can control external electronic devices, thereby substituting for natural limbs or speech organs to communicate with the outside world and control the external environment. EEG is by far the most widely used input signal for brain-computer interface systems because of its ease of use, relatively low cost, and minimal risk to the user. The machine learning module is the most important component of a brain-computer interface system, and with the development of deep learning in recent years, several deep learning models for EEG-based brain-computer interface systems have been proposed.
However, deep learning currently faces a serious security issue: deep learning models are vulnerable to adversarial samples. An adversarial sample is a data sample created by adding a small, usually imperceptible, perturbation to a legitimate input, forcing the learner to misclassify the resulting input while a human observer would still classify it correctly. In such an attack, deliberately designed small perturbations are added to normal input samples to fool the deep learning model and cause a significant performance degradation. To better defend against such attacks, studying black-box attack methods for brain-computer interface systems is of great significance.
Many methods for generating adversarial samples have been proposed; they fall mainly into two categories: white-box attacks and black-box attacks. In a white-box attack, the attacker can obtain all information about the target model, including its structure, parameters, and training samples; in a black-box attack, by contrast, the attacker cannot obtain the internal information of the target model and can only observe its inputs and outputs. Black-box attacks are more practical and more challenging than white-box attacks, since most EEG-based BCI systems do not allow white-box access for security reasons. Papernot et al. proposed a transferability-based black-box attack method that trains a surrogate model and crafts adversarial samples against it, expecting the generated adversarial samples to also successfully attack the unknown target model. Using a similar transferability-based idea, Zhang et al. successfully implemented black-box attacks on EEG-based brain-computer interface systems. Although these methods achieve good attack effects, they rely on Jacobian-based data augmentation to generate training data for the surrogate model, and this has a key limitation: training a good surrogate model requires a large number of queries to collect sufficient information, so the attack efficiency is low.
In summary, providing a black-box attack method for brain-computer interface systems with high attack efficiency is an urgent problem to be solved.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a black-box attack method for brain-computer interface systems, so as to solve the problem of low attack efficiency in the prior art caused by the large number of queries needed to collect sufficient information when training a surrogate model.
In order to achieve the above object, in a first aspect, the present invention provides a black-box attack method for a brain-computer interface system, comprising the following steps:
S1, querying a target model in the brain-computer interface system for the labels of the samples in a pre-collected EEG sample set to obtain a surrogate model training set, and training a classification model on this training set to obtain a surrogate model;
S2, generating samples on both sides of each decision boundary of the surrogate model, querying the target model for their labels, and adding them to the surrogate model training samples; training the surrogate model on these samples, and repeating step S2 until the number of iterations reaches the iteration upper limit, obtaining the trained surrogate model;
S3, constructing adversarial samples based on the obtained surrogate model;
S4, using the adversarial samples to mount a black-box attack on the EEG-based brain-computer interface system.
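For illustration, the overall flow of steps S1-S4 can be sketched in Python as follows. This is a minimal sketch under stated assumptions: every helper (the black-box labeling function query_target, the surrogate trainer fit_surrogate, the boundary-sample generator of step S2, and the adversarial-sample constructor of step S3) is passed in as a parameter and is hypothetical, not part of the disclosed text.

```python
def black_box_attack(query_target, fit_surrogate, gen_boundary_samples,
                     craft_adversarial, eeg_samples, n_iter_max):
    # S1: label the pre-collected EEG set via the black-box target model,
    # then train an initial surrogate classifier on the resulting set.
    X = list(eeg_samples)
    y = [query_target(x) for x in X]
    surrogate = fit_surrogate(X, y)
    # S2: query-synthesis active learning -- generate samples on both sides
    # of the surrogate's decision boundaries, label them via the target
    # model, retrain, and repeat up to the iteration upper limit.
    for _ in range(n_iter_max):
        X_new = gen_boundary_samples(surrogate, X, y)
        y_new = [query_target(x) for x in X_new]
        X, y = X + X_new, y + y_new
        surrogate = fit_surrogate(X, y)
    # S3: construct adversarial samples on the trained surrogate.
    adversarial = [craft_adversarial(surrogate, x) for x in X]
    # S4: these samples are what is fed to the EEG-based BCI target model.
    return adversarial
```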
Further preferably, step S2 comprises the following steps:
S21, initializing the generated sample set as an empty set;
S22, sequentially selecting two classes of EEG sample sets from the current surrogate model training set, and initializing the intermediate generated sample set as an empty set;
S23, randomly selecting two EEG samples of different classes from the two currently selected EEG sample sets, iteratively generating by bisection, in the feature space, two samples of different classes located on the two sides of the surrogate model's decision boundary between the two currently selected classes, computing a sample on the perpendicular bisector of the two samples, and adding it to the intermediate generated sample set; repeating step S23 until the number of samples in the intermediate generated sample set reaches a preset number;
S24, adding the intermediate generated sample set to the generated sample set;
S25, repeating steps S22-S24 until every pairwise combination of EEG sample classes in the surrogate model training set has been covered;
S26, querying the target model for the labels of the samples in the generated sample set, adding the generated sample set and its labels to the current surrogate model training set, and updating the surrogate model training set;
S27, retraining the surrogate model on the surrogate model training set;
S28, repeating steps S21-S27 until the number of iterations reaches the iteration upper limit, obtaining the trained surrogate model.
Further preferably, the number of EEG samples corresponding to each type of label in the pre-collected EEG sample set is greater than or equal to 1.
Further preferably, the classification model used as the surrogate model is a deep learning model.
Further preferably, the adversarial sample constructed in step S3 is:

$x_i^{*} = x_i + \epsilon \cdot \mathrm{sign}\left( \nabla_{x_i} J(\theta, x_i, y_i') \right)$

where $x_i$ is a sample in the pre-collected EEG sample set, $\epsilon$ is the perturbation upper bound, $J$ is the loss function of the trained surrogate model, $\theta$ denotes the surrogate model parameters, and $y_i'$ is the label of $x_i$ obtained by querying the surrogate model.
Further preferably, the black-box attack method for the brain-computer interface system is applied in the field of EEG-based brain-computer interface system security.
In a second aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the attack method described in the first aspect.
Through the above technical scheme, compared with the prior art, the invention achieves the following beneficial effects:
1. The invention provides a black-box attack method for brain-computer interface systems. It first uses query-synthesis active learning to generate samples near the decision boundaries of the surrogate model and adds them to the surrogate model training set, so that the EEG samples in the training set carry more information and greater diversity, which greatly reduces the number of queries during training. It then trains the surrogate model on these samples so that it better approximates the target model. Finally, it generates adversarial samples on the trained surrogate model and uses them to mount a black-box attack on the EEG-based brain-computer interface system, causing the samples to be misclassified by the target model. Compared with the traditional black-box attack method based on the Jacobian matrix, the method achieves a better attack effect, reaches the same or better attack performance with fewer queries, and generates adversarial samples with so little noise that they are almost indistinguishable from the original EEG signals in both the time and frequency domains and thus hard to detect, greatly improving the efficiency of black-box attacks on brain-computer interface systems.
2. The black-box attack method of the invention integrates the idea of active learning into adversarial attacks: it selects or generates the most useful samples for labeling, so that a learner can be trained from the fewest labeled training samples. Since the acquisition of EEG signals is very time-consuming, adopting active learning greatly accelerates the training of the surrogate model and thereby yields better attack performance.
Drawings
FIG. 1 illustrates the black-box attack method for a brain-computer interface system provided by the present invention;
FIG. 2 is a schematic diagram of the black-box attack in an EEG brain-computer interface system provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to achieve the above object, the present invention provides a black-box attack method for a brain-computer interface system, as shown in FIG. 1, comprising the following steps:
S1, querying the target model f in the brain-computer interface system for the labels of the samples in a pre-collected EEG sample set S_0 to obtain a surrogate model training set D, and training a classification model on this training set to obtain a surrogate model f';
Specifically, the classification model used as the surrogate model in this embodiment is a deep learning model, and the pre-collected EEG sample set S_0 in this embodiment contains 400 samples, where the number of EEG samples corresponding to each class label is greater than or equal to 1.
S2, generating samples on both sides of each decision boundary of the surrogate model, querying the target model for their labels, and adding them to the surrogate model training samples; training the surrogate model on these samples, and repeating step S2 until the number of iterations reaches the iteration upper limit, obtaining the trained surrogate model;
Specifically, step S2 comprises the following steps:
S21, initializing the generated sample set ΔS as an empty set;
S22, sequentially selecting two classes of EEG sample sets from the current surrogate model training set, and initializing the intermediate generated sample set ΔS1 as an empty set;
S23, randomly selecting two EEG samples of different classes {x_+, x_-} from the two currently selected EEG sample sets, iteratively generating by bisection, in the feature space, two samples of different classes located on the two sides of the surrogate model's decision boundary between the two currently selected classes, computing a sample on the perpendicular bisector of the two samples, and adding it to the intermediate generated sample set ΔS1; repeating step S23 until the number of samples in ΔS1 reaches the preset number;
Specifically, a binary search is performed in the feature space: a sample x_b = (x_+ + x_-)/2 is computed, and the surrogate model f' is queried for the label of the generated sample x_b; if the label is positive, x_+ is replaced by x_b, and if the label is negative, x_- is replaced by x_b. This process is repeated until the maximum number of bisection steps m is reached, yielding two samples of different classes {x_+, x_-} that lie close to the decision boundary; in this embodiment, m = 10. A sample on the perpendicular bisector of the two samples is then computed, which makes the distribution of generated samples more dispersed, and binary search is performed again on the re-paired {x_+, x_-} so that the generated samples carry more information. Specifically, in this embodiment the sample on the perpendicular bisector is computed with the Gram-Schmidt method: a vector x_p orthogonal to (x_+ - x_-) is obtained by the Gram-Schmidt algorithm, m further bisection steps are performed to obtain a new pair of samples of different classes {x_+, x_-}, and the sample on the perpendicular bisector is computed as x_s = x_p + (x_+ + x_-)/2; x_s is then added to ΔS1. This process is repeated until the number of samples in ΔS1 reaches the preset number n_max; in this embodiment, n_max = 200.
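As an illustration of this procedure, the following Python/NumPy sketch generates one sample on the perpendicular bisector. It assumes EEG epochs flattened to 1-D feature vectors and a callable is_positive that queries the surrogate f' for a binary label; the magnitude given to x_p (the step parameter) and the exact interleaving of the second round of bisection are assumptions where the text above is ambiguous.

```python
import numpy as np

def bisect(is_positive, x_pos, x_neg, m):
    # m bisection steps: the returned pair straddles the surrogate's
    # decision boundary ever more tightly.
    for _ in range(m):
        x_b = 0.5 * (x_pos + x_neg)
        if is_positive(x_b):          # query the surrogate f' for x_b's label
            x_pos = x_b
        else:
            x_neg = x_b
    return x_pos, x_neg

def perpendicular_bisector_sample(is_positive, x_pos, x_neg,
                                  m=10, step=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x_pos, x_neg = bisect(is_positive, x_pos, x_neg, m)
    d = x_pos - x_neg
    # Gram-Schmidt step: a random direction made orthogonal to (x_pos - x_neg)
    r = rng.standard_normal(d.shape)
    x_p = r - (np.dot(r, d) / np.dot(d, d)) * d
    x_p = step * x_p / np.linalg.norm(x_p)   # magnitude of x_p: an assumption
    # m further bisection steps, then the sample on the perpendicular bisector
    x_pos, x_neg = bisect(is_positive, x_pos, x_neg, m)
    return x_p + 0.5 * (x_pos + x_neg)
```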
S24, adding the intermediate generated sample set ΔS1 to the generated sample set ΔS;
S25, repeating steps S22-S24 until every pairwise combination of EEG sample classes in the surrogate model training set has been covered;
S26, querying the target model for the labels of the samples in the generated sample set ΔS, adding ΔS and its labels to the surrogate model training set D, and updating D;
S27, retraining the surrogate model on the surrogate model training set D;
S28, repeating steps S21-S27 until the number of iterations reaches the upper limit N_max, obtaining the trained surrogate model. In this embodiment, the preset number of iterations N_max is 2, so the number of samples finally generated for each pair of EEG sample classes is N_max × n_max, which is 400 in this embodiment.
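Putting steps S21-S28 together, the outer active-learning loop might be sketched as follows; query_target, fit_surrogate and gen_pair_samples (the per-pair routine sketched above) are assumed callables, and X, y are a NumPy feature matrix and label vector.

```python
import itertools
import numpy as np

def train_surrogate_actively(query_target, fit_surrogate, gen_pair_samples,
                             X, y, n_max=200, N_max=2):
    surrogate = fit_surrogate(X, y)
    for _ in range(N_max):                         # S28: outer iterations
        delta_S = []                               # S21: generated sample set
        for c1, c2 in itertools.combinations(np.unique(y), 2):  # S22, S25
            # S23: n_max bisection/perpendicular-bisector samples per pair
            delta_S1 = gen_pair_samples(surrogate, X[y == c1], X[y == c2], n_max)
            delta_S.append(delta_S1)               # S24
        delta_S = np.concatenate(delta_S)
        y_new = np.array([query_target(x) for x in delta_S])  # S26
        X = np.concatenate([X, delta_S])           # update training set D
        y = np.concatenate([y, y_new])
        surrogate = fit_surrogate(X, y)            # S27: retrain
    return surrogate
```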
S3, constructing an antagonistic sample based on the obtained surrogate model;
Specifically, the adversarial samples are constructed as follows:

$x_i^{*} = x_i + \epsilon \cdot \mathrm{sign}\left( \nabla_{x_i} J(\theta, x_i, y_i') \right)$

where $x_i$ is a sample in the pre-collected EEG sample set, $\epsilon$ is the perturbation upper bound, $J$ is the loss function of the trained surrogate model, $\theta$ denotes the surrogate model parameters, and $y_i'$ is the label of $x_i$ obtained by querying the surrogate model.
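A minimal PyTorch sketch of this construction (the fast gradient sign method of Goodfellow et al., cited in the non-patent literature below) is given here, assuming the surrogate is a differentiable torch.nn.Module classifier; the use of cross-entropy as the loss J is an assumption of the sketch.

```python
import torch
import torch.nn.functional as F

def construct_adversarial(surrogate, x, eps):
    # x*_i = x_i + eps * sign(grad_{x_i} J(theta, x_i, y'_i))
    x = x.clone().detach().requires_grad_(True)
    logits = surrogate(x)
    y_prime = logits.argmax(dim=1).detach()   # y'_i: query the surrogate itself
    loss = F.cross_entropy(logits, y_prime)   # J(theta, x, y'); an assumption
    loss.backward()
    return (x + eps * x.grad.sign()).detach() # perturbation bounded by eps
```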
And S4, carrying out black box attack on the brain-computer interface system of the EEG by using the confrontation sample.
Specifically, the brain-computer interface system targeted by the invention consists of a signal acquisition module, a signal preprocessing module, the target model, and a controller; FIG. 2 is a schematic diagram of the black-box attack on an EEG brain-computer interface system provided by the invention. The attack module is placed between the signal preprocessing module and the target model; adversarial samples are generated on the trained surrogate model by the attack module, and owing to the transferability of adversarial samples, the generated samples can successfully attack the target model so that it can no longer make correct predictions, thereby affecting the whole brain-computer interface system and preventing it from working normally.
Under the same number of queries and the same surrogate model initial training set, comparison experiments were carried out on the ERN data set using the proposed attack method and the Jacobian-matrix-based attack method, respectively, where the number of queries was 400 and the size of the surrogate model initial training set S_0 was 400. The experimental results are shown in Table 1.
TABLE 1
[Table 1 is rendered as an image in the original document. It reports, for each pair of target model and surrogate model, the classification accuracy of the target model on the test set before the attack and after the proposed attack and the Jacobian-matrix-based attack, respectively.]
As can be seen from Table 1, when the target model is EEGNet and the surrogate model is DeepCNN, the accuracy of the target model on the test set before the attack is 76.26%; after attacks using the method proposed by the invention and the Jacobian-matrix-based attack method, the accuracy of the target model on the test set drops to 36.55% and 47.06%, respectively. For other target models attacked with different surrogate models, the attack effect of the invention is likewise clearly superior to that of the Jacobian-matrix-based attack method. By generating samples near each decision boundary of the surrogate model for query-synthesis active learning, the proposed method greatly reduces the number of queries during training, achieving better attack performance and higher attack efficiency with fewer queries.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A black-box attack method for a brain-computer interface system, characterized by comprising the following steps:
S1, querying a target model in the brain-computer interface system for the labels of the samples in a pre-collected EEG sample set to obtain a surrogate model training set, and training a classification model on this training set to obtain a surrogate model;
S2, generating samples on both sides of each decision boundary of the surrogate model, querying the target model for their labels, and adding them to the surrogate model training samples; training the surrogate model on these samples, and repeating step S2 until the number of iterations reaches the iteration upper limit, obtaining the trained surrogate model;
S3, constructing adversarial samples based on the obtained surrogate model;
S4, using the adversarial samples to mount a black-box attack on the EEG-based brain-computer interface system.
2. The black-box attack method for a brain-computer interface system according to claim 1, wherein step S2 comprises the following steps:
S21, initializing the generated sample set as an empty set;
S22, sequentially selecting two classes of EEG sample sets from the current surrogate model training set, and initializing the intermediate generated sample set as an empty set;
S23, randomly selecting two EEG samples of different classes from the two currently selected EEG sample sets, iteratively generating by bisection, in the feature space, two samples of different classes located on the two sides of the surrogate model's decision boundary between the two currently selected classes, computing a sample on the perpendicular bisector of the two samples, and adding it to the intermediate generated sample set; repeating step S23 until the number of samples in the intermediate generated sample set reaches a preset number;
S24, adding the intermediate generated sample set to the generated sample set;
S25, repeating steps S22-S24 until every pairwise combination of EEG sample classes in the surrogate model training set has been covered;
S26, querying the target model for the labels of the samples in the generated sample set, adding the generated sample set and its labels to the current surrogate model training set, and updating the surrogate model training set;
S27, retraining the surrogate model on the surrogate model training set;
S28, repeating steps S21-S27 until the number of iterations reaches the iteration upper limit, obtaining the trained surrogate model.
3. The method of claim 1, wherein the number of EEG samples corresponding to each class of label in the pre-collected EEG sample set is greater than or equal to 1.
4. The brain-computer interface system black box attack method according to claim 1, wherein the classification model is a deep learning model.
5. The black-box attack method for a brain-computer interface system according to claim 1, wherein the constructed adversarial sample is:

$x_i^{*} = x_i + \epsilon \cdot \mathrm{sign}\left( \nabla_{x_i} J(\theta, x_i, y_i') \right)$

where $x_i$ is a sample in the pre-collected EEG sample set, $\epsilon$ is the perturbation upper bound, $J$ is the loss function of the trained surrogate model, $\theta$ denotes the surrogate model parameters, and $y_i'$ is the label of $x_i$ obtained by querying the surrogate model.
6. The black-box attack method for a brain-computer interface system according to any one of claims 1 to 5, applied in the field of EEG-based brain-computer interface system security.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the attack method according to any one of claims 1 to 6.
CN201910982682.3A 2019-10-16 2019-10-16 Black box attack method for brain-computer interface system Active CN110837637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910982682.3A CN110837637B (en) 2019-10-16 2019-10-16 Black box attack method for brain-computer interface system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910982682.3A CN110837637B (en) 2019-10-16 2019-10-16 Black box attack method for brain-computer interface system

Publications (2)

Publication Number Publication Date
CN110837637A 2020-02-25
CN110837637B 2022-02-15

Family

ID=69575402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910982682.3A Active CN110837637B (en) 2019-10-16 2019-10-16 Black box attack method for brain-computer interface system

Country Status (1)

Country Link
CN (1) CN110837637B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520268A (en) * 2018-03-09 2018-09-11 浙江工业大学 The black box antagonism attack defense method evolved based on samples selection and model
CN109376556A (en) * 2018-12-17 2019-02-22 华中科技大学 Attack method for EEG brain-computer interface based on convolutional neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IAN J. GOODFELLOW ET AL.: "Explaining and Harnessing Adversarial Examples", arXiv:1412.6572v3 *
XIAO ZHANG AND DONGRUI WU: "On the Vulnerability of CNN Classifiers in EEG-Based BCIs", IEEE Transactions on Neural Systems and Rehabilitation Engineering *
伍冬睿: "Active Learning for Black-Box Adversarial Attacks in Brain-Computer Interfaces" (in Chinese: 脑机接口黑盒对抗攻击中的主动学习), ScienceNet Blog (科学网博客) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291828A (en) * 2020-03-03 2020-06-16 广州大学 HRRP (high resolution ratio) counterattack method for sample black box based on deep learning
CN111291828B (en) * 2020-03-03 2023-10-27 广州大学 HRRP (high-resolution redundancy protocol) anti-sample black box attack method based on deep learning
WO2021189364A1 (en) * 2020-03-26 2021-09-30 深圳先进技术研究院 Method and device for generating adversarial image, equipment, and readable storage medium
AU2020437435B2 (en) * 2020-03-26 2023-07-20 Shenzhen Institutes Of Advanced Technology Adversarial image generation method, apparatus, device, and readable storage medium
GB2607647A (en) * 2020-03-26 2022-12-14 Shenzhen Inst Adv Tech Method and device for generating adversarial image, equipment, and readable storage medium
CN112085055B (en) * 2020-08-05 2022-12-13 清华大学 Black box attack method based on transfer model Jacobian array feature vector disturbance
CN112085055A (en) * 2020-08-05 2020-12-15 清华大学 Black box attack method based on migration model Jacobian array feature vector disturbance
CN112256133A (en) * 2020-10-28 2021-01-22 华中科技大学 Pollution attack method of brain-computer interface system based on EEG
CN112989361A (en) * 2021-04-14 2021-06-18 华南理工大学 Model security detection method based on generation countermeasure network
CN112989361B (en) * 2021-04-14 2023-10-20 华南理工大学 Model security detection method based on generation countermeasure network
CN113407939B (en) * 2021-06-17 2022-08-05 电子科技大学 Substitution model automatic selection method facing black box attack, storage medium and terminal
CN113407939A (en) * 2021-06-17 2021-09-17 电子科技大学 Substitution model automatic selection method facing black box attack, storage medium and terminal
CN113642029A (en) * 2021-10-12 2021-11-12 华中科技大学 Method and system for measuring correlation between data sample and model decision boundary
US11995155B2 (en) 2021-12-06 2024-05-28 Shenzhen Institutes Of Advanced Technology Adversarial image generation method, computer device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN110837637B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
CN110837637B (en) Black box attack method for brain-computer interface system
CN109376556B (en) Attack method for EEG brain-computer interface based on convolutional neural network
CN109754085A (en) Deep reinforcement learning-based large-scale network collapse method, storage device and storage medium
Gragnaniello et al. Perceptual quality-preserving black-box attack against deep learning image classifiers
CN113255816B (en) Directional attack countermeasure patch generation method and device
CN112884204B (en) Network security risk event prediction method and device
CN114387647B (en) Anti-disturbance generation method, device and storage medium
CN114925850B (en) Deep reinforcement learning countermeasure defense method for disturbance rewards
CN109444831B (en) Radar interference decision method based on transfer learning
CN111047054A (en) Two-stage countermeasure knowledge migration-based countermeasure sample defense method
CN112381142A (en) Method and system for generating explainability confrontation sample based on important features
CN112183671A (en) Target attack counterattack sample generation method for deep learning model
US20080147576A1 (en) Data processing apparatus, data processing method data processing program and computer readable medium
CN116992299B (en) Training method, detecting method and device of blockchain transaction anomaly detection model
CN114139604A (en) Online learning-based electric power industrial control attack monitoring method and device
Song et al. On credibility of adversarial examples against learning-based grid voltage stability assessment
CN113034332A (en) Invisible watermark image and backdoor attack model construction and classification method and system
CN111881027A (en) Deep learning model optimization method based on data defense
CN115063652A (en) Black box attack method based on meta-learning, terminal equipment and storage medium
Zhang et al. Research on invasive weed optimization based on the cultural framework
CN116051924A (en) Divide-and-conquer defense method for image countermeasure sample
CN104517141B (en) Radio frequency identification network topology method based on load balance Yu particle cluster algorithm
CN113901456A (en) User behavior security prediction method, device, equipment and medium
CN111882037A (en) Deep learning model optimization method based on network addition/modification
Kurasova et al. Integration of the self-organizing map and neural gas with multidimensional scaling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant