CN112488225A - Quantum fuzzy machine learning adversarial defense model method - Google Patents

Quantum fuzzy machine learning adversarial defense model method

Publication number
CN112488225A
Authority
CN
China
Prior art keywords
quantum
quantum fuzzy
sample
fuzzy
machine learning
Prior art date
Legal status
Pending
Application number
CN202011433028.6A
Other languages
Chinese (zh)
Inventor
张仕斌
黄曦
李同
侯敏
昌燕
闫丽丽
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Information Technology
Priority to CN202011433028.6A
Publication of CN112488225A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N10/00 Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/02 Computing arrangements based on specific mathematical models using fuzzy logic

Abstract

A quantum fuzzy machine learning adversarial defense model method comprises the following steps: S1, constructing quantum fuzzy data samples of legitimate users; S2, simulating a malicious attacker to construct an attack strategy: adding a constructed perturbation to a quantum fuzzy data sample of a legitimate user to form a quantum fuzzy adversarial sample of the malicious attacker; S3, submitting both the quantum fuzzy data samples of legitimate users and the quantum fuzzy adversarial samples of the malicious attacker to a quantum fuzzy machine learning system for training and learning, with the quantum fuzzy machine learning system making correct decisions. The quantum fuzzy machine learning system comprises an adversarial defense module, which defends against the adversarial samples of malicious attackers so that the system makes correct decisions. The model method can effectively resist attacks by malicious attackers, improve the security and robustness of the quantum fuzzy machine learning system, and ensure the safe and reliable operation of the quantum fuzzy machine learning algorithm.

Description

Quantum fuzzy machine learning adversarial defense model method
Technical Field
The invention relates to the fields of quantum machine learning, fuzzy set theory, and network adversarial techniques, and in particular to a quantum fuzzy machine learning adversarial defense model.
Background
In recent years, the field of machine learning has produced many research results and successful applications across numerous domains, but machine learning also faces many security risks. For example, machine learning systems are easily fooled by adversarial samples, resulting in erroneous classifications; users of online machine learning classification services have to disclose their data to the server, which can lead to privacy leaks; worse yet, the widespread use of machine learning is exacerbating these security risks. Currently, many researchers are exploring potential attacks on deep learning and corresponding defense techniques. Research on machine learning system security grew by 61.5% in 2018 and by a further 56.2% in 2019 over the preceding year, indicating that security problems in machine learning and artificial intelligence are attracting increasing attention.
In classical machine learning, adversarial attacks and defenses have achieved some success. Szegedy et al. first proposed the concept of adversarial samples and the L-BFGS attack method, which adds carefully constructed perturbations to samples to be predicted or classified; when these are input to a trained model, the model gives erroneous outputs with high confidence. The authors also proposed that adversarial training with adversarial samples can improve the robustness of deep neural networks. Goodfellow et al. argued that the linear nature of deep neural networks in high-dimensional space is the root cause of adversarial vulnerability, and used FGSM to generate adversarial samples to attack classification models. Papernot et al. provided a threat model for the security and privacy of machine learning. Carlini et al. proposed an audio adversarial attack on the DeepSpeech speech-to-text model that can convert any given waveform into any desired target phrase by adding small perturbations. Pang et al. proposed a reverse cross-entropy adversarial defense method that can effectively resist adversarial attacks. Shettiran et al. proposed randomizing the input during prediction to defend against attacks and mitigate their effects. Samangouei et al. used GANs to protect classifiers against adversarial sample attacks.
In quantum machine learning, research on the security and robustness of quantum classifiers has proceeded along two lines: first, introducing classical adversarial attack and defense methods into quantum classifiers, and second, exploiting the properties of quantum mechanics to explore quantum adversarial attacks and defenses. In 2018, Wiebe et al. protected quantum classifiers with quantum information, making them more secure and private. In 2019, Liu et al. showed that quantum classifiers are vulnerable to adversarial perturbations, demonstrated that the amount of perturbation required to cause misclassification is inversely proportional to the dimension of the Hilbert space, and proposed that the defense resources required against quantum adversarial attacks grow at least polynomially with the Hilbert-space dimension of the system. In 2019, Jiang et al. found that a typical phase-transition classifier based on a deep neural network is extremely vulnerable to adversarial samples, revealing the vulnerability of machine learning techniques applied to condensed-matter physics. In 2019, Casares et al. proposed a quantum active learning algorithm to defend against adversarial sample attacks with exponential acceleration. In 2019, Lu et al. used white-box and black-box attacks to construct adversarial samples against a quantum classifier, disclosing the vulnerability of QML systems to adversarial perturbations, and used adversarial training to defend against such attacks, verifying that quantum adversarial training enhances the robustness of variational quantum classifiers. In 2020, Du et al. used quantum noise to protect quantum classifiers and successfully defended against adversarial sample attacks.
At present, some achievements have been made in the security analysis of machine learning, but the security of quantum fuzzy machine learning algorithms (e.g., potential defense techniques) has not yet been studied. Quantum fuzzy machine learning algorithms remain vulnerable and deficient, and both their security and robustness need improvement. To resist the attacks of malicious attackers, an adversarial defense model method for quantum fuzzy machine learning is urgently needed.
Disclosure of Invention
To solve the above problems, the present invention aims to provide a quantum fuzzy machine learning adversarial defense model method that can effectively resist attacks by malicious attackers, improve the security and robustness of quantum fuzzy machine learning systems, and ensure the safe and reliable operation of quantum fuzzy machine learning algorithms, while efficiently, securely, and accurately handling the complexity and uncertainty of big data and enriching the theories, techniques, and research methods of quantum computation and artificial intelligence.
In order to achieve the above purpose, the invention adopts the technical scheme that:
a method for learning a confrontation defense model by a quantum fuzzy machine is characterized by comprising the following steps:
s1, constructing quantum fuzzy data samples of legal users;
s2, simulating a malicious attacker to construct an attack strategy, and adding constructed disturbance to a quantum fuzzy data sample of a legal user to form a quantum fuzzy countermeasure sample of the malicious attacker;
s3, submitting the legal user quantum fuzzy data sample and the quantum fuzzy countercheck sample of the malicious attacker to a quantum fuzzy machine learning system for training and learning, and making a correct decision by the quantum fuzzy machine learning system;
the quantum fuzzy machine learning system comprises a confrontation defense module, and the confrontation defense module is a confrontation sample for defending a malicious attacker, so that the quantum fuzzy machine learning system makes a correct decision.
Further, the adversarial defense module may be a first type of defense strategy: a quantum fuzzy machine learning adversarial sample recognizer that blocks quantum fuzzy adversarial samples submitted by malicious attackers.
Further, the adversarial defense module may be a second type of defense strategy: a quantum fuzzy machine learning adversarial sample corrector that corrects the quantum fuzzy adversarial samples of malicious attackers.
Further, the adversarial defense module may be a third type of defense strategy: a highly robust quantum fuzzy machine learning system that can effectively resist the quantum fuzzy adversarial samples of malicious attackers.
Furthermore, the quantum fuzzy machine learning adversarial sample recognizer is trained and tested with a binary classification method; it recognizes quantum fuzzy data samples submitted by legitimate users, so that correct decisions are made, and it identifies quantum fuzzy adversarial samples submitted by malicious attackers, blocking their attacks.
Further, the binary classification method comprises:
splitting the quantum fuzzy data samples into a test set and a training set of quantum fuzzy data samples;
applying the fast gradient sign method (FGSM) to all samples in the test set and the training set to obtain a test set and a training set of quantum fuzzy adversarial samples;
performing recognition on the training sets of quantum fuzzy data samples and quantum fuzzy adversarial samples so as to fit the model;
and performing recognition on the test sets of quantum fuzzy data samples and quantum fuzzy adversarial samples to verify the accuracy of the model.
Further, the defense method of the quantum fuzzy machine learning adversarial sample corrector comprises:
inputting a quantum fuzzy mixed sample, which is the union of the quantum fuzzy data samples of legitimate users and the quantum fuzzy adversarial samples of malicious attackers;
and correcting the quantum fuzzy mixed sample by a reconstruction method, so that the quantum fuzzy machine learning system makes correct decisions.
Further, the reconstruction method uses a self-encoder and comprises:
training and modeling on the quantum fuzzy data samples to obtain a self-encoder;
mapping the quantum fuzzy adversarial samples back to the original quantum fuzzy data samples through the self-encoder;
finally, computing a reconstruction error for the quantum fuzzy mixed sample to judge whether the correction succeeded;
and delivering the successfully corrected samples to the quantum fuzzy machine learning system for processing, which makes correct decisions.
Further, the highly robust quantum fuzzy machine learning system defense method comprises:
merging the training set of legitimate quantum fuzzy data samples with the quantum fuzzy adversarial samples;
feeding the merged samples into the quantum fuzzy machine learning system for training to fit the model;
finding the optimal model parameters that minimize the attack risk of the quantum fuzzy adversarial samples;
and finally, the quantum fuzzy machine learning system makes correct decisions.
The invention has the following beneficial effects:
Aiming at improving the security and robustness of quantum fuzzy machine learning algorithms, the quantum fuzzy machine learning adversarial defense model is designed on the basis of an adversarial attack model by cross-fusing the quantum fuzzy information management mathematical model with machine learning adversarial defense models. The model ensures that the quantum fuzzy machine learning algorithm runs safely and reliably, can efficiently, securely, and accurately handle the complexity and uncertainty of big data, and enriches the theories, techniques, and research methods of quantum computation and artificial intelligence.
The three defense strategies of the invention address the vulnerability and deficiencies of quantum fuzzy machine learning algorithms. Drawing on adversarial defense research for traditional machine learning algorithms, they provide an adversarial defense model for quantum fuzzy machine learning, effectively restraining the adversarial attacks of malicious attackers and improving the security and robustness of the quantum fuzzy machine learning algorithm. In contrast to adversarial research on traditional machine learning algorithms, the invention aims to improve the security and robustness of the quantum fuzzy machine learning algorithm.
The first type of defense model accurately identifies quantum fuzzy adversarial samples by training and testing on a large amount of data, thereby blocking their attacks and achieving the defense purpose. The second type of defense model maps quantum fuzzy adversarial samples back to the original quantum fuzzy data samples, correcting them so that the quantum fuzzy machine learning system can make correct decisions. The third type of defense model introduces adversarial training into the quantum fuzzy machine learning system, improving its robustness; this defense model is widely applicable and highly robust.
Drawings
FIG. 1 is a schematic flow chart of the quantum fuzzy machine learning adversarial defense model method of the present invention;
FIG. 2 is a schematic flow chart of the first type of defense strategy of the adversarial defense model method of the invention;
FIG. 3 is a schematic flow chart of the second type of defense strategy of the adversarial defense model method of the invention;
FIG. 4 is a schematic flow chart of the third type of defense strategy of the adversarial defense model method of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
Example 1
A quantum fuzzy machine learning adversarial defense model method, whose flow chart is shown in figure 1, specifically comprises the following steps:
s1, constructing quantum fuzzy data samples of legal users;
firstly, a fuzzy set is introduced according to a quantum fuzzy mathematical management model, and a classical fuzzy data sample set is as follows: d ═ last<xi,yii(xi)>|xi∈X},yi∈{-1,+1},
Wherein, mui(xi) Is xiSubject to fuzzy set<xiA(xi)>|xiE.g. X, each Xi=(xi1,xi2,...,xim) The m eigenvectors are encoded into a quantum probability amplitude to form a quantum probability amplitude code, and the process can be expressed as:
Figure BDA0002827269030000051
Figure BDA0002827269030000052
representing a normalized vector;
secondly, preparing a quantum fuzzy data sample, wherein the quantum fuzzy data sample is expressed as follows:
Figure BDA0002827269030000053
wherein the content of the first and second substances,
Figure BDA0002827269030000054
xiis the ith data sample, yiThere are only two values (+1 or-1).
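The sample-construction step above can be sketched classically. The amplitude encoding below simply L2-normalizes a feature vector so that the squared amplitudes sum to 1; the Gaussian membership function is only an illustrative assumption, since the source does not fix a particular form for μ_i:

```python
import numpy as np

def amplitude_encode(x):
    """L2-normalize a classical feature vector so its entries can serve
    as quantum probability amplitudes (squared amplitudes sum to 1)."""
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm == 0.0:
        raise ValueError("cannot encode the zero vector")
    return x / norm

def gaussian_membership(x, center, width=1.0):
    """Illustrative membership function mu_i(x_i); a Gaussian centered
    at `center` is assumed here purely for demonstration."""
    d = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(center, dtype=float))
    return float(np.exp(-(d / width) ** 2))

# one quantum fuzzy data sample <|x_i>, y_i, mu_i(x_i)> for x_i = (3, 4)
amps = amplitude_encode([3.0, 4.0])                        # (0.6, 0.8)
mu = gaussian_membership([3.0, 4.0], center=[3.0, 4.0])    # membership 1.0
```

The normalization guarantees a valid quantum state regardless of the scale of the raw features.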
S2, simulating a malicious attacker to construct an attack strategy, adding a constructed perturbation to the quantum fuzzy data sample of a legitimate user to form the quantum fuzzy adversarial sample of the malicious attacker;
The attack strategy adopted by the malicious attacker is an iterative fast gradient sign method (FGSM).
To construct the quantum fuzzy adversarial sample, the parameter space is not optimized; instead, the model parameters θ are fixed and the input data space is optimized, searching within the range ε for the best adversarial perturbation ρ_adv which, when added to the quantum fuzzy data sample, maximizes the loss function:

ρ_adv = argmax_{‖ρ‖ ≤ ε} L(h(|x_i⟩ + ρ; θ), y_i),

where |x_i^adv⟩ is the constructed quantum fuzzy adversarial sample, ε limits the variation of the adversarial perturbation ρ, y_i is the correct classification label of sample x_i, and θ is the parameter held fixed during adversarial training. The attacker makes the quantum fuzzy adversarial sample |x_i^adv⟩ similar to the quantum fuzzy data sample |x_i⟩ while maximizing the loss function; the loss function measures the quality of the model's predictions, and the larger the loss, the better the constructed adversarial sample. Therefore, in this model only a small θ_adv is required: once a small unitary operator U(θ_adv) is found, the quantum fuzzy adversarial sample can be constructed as:

|x_i^adv⟩ = U(θ_adv)|x_i⟩.

Thus, in constructing the quantum fuzzy adversarial sample, the model parameters θ and the quantum fuzzy data sample |x_i⟩ are fixed, and only the parameter θ_adv is updated to find its optimal value θ*_adv. In this way, the quantum fuzzy adversarial sample of the malicious attacker is obtained.
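The iterative FGSM construction can be sketched in its classical form. The function below and the toy linear model with its gradient are hypothetical stand-ins; the quantum version would apply a small unitary U(θ_adv) instead of an additive step:

```python
import numpy as np

def fgsm_attack(x, y, grad_fn, eps=0.1, steps=3):
    """Iterative fast-gradient-sign construction of an adversarial
    sample: step in the sign direction of the loss gradient, clipping
    back into the eps-ball around the original input."""
    x0 = np.asarray(x, dtype=float)
    x_adv = x0.copy()
    step = eps / steps
    for _ in range(steps):
        g = grad_fn(x_adv, y)                       # dL/dx at current point
        x_adv = x_adv + step * np.sign(g)           # ascend the loss
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)  # stay within eps range
    return x_adv

# toy example: linear score h(x) = w.x with loss L = -y * w.x, so
# dL/dx = -y * w  (w is a hypothetical fixed model parameter)
w = np.array([1.0, -2.0])
grad = lambda x, y: -y * w
x_adv = fgsm_attack(np.array([0.5, 0.5]), y=+1, grad_fn=grad, eps=0.1)
```

Because each step moves by eps/steps in the sign direction and is clipped, the final perturbation never exceeds ε per coordinate, matching the ε-bounded search described above.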
S3, submitting both the quantum fuzzy data samples of legitimate users and the quantum fuzzy adversarial samples of the malicious attacker to the quantum fuzzy machine learning system for training and learning, with the system making correct decisions;
The quantum fuzzy machine learning system comprises an adversarial defense module that defends against the quantum fuzzy adversarial samples of the malicious attacker, so that the system can make correct decisions.
Example 2
Based on embodiment 1 above, with flow charts shown in figures 1 and 2, the adversarial defense module adopts the first type of defense strategy: a quantum fuzzy machine learning adversarial sample recognizer that blocks adversarial samples submitted by malicious attackers, thereby achieving the defense purpose.
The defense method of the quantum fuzzy machine learning adversarial sample recognizer is as follows:
The recognizer is trained and tested with a binary classification method. It recognizes quantum fuzzy data samples |x_i⟩ submitted by legitimate users, so that correct decisions are made, and it identifies quantum fuzzy adversarial samples |x_i^adv⟩ submitted by malicious attackers, blocking their attacks.
The binary classification method comprises:
Splitting the quantum fuzzy data samples into a test set D_test and a training set D_train of quantum fuzzy data samples.
Applying the FGSM construction to all samples in D_test and D_train to obtain a test set D_test^adv and a training set D_train^adv of quantum fuzzy adversarial samples.
Performing recognition on the training sets D_train and D_train^adv so as to train and fit the quantum fuzzy machine learning system; performing recognition on the test sets D_test and D_test^adv to test the accuracy of the quantum fuzzy machine learning system, which then makes correct decisions for correct quantum fuzzy data samples.
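As a rough classical illustration of this first defense strategy, the sketch below fits a nearest-centroid recognizer on a training split of clean samples and shifted "adversarial" copies, then checks it on a held-out test split. The function names, the shift-based attack, and the centroid classifier are all assumptions; the patent's recognizer operates on quantum fuzzy states:

```python
import numpy as np

def nearest_centroid_fit(clean_train, adv_train):
    """Fit a minimal two-class recognizer: one centroid per class."""
    return clean_train.mean(axis=0), adv_train.mean(axis=0)

def is_adversarial(x, model):
    """Classify a sample by its nearer centroid."""
    c_clean, c_adv = model
    return np.linalg.norm(x - c_adv) < np.linalg.norm(x - c_clean)

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=(50, 2))          # legitimate samples
adv = clean + 0.5                                   # FGSM-like shifted copies
model = nearest_centroid_fit(clean[:40], adv[:40])  # training split
# held-out test split measures recognizer accuracy
test_ok = sum(not is_adversarial(x, model) for x in clean[40:]) \
        + sum(bool(is_adversarial(x, model)) for x in adv[40:])
```

The train/test split mirrors the D_train/D_test construction above: the recognizer is fit on the training sets and verified on the held-out test sets.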
Example 3
Based on embodiment 1 above, with flow charts shown in figures 1 and 3, the adversarial defense module adopts the second type of defense strategy: the input quantum fuzzy mixed sample (comprising quantum fuzzy data samples and quantum fuzzy adversarial samples) is reconstructed, so that the quantum fuzzy machine learning system makes correct decisions, achieving the defense purpose.
The defense method of the quantum fuzzy machine learning adversarial sample corrector comprises:
inputting a quantum fuzzy mixed sample, which is the union of the quantum fuzzy data samples of legitimate users and the quantum fuzzy adversarial samples of malicious attackers;
and correcting the quantum fuzzy mixed sample by a reconstruction method, so that the quantum fuzzy machine learning system makes correct decisions.
The reconstruction method uses a self-encoder and comprises:
training and modeling on the quantum fuzzy data samples to obtain a self-encoder;
mapping the quantum fuzzy adversarial samples back to the original quantum fuzzy data samples through the self-encoder;
finally, computing a reconstruction error for the quantum fuzzy mixed sample to judge whether the correction succeeded;
and delivering the successfully corrected samples to the quantum fuzzy machine learning system for processing, which makes correct decisions.
The specific implementation is as follows:
The reconstruction method trains a self-encoder:
The quantum fuzzy data samples |x_i⟩ are used for training and modeling to obtain a self-encoder. The self-encoder leaves normal samples (quantum fuzzy data samples) essentially unchanged, while quantum fuzzy adversarial samples are mapped back to the original quantum fuzzy data samples.
The self-encoder is trained so that the reconstruction loss on the quantum fuzzy data samples is minimal, where the loss function is:

L(X_train) = ‖X_train − AE(X_train)‖²,

X_train being the quantum fuzzy data samples used for training, and AE(·) the output produced by the encoder for an input quantum fuzzy data sample.
To test the generalization capability of the self-encoder, quantum fuzzy adversarial samples in the training set are also used to train it.
The reconstruction error is computed for all samples (including quantum fuzzy data samples and quantum fuzzy adversarial samples) as:

E(x) = ‖x − AE(x)‖²,

where x is a sample input to the self-encoder. If the reconstruction error is below a small threshold, the correction is successful; otherwise it fails. The reconstructed sample is then delivered to the quantum fuzzy machine learning system for processing, which makes a correct decision.
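A minimal classical sketch of the self-encoder corrector, assuming a linear (PCA-style) autoencoder fitted on clean samples: on-manifold samples reconstruct with near-zero error, while off-manifold (adversarial) samples yield a large reconstruction error that the threshold test can flag. The linear form and the choice k=1 are simplifying assumptions:

```python
import numpy as np

def fit_linear_autoencoder(X_train, k=1):
    """Fit a linear self-encoder: keep the top-k principal directions
    of the clean training samples."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    W = Vt[:k]                        # k x d encoder/decoder weights
    return mu, W

def reconstruct(x, model):
    """Project a sample onto the learned clean-data subspace."""
    mu, W = model
    return mu + (x - mu) @ W.T @ W

def reconstruction_error(x, model):
    """Squared reconstruction error E(x) = ||x - AE(x)||^2."""
    return float(np.sum((x - reconstruct(x, model)) ** 2))

# clean samples lie on the line x2 = 2*x1; an off-line sample is pulled
# back toward the line, and its error reveals whether correction succeeded
t = np.linspace(-1.0, 1.0, 20)
X_clean = np.stack([t, 2 * t], axis=1)
model = fit_linear_autoencoder(X_clean, k=1)
err_clean = reconstruction_error(np.array([0.3, 0.6]), model)   # on the line
err_adv = reconstruction_error(np.array([0.3, -0.6]), model)    # off the line
```

Thresholding `reconstruction_error` implements the success/failure test described above: small error means the sample was corrected onto the clean-data manifold.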
Example 4
Based on embodiment 1 above, with flow charts shown in figures 1 and 4, the adversarial defense module adopts the third type of defense strategy: a highly robust quantum fuzzy machine learning system that can effectively resist the quantum fuzzy adversarial sample attacks of malicious attackers, achieving the defense purpose.
The highly robust quantum fuzzy machine learning system defense method comprises:
merging the training set of legitimate quantum fuzzy data samples with the quantum fuzzy adversarial samples;
feeding the merged samples into the quantum fuzzy machine learning system for training and fitting the model, thereby enhancing the robustness of the quantum fuzzy machine learning system;
and finding the optimal model parameters that minimize the attack risk of the quantum fuzzy adversarial samples.
To minimize the attack risk of the quantum fuzzy adversarial samples, the loss function must be re-optimized and minimized with respect to the parameters θ. The loss function predicts the quality of the model: the smaller the loss, the better the model, and the trained model is the highly robust quantum fuzzy machine learning system. The optimized objective after adding the quantum fuzzy adversarial samples is:

θ* = argmin_θ L(h(|x_i^adv⟩; θ), y_i),

where |x_i^adv⟩ = U(θ_adv)|x_i⟩ and θ* is the best model parameter after quantum fuzzy adversarial training. After merging the training set of normal quantum fuzzy data samples with the quantum fuzzy adversarial samples, the new loss function of the training model is:

L_new = (1 − β) L(D_train) + β L(D_train^adv),

where β is the proportion of quantum fuzzy adversarial samples in the entire quantum fuzzy data sample set used for training.
Finally, through training of the quantum fuzzy machine learning system, the attack effect of the quantum fuzzy adversarial samples is minimized, so that the quantum fuzzy machine learning system makes correct decisions.
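The combined adversarial-training objective can be sketched as a weighted mixture of the average loss on clean samples and on adversarial samples. The (1−β)/β weighting and the toy squared loss are illustrative assumptions:

```python
import numpy as np

def mixed_loss(loss_fn, clean, adv, beta=0.5):
    """Adversarial-training objective: combine the average clean-sample
    loss and the average adversarial-sample loss, with beta the
    adversarial fraction of the training data."""
    l_clean = float(np.mean([loss_fn(x, y) for x, y in clean]))
    l_adv = float(np.mean([loss_fn(x, y) for x, y in adv]))
    return (1.0 - beta) * l_clean + beta * l_adv

# toy squared loss for a scalar model h(x) = w * x with fixed w = 1
loss = lambda x, y: (1.0 * x - y) ** 2
clean = [(1.0, 1.0), (2.0, 2.0)]       # zero loss on clean data
adv = [(1.5, 1.0), (2.5, 2.0)]         # perturbed inputs, same labels
total = mixed_loss(loss, clean, adv, beta=0.5)
```

Minimizing this mixed objective over the model parameters is what yields the robust parameters θ* described above: the model must stay accurate on both the clean and the perturbed training sets.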
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.

Claims (9)

1. A quantum fuzzy machine learning adversarial defense model method, characterized by comprising the following steps:
S1, constructing quantum fuzzy data samples of legitimate users;
S2, simulating a malicious attacker to construct an attack strategy, adding a constructed perturbation to a quantum fuzzy data sample of a legitimate user to form a quantum fuzzy adversarial sample of the malicious attacker;
S3, submitting both the quantum fuzzy data samples of legitimate users and the quantum fuzzy adversarial samples of the malicious attacker to a quantum fuzzy machine learning system for training and learning, with the quantum fuzzy machine learning system making correct decisions;
wherein the quantum fuzzy machine learning system comprises an adversarial defense module that defends against the adversarial samples of malicious attackers, so that the system makes correct decisions.
2. The antagonistic defense model method of claim 1, wherein the antagonistic defense module is a first class of defense strategy; the first type of defense strategy is a quantum fuzzy machine learning countermeasure sample recognizer which prevents quantum fuzzy countermeasure samples submitted by malicious attackers.
3. The adversarial defense model method according to claim 1, characterized in that the adversarial defense module is a second type of defense strategy; the second type of defense strategy is a quantum fuzzy machine learning adversarial sample corrector, which corrects the quantum fuzzy adversarial samples of a malicious attacker.
4. The adversarial defense model method according to claim 1, characterized in that the adversarial defense module is a third type of defense strategy; the third type of defense strategy is a highly robust quantum fuzzy machine learning system capable of effectively defending against the quantum fuzzy adversarial samples of malicious attackers.
5. The adversarial defense model method according to claim 2, characterized in that the quantum fuzzy machine learning adversarial sample recognizer is trained and tested with a binary classification method; it recognizes quantum fuzzy data samples submitted by legitimate users so that correct decisions are made, and recognizes quantum fuzzy adversarial samples submitted by malicious attackers so that their attacks are blocked.
6. The adversarial defense model method according to claim 5, characterized in that the binary classification method comprises:
dividing the quantum fuzzy data samples into a test set of quantum fuzzy data samples and a training set of quantum fuzzy data samples;
constructing adversarial counterparts of all samples in the test set and the training set with the fast gradient sign method (FGSM), obtaining a test set of quantum fuzzy adversarial samples and a training set of quantum fuzzy adversarial samples;
classifying the training set of quantum fuzzy data samples and the training set of quantum fuzzy adversarial samples so as to fit a model; and
classifying the test set of quantum fuzzy data samples and the test set of quantum fuzzy adversarial samples so as to verify the accuracy of the model.
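The patent discloses no source code for the FGSM construction and binary recognition of claims 5 and 6. As an illustrative sketch only, the following purely classical NumPy example assumes a logistic-regression surrogate for the learning system, synthetic two-dimensional Gaussian data standing in for quantum fuzzy samples, and an arbitrary perturbation budget `EPS`; the margin-threshold detector is one simple stand-in for the adversarial sample recognizer, not the patented construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for quantum fuzzy data samples: two Gaussian classes.
X = np.vstack([rng.normal(-1.0, 0.5, (200, 2)), rng.normal(1.0, 0.5, (200, 2))])
y = np.r_[np.zeros(200), np.ones(200)]

# Victim classifier: logistic regression fitted by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

def fgsm(x, label, eps):
    """FGSM: shift each feature by eps in the direction that increases the loss."""
    grad_x = (sigmoid(x @ w + b) - label) * w  # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

EPS = 1.0  # illustrative perturbation budget
X_adv = np.array([fgsm(x, t, EPS) for x, t in zip(X, y)])

acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
acc_adv = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)

# Recognizer sketch (claim 5): flag a sample as adversarial when the victim's
# confidence margin |w.x + b| falls below a threshold fitted on both sets.
margin_clean = np.abs(X @ w + b)
margin_adv = np.abs(X_adv @ w + b)
tau = (margin_clean.mean() + margin_adv.mean()) / 2.0
det_acc = (np.mean(margin_clean >= tau) + np.mean(margin_adv < tau)) / 2.0
```

On this toy data the FGSM samples should cut the victim's accuracy sharply below its clean accuracy, while the margin detector separates most clean samples from perturbed ones.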
7. The adversarial defense model method according to claim 3, characterized in that the defense method of the quantum fuzzy machine learning adversarial sample corrector comprises:
inputting a quantum fuzzy mixed sample, the quantum fuzzy mixed sample being the combination of the quantum fuzzy data samples of legitimate users and the quantum fuzzy adversarial samples of malicious attackers; and
correcting the quantum fuzzy mixed sample with a reconstruction method, so that the quantum fuzzy machine learning system makes a correct decision.
8. The method according to claim 7, characterized in that the reconstruction method is an autoencoder-based reconstruction method comprising:
training and modeling on the quantum fuzzy data samples to obtain an autoencoder;
mapping the quantum fuzzy adversarial samples back to the original quantum fuzzy data samples through the autoencoder;
computing a reconstruction error for the quantum fuzzy mixed sample so as to judge whether the correction has succeeded; and
passing successfully corrected samples to the quantum fuzzy machine learning system for processing, so that a correct decision is made.
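Claim 8 does not fix an autoencoder architecture or a reconstruction-error threshold. A minimal classical sketch, assuming a linear (PCA-style) autoencoder, synthetic data lying near a one-dimensional manifold in place of quantum fuzzy data samples, and Gaussian noise in place of the attacker's perturbation; all numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean "quantum fuzzy data samples": points near a 1-D line in 3-D space.
direction = np.array([1.0, 2.0, 2.0]) / 3.0          # unit vector spanning the data manifold
t = rng.uniform(-2.0, 2.0, (300, 1))
X_clean = t * direction + rng.normal(0.0, 0.05, (300, 3))

# "Train" a linear autoencoder: PCA keeping the top principal component.
mu = X_clean.mean(axis=0)
_, _, Vt = np.linalg.svd(X_clean - mu, full_matrices=False)
P = Vt[:1]                                           # encoder: project onto a 1-D latent space

def reconstruct(X):
    """Decode(encode(X)): map samples back onto the learned clean manifold."""
    return (X - mu) @ P.T @ P + mu

# Adversarial samples: clean samples plus a largely off-manifold perturbation.
X_adv = X_clean + rng.normal(0.0, 0.3, X_clean.shape)

# Reconstruction error (claim 8): adversarial samples should reconstruct worse.
err_clean = np.linalg.norm(X_clean - reconstruct(X_clean), axis=1)
err_adv = np.linalg.norm(X_adv - reconstruct(X_adv), axis=1)

# Correction: distance to the original clean sample before vs. after reconstruction.
dist_before = np.linalg.norm(X_adv - X_clean, axis=1)
dist_after = np.linalg.norm(reconstruct(X_adv) - X_clean, axis=1)
```

Because the perturbation is mostly orthogonal to the learned manifold, projecting through the autoencoder both raises the reconstruction error of adversarial samples (flagging them) and pulls corrected samples back toward their clean originals.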
9. The adversarial defense model method according to claim 4, characterized in that the defense method of the highly robust quantum fuzzy machine learning system comprises:
merging the training set of legitimate quantum fuzzy data samples with the quantum fuzzy adversarial samples;
putting the merged samples into the quantum fuzzy machine learning system for training so as to fit a model;
finding the optimal model parameters so as to minimize the attack risk posed by the quantum fuzzy adversarial samples; and
finally, the quantum fuzzy machine learning system makes a correct decision.
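The adversarial-training scheme of claim 9 can likewise only be sketched classically here: the patent does not specify the learning system, so the logistic-regression model, the Gaussian toy data, and the FGSM budget `EPS` below are all illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, steps=500, lr=0.1):
    """Logistic regression fitted by plain gradient descent (toy learning system)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

# Legitimate data samples (toy stand-in): two Gaussian classes.
X = np.vstack([rng.normal(-1.0, 0.5, (200, 2)), rng.normal(1.0, 0.5, (200, 2))])
y = np.r_[np.zeros(200), np.ones(200)]

w0, b0 = train_logreg(X, y)

# FGSM adversarial samples built against the fitted model.
EPS = 0.5
grad = (sigmoid(X @ w0 + b0) - y)[:, None] * w0
X_adv = X + EPS * np.sign(grad)

# Claim 9: merge the clean and adversarial training sets, then refit the model.
X_merged = np.vstack([X, X_adv])
y_merged = np.r_[y, y]
w1, b1 = train_logreg(X_merged, y_merged)

acc_clean = np.mean((sigmoid(X @ w1 + b1) > 0.5) == y)
acc_adv = np.mean((sigmoid(X_adv @ w1 + b1) > 0.5) == y)
```

The refit model is trained to classify the perturbed samples under their correct labels, so it retains high clean accuracy while also handling the adversarial samples it was hardened against.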
CN202011433028.6A 2020-12-10 2020-12-10 Learning countermeasure defense model method for quantum fuzzy machine Pending CN112488225A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011433028.6A CN112488225A (en) 2020-12-10 2020-12-10 Learning countermeasure defense model method for quantum fuzzy machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011433028.6A CN112488225A (en) 2020-12-10 2020-12-10 Learning countermeasure defense model method for quantum fuzzy machine

Publications (1)

Publication Number Publication Date
CN112488225A true CN112488225A (en) 2021-03-12

Family

ID=74941133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011433028.6A Pending CN112488225A (en) 2020-12-10 2020-12-10 Learning countermeasure defense model method for quantum fuzzy machine

Country Status (1)

Country Link
CN (1) CN112488225A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113468957A (en) * 2021-05-25 2021-10-01 华东师范大学 Multi-view defense method based on noise reduction self-coding
CN113297575A (en) * 2021-06-11 2021-08-24 浙江工业大学 Multi-channel graph vertical federal model defense method based on self-encoder
CN113297575B (en) * 2021-06-11 2022-05-17 浙江工业大学 Multi-channel graph vertical federal model defense method based on self-encoder
CN117407922A (en) * 2023-12-11 2024-01-16 成都信息工程大学 Federal learning privacy protection system and method based on quantum noise
CN117407922B (en) * 2023-12-11 2024-03-22 成都信息工程大学 Federal learning privacy protection system and method based on quantum noise

Similar Documents

Publication Publication Date Title
Ji et al. Model-reuse attacks on deep learning systems
Yang et al. Defending model inversion and membership inference attacks via prediction purification
CN112488225A (en) Learning countermeasure defense model method for quantum fuzzy machine
Ortiz-Jiménez et al. Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness
Nesti et al. Detecting adversarial examples by input transformations, defense perturbations, and voting
Jebreel et al. Fl-defender: Combating targeted attacks in federated learning
Liu et al. Adversaries or allies? Privacy and deep learning in big data era
Vidal et al. Online masquerade detection resistant to mimicry
Bakhshi et al. Anomaly detection in encrypted internet traffic using hybrid deep learning
Lu et al. Defense against backdoor attack in federated learning
Macas et al. Adversarial examples: A survey of attacks and defenses in deep learning-enabled cybersecurity systems
Du et al. Spear or shield: Leveraging generative AI to tackle security threats of intelligent network services
Naseer The efficacy of Deep Learning and Artificial Intelligence Framework in Enhancing Cybersecurity, Challenges and Future Prospects
CN113822443A (en) Method for resisting attack and generating resisting sample
CN113222480B (en) Training method and device for challenge sample generation model
Rizvi et al. An evolutionary KNN model for DDoS assault detection using genetic algorithm based optimization
Ranzato et al. Robustness Verification of Decision Tree Ensembles.
Zhu et al. Aec_gan: unbalanced data processing decision-making in network attacks based on ACGAN and machine learning
Kwon et al. Toward Selective Membership Inference Attack against Deep Learning Model
Mori et al. Detection of cloned recognizers: a defending method against recognizer cloning attack
Wang et al. Hyperdetect: A real-time hyperdimensional solution for intrusion detection in iot networks
Das et al. State of the art: Security Testing of Machine Learning Development Systems
Alslman et al. A Robust SNMP-MIB Intrusion Detection System Against Adversarial Attacks
Rajhi et al. Adversarial Training Method for Machine Learning Model in a Resource-Constrained Environment
Shruthy et al. Phishing Prediction on Website Updates with Novel Features Through Machine Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination