CN112162515B - Anti-attack method for process monitoring system
- Publication number
- CN112162515B (application CN202011080541.1A)
- Authority
- CN
- China
- Prior art keywords
- sample
- encoder
- process monitoring
- monitoring system
- subspace
- Prior art date
- Legal status: Active (the status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
- G05B19/0428—Safety, monitoring
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/24—Pc safety
- G05B2219/24024—Safety, surveillance
Abstract
The invention discloses an adversarial attack method for a process monitoring system. The attack is crafted by a subspace transform network, which consists of a perturbation generator, a forward auto-encoder, and a reverse auto-encoder. The perturbation generator adds an extra perturbation to the original samples, while the forward and reverse auto-encoders use subspace information to give that perturbation a direction. Depending on the data the attacker can observe, the attack is divided into case 1 and case 2, and in both the process monitoring system is subjected to an adversarial attack and to data poisoning. For the process monitoring model, the invention provides a subspace transform network trained by an optimization method, which generates adversarial samples with both evasion and poisoning capability and uses them to attack the process monitoring model.
Description
Technical Field
The invention belongs to the field of industrial information security, and in particular relates to an adversarial attack method for a process monitoring system.
Background
The process monitoring system is the first line of defense for industrial production safety and is widely applied across industrial processes. Because fault data come in many types and are hard to acquire, data-driven process monitoring systems are generally built in an unsupervised way from normal data only. Such a system maps normal samples into a subspace (the monitoring space of the process monitoring system) and reconstructs them from it. A fault sample mapped into the same subspace, however, is not reconstructed well: the Squared Prediction Error (SPE) between the reconstructed sample and the original sample becomes very large. During operation, once the SPE of a query sample exceeds the control limit, the sample is judged to be a fault. The process monitoring system raises an alarm as soon as a fault sample is found, preventing the fault from causing more serious losses.
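The SPE-based detection described above can be sketched with a PCA subspace model. The sketch below is illustrative rather than the patent's exact monitor; the function names and the percentile-based control limit are assumptions (practical systems often use a chi-squared approximation for the limit instead):

```python
import numpy as np

def fit_pca_monitor(X_normal, n_components):
    """Fit a PCA monitoring subspace from normal data only."""
    mu = X_normal.mean(axis=0)
    Xc = X_normal - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T              # loading matrix, shape (m, k)
    residual = Xc - Xc @ P @ P.T         # part not captured by the subspace
    spe = np.sum(residual ** 2, axis=1)  # squared prediction error per sample
    # Empirical 99th-percentile control limit (an assumption for this sketch).
    limit = np.percentile(spe, 99)
    return mu, P, limit

def spe_statistic(X, mu, P):
    """SPE of query samples under the fitted monitoring subspace."""
    Xc = X - mu
    residual = Xc - Xc @ P @ P.T
    return np.sum(residual ** 2, axis=1)
```

A query sample is flagged as a fault whenever `spe_statistic(x_query, mu, P)` exceeds `limit`.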
The main concern of a process monitoring system is anomalies arising inside the production system. With the arrival of the industrial big-data era, however, the originally closed industrial information system has become ever more tightly and openly connected to the internet. This also means that process-industry information systems are exposed to external threats: an attacker who can acquire and tamper with sensor data can attack and poison the process monitoring system. Once a process monitoring system loses effectiveness, it poses a significant risk to the safety of the industrial system. Adversarial attacks against process monitoring systems therefore deserve serious attention, yet they have not been discussed before.
Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art and to provide an adversarial attack method for a process monitoring system. The invention uses a novel subspace transform network to mount adversarial attacks on a process monitoring system and to poison the data of the updated process monitoring system.
The purpose of the invention is achieved by the following technical scheme: in this adversarial attack method for a process monitoring system, a subspace transform network generates, through optimization, adversarial samples that have both evasion and data-poisoning capability. By default the process monitoring system detects a query sample via the squared-prediction-error statistic between the sample and its subspace reconstruction; if the squared prediction error exceeds the control limit, the sample is judged a fault. The subspace transform network comprises three parts: a perturbation generator, a forward auto-encoder, and a reverse auto-encoder. The perturbation generator is a multilayer perceptron, and the perturbation it generates for an adversarial sample is denoted g(·); the forward and reverse auto-encoders are two dimension-reducing auto-encoders whose hidden layers are smaller than their input and output layers. The attacker attacks in one of the following two cases:
case 1: an attacker observes and obtains a normal sample in the industrial production process by holding the sensorWhere n is the number of observed normal samples and m is the dimension of the sample. At this time, the goal of the subspace migration network is that the generated samples are not detected by the original process monitoring system and are added into the updated database, so that the query samples are falsely detected as faults in the modeling of the updated process monitoring system. At this time, the updated process monitoring system is subject to data poisoning and can no longer be trusted and enabledThe application is as follows.
Case 2: suppose the attacker, by controlling sensors, observes normal samples X ∈ R^(n×m) and fault samples X_f ∈ R^(n'×m) from the industrial production process, where n is the number of observed normal samples, m is the sample dimension, and n' is the number of observed fault samples. The goal of the subspace transform network is then that the generated adversarial samples are not detected by the original process monitoring system and are added to the update database, so that the fault samples are treated as normal and missed by the process monitoring system remodeled on the updated data. The updated process monitoring system has then been poisoned and can no longer detect the fault.
Further, in case 1, the training process of the subspace transform network is as follows:
Step 1.1: taking the obtained X as input, train the forward auto-encoder with the loss function
L_FAE = (1/n) Σ_{i=1}^{n} ||x_i - x̂_i||²,
where x̂_i is the reconstruction of x_i under the forward auto-encoder. The role of the forward auto-encoder is to construct the first subspace, the normal-data subspace, for the perturbation generator.
Step 1.2: feed the obtained X into the perturbation generator to obtain the adversarial samples after the perturbation is added:
X_adv = X + g(X).
At the same time, taking X_adv as input, train the reverse auto-encoder with the loss function
L_RAE = (1/n) Σ_{i=1}^{n} ||x_adv,i - x̃_adv,i||²,
where x̃_adv,i is the reconstruction of x_adv,i under the reverse auto-encoder. The reverse auto-encoder constructs the second subspace, the poisoned-data subspace, for the perturbation generator; in case 1 this subspace is driven away from the sample space of the original normal samples.
Step 1.3: taking X_adv as test data of the forward auto-encoder, obtain its test loss
L_FAE^test = (1/n) Σ_{i=1}^{n} ||x_adv,i - x̂_adv,i||².
Minimizing this loss means that, in the first subspace, the adversarial samples cannot be detected by the process monitoring system.
Taking X as test data of the reverse auto-encoder, obtain its test loss
L_RAE^test = (1/n) Σ_{i=1}^{n} ||x_i - x̃_i||².
Driving this loss up means that, in the second subspace, a process monitoring system updated on the adversarial samples falsely detects the normal samples.
Step 1.4: the perturbation generator is trained by optimizing the loss function
L_GOP = α L_FAE^test + β L_RAE + γ L_RAE^test,
so that it generates perturbations directed by the two subspaces and yields adversarial samples satisfying the conditions, where α is the weighting factor of the forward auto-encoder test loss, β that of the reverse auto-encoder training loss, and γ that of the reverse auto-encoder test loss.
Step 1.5: repeat steps 1.3-1.4 until adversarial samples meeting the requirements are produced.
Further, in step 1.4, a perturbation loss L_per can be added to L_GOP to limit the size of the perturbation generated by the perturbation generator:
L_per = max(0, (1/n) Σ_{i=1}^{n} ||g(x_i)||² - c),
where c is a threshold that bounds the perturbation.
Further, in case 2, the training process of the subspace transform network is as follows:
Step 2.1: taking the obtained normal samples X as input, train the forward auto-encoder with the loss function
L_FAE = (1/n) Σ_{i=1}^{n} ||x_i - x̂_i||²,
where x̂_i is the reconstruction of x_i under the forward auto-encoder. The role of the forward auto-encoder is to construct the first subspace, the normal-data subspace, for the perturbation generator.
Step 2.2: feed the obtained fault samples X_f into the perturbation generator to obtain the adversarial samples after the perturbation is added:
X_adv = X_f + g(X_f).
At the same time, taking X_adv as input, train the reverse auto-encoder with the loss function
L_RAE = (1/n') Σ_{i=1}^{n'} ||x_adv,i - x̃_adv,i||²,
where x̃_adv,i is the reconstruction of x_adv,i under the reverse auto-encoder. The reverse auto-encoder constructs the second subspace, the poisoned-data subspace, for the perturbation generator; in case 2 this subspace moves toward the sample space of the fault samples.
Step 2.3: taking X_adv as test data of the forward auto-encoder, obtain its test loss
L_FAE^test = (1/n') Σ_{i=1}^{n'} ||x_adv,i - x̂_adv,i||².
Minimizing this loss means that, in the first subspace, the adversarial samples cannot be detected by the process monitoring system.
Taking the fault samples X_f as test data of the reverse auto-encoder, obtain its test loss
L_RAE^test = (1/n') Σ_{i=1}^{n'} ||x_f,i - x̃_f,i||².
Minimizing this loss means that, in the second subspace, a process monitoring system updated on the adversarial samples can no longer detect the fault.
Step 2.4: the perturbation generator is trained by optimizing the loss function
L_GOP = α L_FAE^test + β L_RAE + γ L_RAE^test,
so that it generates perturbations directed by the two subspaces and yields adversarial samples satisfying the conditions, where α is the weighting factor of the forward auto-encoder test loss, β that of the reverse auto-encoder training loss, and γ that of the reverse auto-encoder test loss.
Step 2.5: repeat steps 2.3-2.4 until adversarial samples meeting the requirements are produced.
Further, in step 2.4, a perturbation loss L_per can be added to L_GOP to limit the size of the perturbation generated by the perturbation generator:
L_per = max(0, (1/n') Σ_{i=1}^{n'} ||g(x_f,i)||² - c),
where c is a threshold that bounds the perturbation.
The beneficial effects of the invention are: the invention uses industrial data to craft adversarial samples by training a subspace transform network. The network consists of a perturbation generator, a forward auto-encoder, and a reverse auto-encoder. The perturbation generator adds an extra perturbation to the original samples, while the forward and reverse auto-encoders use subspace information to give that perturbation a direction. Depending on the data the attacker can observe, the attack is divided into case 1 and case 2, and in both the process monitoring system is subjected to an adversarial attack and to data poisoning. For the process monitoring model, the invention provides a subspace transform network trained by an optimization method, which generates adversarial samples with both evasion and poisoning capability and uses them to attack the process monitoring model.
Drawings
FIG. 1 is a schematic diagram of the structure of the subspace transform network;
FIG. 2 is a flow chart of the Tennessee Eastman process;
FIG. 3 is a schematic diagram of the SPE statistics of normal data and its adversarial samples under the PCA process monitoring system in case 1;
FIG. 4 is a schematic diagram of the SPE statistics of query samples under the PCA process monitoring systems updated after poisoning with normal data and with its adversarial samples in case 1;
FIG. 5 is a schematic diagram of the SPE statistics of fault data and its adversarial samples under the PCA process monitoring system in case 2;
FIG. 6 is a schematic diagram of the SPE statistics of normal and fault query samples under the PCA process monitoring system updated after poisoning in case 2.
Detailed Description
The adversarial attack method for a process monitoring system proposed by the invention is described in further detail below with reference to specific embodiments.
The invention is an adversarial attack method for a process monitoring system. It employs a Subspace Transform Network (STN) comprising three parts: a perturbation Generator (GOP), a Forward Auto-Encoder (FAE), and a Reverse Auto-Encoder (RAE). The perturbation generator is a multilayer perceptron, and the perturbation it generates for an adversarial sample is denoted g(·); the forward and reverse auto-encoders are two dimension-reducing auto-encoders whose hidden layers are smaller than their input and output layers. Through optimization, the STN generates adversarial samples with both evasion and data-poisoning capability.
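As an illustration of these three components, the sketch below uses a linear auto-encoder (whose optimal dimension-reducing solution can be computed in closed form via SVD) and a one-hidden-layer perceptron with random placeholder weights for the generator. The class names are assumptions, and in the actual method the generator's weights would be trained against the L_GOP objective rather than left random:

```python
import numpy as np

class LinearAutoencoder:
    """Dimension-reducing auto-encoder with a hidden layer of size k < m.
    With linear activations and squared error, the optimal encoder/decoder
    span the top-k principal subspace, so we fit it in closed form."""
    def __init__(self, k):
        self.k = k
    def fit(self, X):
        self.mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mu, full_matrices=False)
        self.P = Vt[:self.k].T           # (m, k) encoder/decoder weights
        return self
    def reconstruct(self, X):
        Xc = X - self.mu
        return self.mu + Xc @ self.P @ self.P.T
    def loss(self, X):
        # Mean squared reconstruction error, i.e. the average SPE.
        return float(np.mean(np.sum((X - self.reconstruct(X)) ** 2, axis=1)))

class PerturbationGenerator:
    """One-hidden-layer perceptron g(.); weights are random placeholders."""
    def __init__(self, m, hidden, rng):
        self.W1 = rng.normal(scale=0.1, size=(m, hidden))
        self.W2 = rng.normal(scale=0.1, size=(hidden, m))
    def __call__(self, X):
        return np.tanh(X @ self.W1) @ self.W2

# Forward auto-encoder on normal data, generator, and reverse auto-encoder
# on the perturbed (adversarial) samples:
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))   # synthetic normal data
fae = LinearAutoencoder(k=3).fit(X)
gop = PerturbationGenerator(m=8, hidden=16, rng=rng)
X_adv = X + gop(X)                                        # adversarial samples
rae = LinearAutoencoder(k=3).fit(X_adv)
```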
The process monitoring system considered by the invention by default detects a query sample via the Squared Prediction Error (SPE) statistic between the sample and its subspace reconstruction; if the SPE exceeds the control limit, the sample is judged a fault.
Case 1:
Suppose the attacker, by controlling sensors, observes normal samples X ∈ R^(n×m) from the industrial production process, where n is the number of observed normal samples, m is the sample dimension, and R denotes the real numbers. The goal of the STN is then that the generated adversarial samples are not detected by the original process monitoring system and are added to the update database, so that normal query samples are falsely detected as faults by the process monitoring system remodeled on the updated data. The updated process monitoring system has then been poisoned and can no longer be trusted or used. The STN is trained as follows:
step 1: according to the obtained normal sampleAs input, by the following loss functionTraining results in a forward autoencoder FAE:
wherein the content of the first and second substances,is thatReconstructed samples under FAE. The significance of FAE is to construct a first subspace of normal data for the GOP.
Step 2: feed the obtained normal samples X into the GOP to obtain the adversarial samples after the perturbation g(·) is added:
X_adv = X + g(X).
At the same time, taking the adversarial samples X_adv as input, train the reverse auto-encoder RAE with the loss function
L_RAE = (1/n) Σ_{i=1}^{n} ||x_adv,i - x̃_adv,i||²,
where x̃_adv,i is the reconstruction of x_adv,i under the RAE. The RAE constructs the second subspace, the poisoned-data subspace, for the GOP, and this subspace is driven away from the sample space of the original normal samples.
Step 3: taking X_adv as test data of the FAE gives its test loss
L_FAE^test = (1/n) Σ_{i=1}^{n} ||x_adv,i - x̂_adv,i||²,
whose optimization aims to make the adversarial samples undetectable by the process monitoring system in the first subspace.
Taking X as test data of the RAE gives its test loss
L_RAE^test = (1/n) Σ_{i=1}^{n} ||x_i - x̃_i||²,
whose optimization aims to make a process monitoring system updated on the adversarial samples falsely detect the normal samples in the second subspace.
Step 4: the GOP is trained by optimizing the loss function
L_GOP = α L_FAE^test + β L_RAE + γ L_RAE^test,
so that it generates perturbations directed by the two subspaces and yields adversarial samples satisfying the conditions, where α > 0 is the weighting factor of the FAE test loss L_FAE^test, β > 0 that of the RAE training loss L_RAE, and γ < 0 that of the RAE test loss L_RAE^test.
Preferably, a perturbation loss L_per can be added to L_GOP to limit the size of the perturbation produced by the GOP:
L_per = max(0, (1/n) Σ_{i=1}^{n} ||g(x_i)||² - c),
where c is a threshold that bounds the perturbation, and the combined objective is L_GOP′ = L_GOP + L_per.
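The combined case-1 objective with the optional perturbation penalty can be written out as below. The hinge form of L_per is an assumption consistent with "a threshold for setting the disturbance", and the helper names are illustrative:

```python
import numpy as np

def perturbation_loss(delta, c):
    """L_per: zero while the mean squared perturbation stays below the
    threshold c, growing linearly once it exceeds c (assumed hinge form)."""
    return max(0.0, float(np.mean(np.sum(delta ** 2, axis=1))) - c)

def l_gop_prime(l_fae_test, l_rae_train, l_rae_test, delta,
                alpha=1.0, beta=1.0, gamma=-1.0, c=0.5):
    """Case-1 weighting: alpha > 0 and beta > 0 pull the adversarial samples
    into the normal (FAE) subspace and tighten the poisoned (RAE) subspace
    around them, while gamma < 0 pushes the poisoned subspace away from the
    normal data."""
    return (alpha * l_fae_test + beta * l_rae_train
            + gamma * l_rae_test + perturbation_loss(delta, c))
```

Case 2 uses the same form with gamma > 0, since there the RAE test loss on the fault samples is minimized rather than maximized.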
Step 5: repeat steps 3-4 until adversarial samples meeting the requirements are generated.
Case 2:
Suppose the attacker, by controlling sensors, observes normal samples X ∈ R^(n×m) and fault samples X_f ∈ R^(n'×m) from the industrial production process, where n is the number of observed normal samples, m is the sample dimension, and n' is the number of observed fault samples. The goal of the STN is then that the generated adversarial samples are not detected by the original process monitoring system and are added to the update database, so that the fault samples are treated as normal and missed by the process monitoring system remodeled on the updated data. The updated process monitoring system has then been poisoned and can no longer detect the fault. The STN is trained as follows:
step 1: according to the obtained normal sampleAs input, by the following loss functionTraining results in a forward autoencoder FAE:
wherein the content of the first and second substances,is thatReconstructed samples under FAE. The significance of FAE is to construct a first subspace of normal data for the GOP.
Step 2: feed the obtained fault samples X_f into the GOP to obtain the adversarial samples after the perturbation is added:
X_adv = X_f + g(X_f).
At the same time, taking the adversarial samples X_adv as input, train the reverse auto-encoder RAE with the loss function
L_RAE = (1/n') Σ_{i=1}^{n'} ||x_adv,i - x̃_adv,i||²,
where x̃_adv,i is the reconstruction of x_adv,i under the RAE. The RAE constructs the second subspace, the poisoned-data subspace, for the GOP, and this subspace moves toward the sample space of the fault samples.
Step 3: taking X_adv as test data of the FAE gives its test loss
L_FAE^test = (1/n') Σ_{i=1}^{n'} ||x_adv,i - x̂_adv,i||²,
whose optimization aims to make the adversarial samples undetectable by the process monitoring system in the first subspace.
Taking the fault samples X_f as test data of the RAE gives its test loss
L_RAE^test = (1/n') Σ_{i=1}^{n'} ||x_f,i - x̃_f,i||²,
whose optimization aims to make a process monitoring system updated on the adversarial samples unable to detect the fault in the second subspace.
Step 4: the GOP is trained by optimizing the loss function
L_GOP = α L_FAE^test + β L_RAE + γ L_RAE^test,
so that it generates perturbations directed by the two subspaces and yields adversarial samples satisfying the conditions, where α > 0 is the weighting factor of the FAE test loss, β > 0 that of the RAE training loss, and γ > 0 that of the RAE test loss. Preferably, a perturbation loss L_per can be added to L_GOP to limit the size of the perturbation produced by the GOP:
L_per = max(0, (1/n') Σ_{i=1}^{n'} ||g(x_f,i)||² - c),
where c is a threshold that bounds the perturbation.
Step 5: repeat steps 3-4 until adversarial samples meeting the requirements are generated.
The method is illustrated below with a concrete example, the TE (Tennessee Eastman) process. The TE process is a standard data set widely used in fault diagnosis and fault classification; the full data set contains 53 process variables, and its flow sheet is shown in FIG. 2. The process consists of five operating units, namely a gas-liquid separation column, a continuous stirred-tank reactor, a dephlegmator, a centrifugal compressor, and a reboiler. It can be described by a number of algebraic and differential equations, and its process sensor data are characterized mainly by nonlinearity and strong coupling.
The TE process can simulate 21 artificially set fault types, of which 16 are known and 5 unknown; the fault types include step changes in flow, slow ramp increases, valve sticking, and so on, covering typical nonlinear and dynamic faults. For this process, all 53 process variables are used as modeling variables, and fault 14 (reactor cooling-water valve sticking) is selected for the adversarial-attack and data-poisoning experiments of the two cases of the present application. The experiments are divided into case 1 (500 normal samples build the PCA process monitoring system; 250 normal samples serve as the observed normal samples to build the STN model; the adversarial samples crafted from those 250 normal samples build the updated PCA process monitoring system; and 300 normal samples serve as query samples) and case 2 (500 normal samples build the PCA process monitoring system; 250 normal samples and 250 fault-14 samples serve as the observed samples to build the STN model; the adversarial samples crafted from the 250 fault-14 samples build the updated PCA process monitoring system; and 160 normal samples and 300 fault-14 samples serve as query samples).
As can be seen from FIG. 3, in case 1 the SPE statistics of both the normal data and its adversarial samples lie below the control limit of the process monitoring system, meaning the process monitoring system cannot detect the adversarial samples. FIG. 4 shows the query-sample results under the process monitoring systems updated with the normal samples and with the adversarial samples, respectively: the normal query samples are judged normal by the unpoisoned process monitoring system but judged faulty by the process monitoring system updated after poisoning. This shows that in case 1 the adversarial samples generated by the STN have both evasion and data-poisoning capability.
As can be seen from FIG. 5, in case 2 the SPE statistics of the fault-14 samples far exceed the control limit of the process monitoring system, so fault 14 is easily detected; the SPE statistics of the adversarial samples crafted from the fault-14 samples, however, all lie below the control limit, so the process monitoring system cannot effectively detect them. FIG. 6 shows the query-sample results under the updated process monitoring systems in case 2: the updated system built from normal samples detects the fault-14 query samples well, whereas under the updated system built from the adversarial samples the SPE statistics of the normal samples and of most fault-14 query samples stay below the control limit. This shows that in case 2 the adversarial samples generated by the STN have both evasion and data-poisoning capability.
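The detection and false-alarm rates read off FIGS. 3 to 6 amount to counting how many SPE values exceed the control limit; a hypothetical helper (the name `alarm_rate` is not from the patent) makes the comparison explicit:

```python
import numpy as np

def alarm_rate(spe, limit):
    """Fraction of query samples whose SPE exceeds the control limit."""
    return float(np.mean(np.asarray(spe, dtype=float) > limit))
```

On normal query data this is the false-alarm rate; on fault query data it is the detection rate.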
Claims (5)
1. An adversarial attack method for a process monitoring system, characterized in that a subspace transform network generates, through optimization, adversarial samples with both evasion and data-poisoning capability; the process monitoring system by default detects a query sample via the squared-prediction-error statistic between the sample and its subspace reconstruction, and if the squared prediction error exceeds the control limit the sample is judged a fault; the subspace transform network comprises three parts: a perturbation generator, a forward auto-encoder, and a reverse auto-encoder; the perturbation generator is a multilayer perceptron, and the perturbation it generates for an adversarial sample is denoted g(·); the forward and reverse auto-encoders are two dimension-reducing auto-encoders whose hidden layers are smaller than their input and output layers; the attacker attacks in one of the following two cases:
case 1: an attacker observes and obtains a normal sample in the industrial production process by holding the sensorWhere n is the number of observed normal samples and m is the dimension of the sample; at the moment, the goal of the subspace migration network is that the generated sample is not detected by the original process monitoring system and is added into the updated database, and the query sample is mistakenly detected as a fault in the modeling of the updated process monitoring system; this is achieved byIn time, the updated process monitoring system is poisoned by data and cannot be relied and used any more;
case 2: supposing that an attacker observes and obtains a normal sample in the industrial production process by holding the sensorAnd fault samplesWherein n is the number of observed normal samples, m is the dimensionality of the samples, and n' is the number of observed fault samples; at the moment, the goal of the subspace migration network is that the generated countermeasure sample is not detected by the original process monitoring system and is added into the updated database, and the fault sample is considered as a normal sample and is missed in the modeling of the updated process monitoring system; at this time, the updated process monitoring system is subject to data poisoning and can no longer detect a fault.
2. The adversarial attack method for a process monitoring system according to claim 1, characterized in that in case 1 the training process of the subspace transform network is:
step 1.1: taking the obtained X as input, train the forward auto-encoder with the loss function
L_FAE = (1/n) Σ_{i=1}^{n} ||x_i - x̂_i||²,
where x̂_i is the reconstruction of x_i under the forward auto-encoder; the role of the forward auto-encoder is to construct the first subspace, the normal-data subspace, for the perturbation generator;
step 1.2: feed the obtained X into the perturbation generator to obtain the adversarial samples after the perturbation is added:
X_adv = X + g(X);
at the same time, taking X_adv as input, train the reverse auto-encoder with the loss function
L_RAE = (1/n) Σ_{i=1}^{n} ||x_adv,i - x̃_adv,i||²,
where x̃_adv,i is the reconstruction of x_adv,i under the reverse auto-encoder; the reverse auto-encoder constructs the second subspace, the poisoned-data subspace, for the perturbation generator, and this subspace deviates from the sample space of the original normal samples;
step 1.3: taking X_adv as test data of the forward auto-encoder, obtain its test loss
L_FAE^test = (1/n) Σ_{i=1}^{n} ||x_adv,i - x̂_adv,i||²;
minimizing this loss means that, in the first subspace, the adversarial samples cannot be detected by the process monitoring system;
taking X as test data of the reverse auto-encoder, obtain its test loss
L_RAE^test = (1/n) Σ_{i=1}^{n} ||x_i - x̃_i||²;
driving this loss up means that, in the second subspace, a process monitoring system updated on the adversarial samples falsely detects the normal samples;
step 1.4: the perturbation generator is trained by optimizing the loss function
L_GOP = α L_FAE^test + β L_RAE + γ L_RAE^test,
so that it generates perturbations directed by the two subspaces and yields adversarial samples satisfying the conditions, where α is the weighting factor of the forward auto-encoder test loss, β that of the reverse auto-encoder training loss, and γ that of the reverse auto-encoder test loss;
step 1.5: repeat steps 1.3-1.4 until adversarial samples meeting the requirements are produced.
3. The adversarial attack method for a process monitoring system according to claim 2, characterized in that in step 1.4 a perturbation loss L_per can further be added to L_GOP to limit the size of the perturbation generated by the perturbation generator:
L_per = max(0, (1/n) Σ_{i=1}^{n} ||g(x_i)||² - c),
where c is a threshold that bounds the perturbation.
4. The adversarial attack method for a process monitoring system according to claim 1, characterized in that in case 2 the training process of the subspace transform network is:
step 2.1: taking the obtained normal samples X as input, train the forward auto-encoder with the loss function
L_FAE = (1/n) Σ_{i=1}^{n} ||x_i - x̂_i||²,
where x̂_i is the reconstruction of x_i under the forward auto-encoder; the role of the forward auto-encoder is to construct the first subspace, the normal-data subspace, for the perturbation generator;
step 2.2: feed the obtained fault samples X_f into the perturbation generator to obtain the adversarial samples after the perturbation is added:
X_adv = X_f + g(X_f);
at the same time, taking X_adv as input, train the reverse auto-encoder with the loss function
L_RAE = (1/n') Σ_{i=1}^{n'} ||x_adv,i - x̃_adv,i||²,
where x̃_adv,i is the reconstruction of x_adv,i under the reverse auto-encoder; the reverse auto-encoder constructs the second subspace, the poisoned-data subspace, for the perturbation generator, and this subspace moves toward the sample space of the fault samples;
step 2.3: taking X_adv as test data of the forward auto-encoder, obtain its test loss
L_FAE^test = (1/n') Σ_{i=1}^{n'} ||x_adv,i - x̂_adv,i||²;
minimizing this loss means that, in the first subspace, the adversarial samples cannot be detected by the process monitoring system;
taking the fault samples X_f as test data of the reverse auto-encoder, obtain its test loss
L_RAE^test = (1/n') Σ_{i=1}^{n'} ||x_f,i - x̃_f,i||²;
minimizing this loss means that, in the second subspace, a process monitoring system updated on the adversarial samples can no longer detect the fault;
step 2.4: the perturbation generator is trained by optimizing the loss function
L_GOP = α L_FAE^test + β L_RAE + γ L_RAE^test,
so that it generates perturbations directed by the two subspaces and yields adversarial samples satisfying the conditions, where α is the weighting factor of the forward auto-encoder test loss, β that of the reverse auto-encoder training loss, and γ that of the reverse auto-encoder test loss;
step 2.5: repeat steps 2.3-2.4 until adversarial samples meeting the requirements are produced.
5. The adversarial attack method for a process monitoring system according to claim 4, characterized in that in step 2.4 a perturbation loss L_per can further be added to L_GOP to limit the size of the perturbation generated by the perturbation generator:
L_per = max(0, (1/n') Σ_{i=1}^{n'} ||g(x_f,i)||² - c),
where c is a threshold that bounds the perturbation.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011080541.1A (CN112162515B) | 2020-10-10 | 2020-10-10 | Anti-attack method for process monitoring system |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112162515A | 2021-01-01 |
| CN112162515B | 2021-08-03 |
Family
ID=73868016
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011080541.1A (CN112162515B, Active) | Anti-attack method for process monitoring system | 2020-10-10 | 2020-10-10 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN112162515B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113361648B (en) * | 2021-07-07 | 2022-07-05 | 浙江大学 | Information fingerprint extraction method for safe industrial big data analysis |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019075771A1 (en) * | 2017-10-20 | 2019-04-25 | Huawei Technologies Co., Ltd. | Self-training method and system for semi-supervised learning with generative adversarial networks |
CN110334806A (en) * | 2019-05-29 | 2019-10-15 | 广东技术师范大学 | Adversarial example generation method based on generative adversarial networks
CN110598400A (en) * | 2019-08-29 | 2019-12-20 | 浙江工业大学 | Defense method against highly concealed poisoning attacks based on generative adversarial networks, and application thereof
WO2020057867A1 (en) * | 2018-09-17 | 2020-03-26 | Robert Bosch Gmbh | Device and method for training an augmented discriminator |
CN111353548A (en) * | 2020-03-11 | 2020-06-30 | 中国人民解放军军事科学院国防科技创新研究院 | Robust feature deep learning method based on adversarial spatial transformation networks
WO2020143227A1 (en) * | 2019-01-07 | 2020-07-16 | 浙江大学 | Method for generating malicious sample of industrial control system based on adversarial learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108446765A (en) * | 2018-02-11 | 2018-08-24 | 浙江工业大学 | Multi-model composite defense method against adversarial attacks for deep learning
- 2020-10-10: CN application CN202011080541.1A granted as patent CN112162515B (status: Active)
Non-Patent Citations (3)
Title |
---|
Research on statistical process monitoring methods based on autoencoders; Guo Pengju; China Masters' Theses Full-text Database, Information Science and Technology; 2020-02-15 (No. 02); pp. 55-71 * |
A survey of adversarial example generation techniques; Pan Wenwen et al.; Journal of Software; 2020-01-31; Vol. 31 (No. 01); pp. 67-81 * |
Analysis of adversarial example attacks on low-dimensional industrial control network datasets; Zhou Wen et al.; Journal of Computer Research and Development; 2020-04-13 (No. 04); pp. 70-79 * |
Also Published As
Publication number | Publication date |
---|---|
CN112162515A (en) | 2021-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Feng et al. | A Systematic Framework to Generate Invariants for Anomaly Detection in Industrial Control Systems. | |
Kalech | Cyber-attack detection in SCADA systems using temporal pattern recognition techniques | |
CN110647918B (en) | Mimicry defense method against adversarial attacks on deep learning models | |
Wang et al. | Anomaly detection for industrial control system based on autoencoder neural network | |
Taylor et al. | Anomaly detection in automobile control network data with long short-term memory networks | |
Yang et al. | Anomaly-based intrusion detection for SCADA systems | |
Ghafouri et al. | Adversarial regression for detecting attacks in cyber-physical systems | |
CN112688946B (en) | Method, module, storage medium, device and system for constructing abnormality detection features | |
CN112162515B (en) | Anti-attack method for process monitoring system | |
Iturbe et al. | On the feasibility of distinguishing between process disturbances and intrusions in process control systems using multivariate statistical process control | |
CN116304959B (en) | Method and system for defending against sample attack for industrial control system | |
Jiang et al. | Attacks on data-driven process monitoring systems: Subspace transfer networks | |
Luktarhan et al. | Multi-stage attack detection algorithm based on hidden markov model | |
Ahmed et al. | Host based intrusion detection using RBF neural networks | |
CA3191230A1 (en) | Method for detecting anomalies in time series data produced by devices of an infrastructure in a network | |
Ramadan et al. | A passive isolation of sensor faults from un-stealthy attacks in uncertain nonlinear systems | |
CN115713095A (en) | Natural gas pipeline abnormity detection method and system based on hybrid deep neural network | |
Wang et al. | Catch you if pay attention: Temporal sensor attack diagnosis using attention mechanisms for cyber-physical systems | |
Sun et al. | Antibody concentration based method for network security situation awareness | |
CN113194098A (en) | Water distribution system network physical attack detection method based on deep learning | |
Rasapour et al. | Framework for detecting control command injection attacks on industrial control systems (ics) | |
CN113361648A (en) | Information fingerprint extraction method for safe industrial big data analysis | |
Cui et al. | An Improved Support Vector Machine Attack Detection Algorithm for Industry Controls System | |
US20240045410A1 (en) | Anomaly detection system and method for an industrial control system | |
CN117669651B | ARMA model-based method and system for defending against adversarial-example black-box attacks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||