CN118214619B - Gaussian mixture industrial Internet network attack detection system based on residual block - Google Patents
- Publication number
- CN118214619B (application CN202410636060.6A)
- Authority
- CN
- China
- Prior art keywords
- module
- representing
- feature
- layer
- gaussian mixture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1416—Event detection, e.g. attack signature detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
Abstract
The invention relates to a Gaussian mixture industrial Internet network attack detection system based on residual blocks, belonging to the technical field of industrial Internet security prevention and control. It solves the problem that prior-art detection systems cannot rapidly and accurately detect the various network attacks against the industrial Internet. The residual block model of the system is lightweight and easily deployed in the industrial field; a loss function is also provided with which the residual blocks can be trained effectively; simulation verification shows that most attack states can be detected with high accuracy.
Description
Technical Field
The invention relates to the technical field of safety prevention and control of industrial Internet, in particular to a Gaussian mixture industrial Internet network attack detection system based on a residual block.
Background
In recent years, the rapid development of the industrial internet has transformed manufacturing. With the continuous evolution of technologies such as the Internet of Things, artificial intelligence and big data, enterprises increasingly bring intelligence into their production flows. The interconnection and data exchange of production equipment make production lines intelligent and efficient, improving production efficiency, reducing cost and cutting resource waste. This new production mode drives the digital transformation of industry, opens broader development space for enterprises, and shapes the prospects and direction of future manufacturing.
Industry is a key target of network attacks. Industrial network attacks are characterized by targeting physical infrastructure, industrial control systems (ICS) or information technology systems, causing significant damage to critical infrastructure and production facilities, leading to production interruptions and equipment damage, and possibly even affecting public safety. Current industrial attacks operate at multiple levels, including industrial hosts, industrial networks and industrial control devices, and are propagated or executed through many channels such as the Internet, mobile devices, e-mail and shared network folders.
Industrial attack detection based on deep learning, in which a neural network is trained on a large amount of data to identify abnormal behaviors and attack patterns in the industrial network, is a hot topic of current research. With a deep learning model, the system can automatically learn the normal operation mode and detect abnormal behaviors inconsistent with it, so that potential attacks can be discovered in time. Such methods can effectively identify complex and dynamic attack patterns, improve the response speed and accuracy of industrial network security, and provide a higher level of protection for industrial control systems.
Disclosure of Invention
In view of the above problems, the invention provides a Gaussian mixture industrial Internet network attack detection system based on a residual block, which solves the problem that the detection system in the prior art cannot realize rapid and effective accurate detection on various industrial Internet network attacks.
The invention provides a Gaussian mixture industrial Internet network attack detection system based on residual blocks, which comprises a plurality of residual blocks and a classifier;
The residual block comprises an initial feature mapping layer, a RELU activation function conversion module, a feature deepening mapping layer, a Dropout regularization module and a layer normalization module;
the initial feature mapping layer and the feature deepening mapping layer are used for carrying out linear transformation on input data;
the RELU activation function conversion module is used for nonlinearly converting the output data of the linear transformation of the initial feature mapping layer into another space;
The Dropout regularization module is used for randomly discarding the neurons contained in the Dropout regularization module in the training process, and setting the output of the neurons to zero according to preset probability; scaling the output of the retained neurons;
The layer normalization module is used for normalizing the output features of the Dropout regularization module;
The classifier includes a first linear layer, a first LeakyRelu activation function assignment module, a second linear layer, a second LeakyRelu activation function assignment module, and a linear output layer module.
Optionally, the expression of the initial feature mapping layer and the feature deepening mapping layer is:

y = Wx + b

wherein x is the input data, y is the output data of the initial feature mapping layer or the feature deepening mapping layer, W is the weight matrix, and b is the bias.
Optionally, the expression of the RELU activation function conversion module is:

f(x) = max(0, x)

wherein f(x) represents the output of the RELU activation function and x its input.
Optionally, the Dropout regularization module includes a Dropout layer including a neural network.
Optionally, the expression of the normalization process is:

y = γ · (x − μ) / √(σ² + ε) + β

wherein x is the feature element vector over all feature dimensions of the input feature, μ is the corresponding mean, σ² represents the corresponding variance, and ε is a bias term; γ and β are learnable parameters with initial values 1 and 0 respectively; y is the output data of the layer normalization module.
Optionally, the expression of the LeakyRelu activation function assignment module is:

f(x) = x, x ≥ 0;  f(x) = αx, x < 0

wherein f(x) is the output of the LeakyRelu activation function, x is the input to the activation function, and α represents the negative-part slope of the activation function.
Optionally, a loss function module is further included, the loss function module including a loss function module of the gaussian mixture encoder and a cross entropy loss function module corresponding to the classification task, for training the detection system.
Optionally, the expression of the loss function module of the Gaussian mixture encoder is:

L_G = E_{q(z|x)}[−log p(x|z)] + D_KL(q(z|x) ‖ p(z))

wherein L_G represents the loss value of the Gaussian mixture self-encoding; q(z|x) represents the probability distribution of the hidden variable z given the real input variable x; p(z) is the probability distribution of the hidden variable z; D_KL(q(z|x) ‖ p(z)) represents the hidden variable loss; E_{q(z|x)}[−log p(x|z)] represents the reconstruction loss term; D_KL represents the KL divergence; E represents the computed expectation.
Optionally, the expression of the KL divergence D_KL is:

D_KL(P ‖ Q) = Σ_i P(x_i) · log( P(x_i) / Q(x_i) )

wherein P represents the true probability distribution of the random variable X; Q represents the model predictive probability distribution of the random variable X; P(x_i) represents the true probability that the random variable X equals the sample value x_i; Q(x_i) represents the model predictive probability that the random variable X equals the sample value x_i.
Optionally, the expression of the cross entropy loss function module is:

L_C = − Σ_i y_i · log(ŷ_i)

wherein L_C represents the cross entropy loss value; y_i is the true label of the i-th category, and ŷ_i is the label output by the detection system.
Compared with the prior art, the invention has at least the following beneficial effects: the invention provides a residual-block-based Gaussian mixture self-encoder industrial attack detection system whose residual block model is lightweight and easily deployed in the industrial field; a loss function is also provided with which the residual blocks can be trained effectively; simulation verification shows that most attack states can be detected with high accuracy.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention.
FIG. 1 is a schematic diagram of a detection system of the present invention;
FIG. 2 is a schematic diagram of a residual block of the present invention;
fig. 3 is a schematic diagram of a classifier of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other. In addition, the invention may be practiced otherwise than as specifically described and thus the scope of the invention is not limited by the specific embodiments disclosed herein.
Referring to figs. 1-3, a residual-block-based Gaussian mixture industrial Internet network attack detection system is disclosed, comprising a plurality of residual blocks and a classifier.
The residual block comprises an initial feature mapping layer, a RELU activation function conversion module, a feature deepening mapping layer, a Dropout regularization module and a layer normalization module.
Further, the initial feature mapping layer and the feature deepening mapping layer are basic deep-learning neural network layers, each consisting of a set of weights and biases and used for linearly transforming the input data; when data is input to the initial feature mapping layer or the feature deepening mapping layer, matrix multiplication and addition are performed on the input data.
Illustratively, the input data is network traffic data from an industrial internet network, such as IP information, transmission time, TCP (Transmission Control Protocol) window advertisements, TCP sequence numbers, etc.
Specifically, the input data is multiplied by the weight matrix and the bias is added to obtain the output data; the expression is:

y = Wx + b

wherein x is the data input to the initial feature mapping layer or feature deepening mapping layer, y is the output data of the initial feature mapping layer or feature deepening mapping layer, W is the weight matrix, and b is the bias.
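The linear mapping above can be sketched in NumPy as follows (a minimal illustration; the weight and bias values are hypothetical, not taken from the patent):

```python
import numpy as np

def linear_layer(x, W, b):
    # y = Wx + b: multiply the input by the weight matrix, then add the bias
    return W @ x + b

# illustrative 2-dimensional example
W = np.array([[1.0, 0.0],
              [0.5, 2.0]])
b = np.array([0.1, -0.1])
x = np.array([2.0, 3.0])
y = linear_layer(x, W, b)
```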
Further, the RELU activation function conversion module nonlinearly transforms the output data of the linear transformation of the initial feature mapping layer into another space.
Specifically, in the nonlinear transformation, negative values are set to zero and positive values are kept unchanged, thereby introducing a nonlinear characteristic. This nonlinear transformation helps the network learn more complex features and representations, improving the characterization capability of the model.
The RELU activation function conversion module of the invention introduces a nonlinear characteristic into the detection system and gives the network the ability to learn and fit complex patterns.
Further, the output data of the initial feature mapping layer is input to the RELU activation function conversion module, whose expression is:

f(x) = max(0, x)

wherein f(x) represents the output data of the RELU activation function and x represents its input data (i.e., the output data of the initial feature mapping layer).
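As a quick sketch, the RELU conversion can be written as:

```python
import numpy as np

def relu(x):
    # negative values become zero, positive values pass through unchanged
    return np.maximum(0.0, x)

out = relu(np.array([-1.5, 0.0, 2.0]))
```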
Further, the Dropout regularization module comprises a Dropout layer containing a neural network, used for randomly discarding some neurons (nodes) in the neural network during training, setting their outputs to zero with a certain probability, and scaling the outputs of the retained neurons.
Further, the scaling applied to the retained neurons' outputs is determined by the probability of setting a neuron's output to zero; preferably, both the scaling factor and the drop probability take values between 0 and 1.
Further, for the Dropout layer, the scaled expression is:

y = (mask ⊙ x) / (1 − p)

wherein x represents the input data of the Dropout layer; y is the output data of the Dropout layer; p represents the discard probability; 1 − p represents the retention probability; mask is a binary mask whose elements are 0 or 1: if 0, the corresponding element is set to zero, and if 1, the corresponding element is retained.
Specifically, the output data of the feature deepening mapping layer is randomly discarded according to a hyper-parameter probability; preferably, this probability is set to 0.2 during training, so that 20% of the outputs are discarded.
Specifically, during training, the Dropout layer randomly selects some neurons in each epoch and sets their outputs to 0, with each neuron having the same probability of being discarded (or deactivated).
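The training-time behaviour described above can be sketched with inverted dropout (function and variable names are illustrative):

```python
import numpy as np

def dropout(x, p, rng):
    # zero each element with probability p, then scale the survivors by
    # 1/(1-p) so the expected output matches the input on average
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)

rng = np.random.default_rng(seed=0)
x = np.ones(1000)
y = dropout(x, p=0.2, rng=rng)  # roughly 20% of outputs become zero
```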
By using the Dropout regularization module, the invention reduces the neural network's dependence on specific neurons, makes the network more robust and lowers the risk of overfitting. Randomly discarding some neurons forces the network to learn a more robust feature representation during training.
Further, the layer normalization module is a Layer Normalization module used for normalization between different layers of the neural network. Over each feature dimension it computes the mean and variance of each input feature (the output feature of the Dropout regularization module) and normalizes all input features using this mean and variance, so that the mean and variance of each input feature remain relatively stable over each feature dimension during training. The expression is as follows:
y = γ · (x − μ) / √(σ² + ε) + β

wherein x is the feature element vector over all feature dimensions of the input feature, μ is the corresponding mean, σ² represents the corresponding variance, and ε is a bias term; γ and β are learnable parameters with initial values 1 and 0 respectively; y is the output data of the layer normalization module.
Specifically, the input feature of the layer normalization module is the sum of the output data of the Dropout regularization module and a residual value, where the residual value is the data originally input to the initial feature mapping layer, added back at the output of the Dropout regularization module (a skip connection).
In particular, ε is a small number, a bias term added to the denominator to prevent the denominator from being 0.
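Putting the pieces together, one residual block's forward pass (at inference, with Dropout inactive) might look like the sketch below; the layer sizes and identity weights are purely illustrative, chosen so the result is easy to check by hand:

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # normalize over the feature dimension, then scale and shift
    return gamma * (x - x.mean()) / np.sqrt(x.var() + eps) + beta

def residual_block(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)  # initial feature mapping + RELU
    h = W2 @ h + b2                   # feature deepening mapping
    # skip connection: add the block input, then layer-normalize the sum
    return layer_norm(h + x)

I = np.eye(2)
z = np.zeros(2)
out = residual_block(np.array([1.0, 2.0]), I, z, I, z)
```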
Further, referring to fig. 3, the classifier mainly consists of a first linear layer, a first LeakyRelu activation function assignment module, a second linear layer, a second LeakyRelu activation function assignment module and a linear output layer module (classification output linear module); the classifier is connected to the residual blocks through the first linear layer. The first and second linear layers of the classifier are the same as the linear layers in the residual block and are not described again here. The expression of the LeakyRelu activation function assignment module is:
f(x) = x, x ≥ 0;  f(x) = αx, x < 0

wherein f(x) is the output of the LeakyRelu activation function, x is the input to the activation function, and α represents the negative-part slope of the activation function.
The LeakyRelu activation function assignment module adopted by the invention extends the output range of the activation function to (−∞, +∞); the value of α is usually about 0.01, giving negative inputs a very small linear component (αx) and thereby mitigating the zero-gradient problem for negative values. The LeakyRelu activation function assignment module assigns a non-zero slope to the negative values in the output of the linear layer above it.
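A one-line sketch of the LeakyRelu behaviour (α = 0.01, as suggested above):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # a small non-zero slope alpha for negative inputs avoids zero gradients
    return np.where(x >= 0.0, x, alpha * x)

out = leaky_relu(np.array([-2.0, 0.0, 3.0]))
```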
Further, the system also comprises a loss function module, wherein the loss function module comprises a loss function module of the Gaussian mixture encoder and a cross entropy loss function module corresponding to classification tasks, and is used for training the detection system.
The expression of the loss function module of the Gaussian mixture encoder is:

L_G = E_{q(z|x)}[−log p(x|z)] + D_KL(q(z|x) ‖ p(z))

wherein L_G represents the loss value of the Gaussian mixture self-encoding; q(z|x) represents the probability distribution of the hidden variable z given the real input variable x; z represents the hidden variable; x represents the real input variable; p(z) is the probability distribution of the hidden variable z; D_KL(q(z|x) ‖ p(z)) represents the hidden variable loss, characterizing the difference between the potential distribution q(z|x) learned by the encoder and the prior distribution p(z); E_{q(z|x)}[−log p(x|z)] is the reconstruction loss term, representing the difference between the output and the input of the Gaussian mixture self-encoder; D_KL represents the KL divergence; E represents the computed expectation.
Illustratively, the real input variable x is the input network traffic data parameter; the hidden variable z is a dimension-reduced representation of the input network traffic data parameter x, and the probability distribution p(z) of the hidden variable z obeys a mixed Gaussian distribution.
The expression for computing the KL divergence D_KL is:

D_KL(P ‖ Q) = Σ_i P(x_i) · log( P(x_i) / Q(x_i) )

wherein P represents the true probability distribution of the random variable X; Q represents the model predictive probability distribution of the random variable X; P(x_i) represents the true probability that the random variable X equals the sample value x_i; Q(x_i) represents the model predictive probability that the random variable X equals the sample value x_i.
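For discrete distributions the KL divergence above reduces to a short sum; the probabilities below are made up for illustration:

```python
import numpy as np

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_i P(x_i) * log(P(x_i) / Q(x_i))
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

d = kl_divergence([0.5, 0.5], [0.9, 0.1])
```

Note that the divergence is zero only when the two distributions coincide, and is asymmetric in P and Q.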
The probability distribution p(z) of the hidden variable z is a mixture Gaussian distribution, expressed as:

p(z) = Σ_{k=1}^{K} π_k · N(z; μ_k, σ_k²)

wherein K represents the total number of Gaussian distributions; π_k is the mixing weight of the k-th component; μ_k represents the mean of the k-th Gaussian distribution and σ_k² the variance of the k-th Gaussian distribution; N indicates a Gaussian distribution.
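Evaluating such a mixture density is straightforward; the two components and their weights below are arbitrary illustrative values, not parameters from the patent:

```python
import numpy as np

def gaussian_pdf(z, mu, var):
    # density of N(z; mu, sigma^2)
    return np.exp(-(z - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def mixture_pdf(z, weights, means, variances):
    # p(z) = sum_k pi_k * N(z; mu_k, sigma_k^2)
    return sum(w * gaussian_pdf(z, m, v)
               for w, m, v in zip(weights, means, variances))

# equal-weight, two-component mixture centred at -1 and +1
p0 = mixture_pdf(0.0, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```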
The expression of the cross entropy loss function module is:

L_C = − Σ_i y_i · log(ŷ_i)

wherein L_C represents the cross entropy loss value; y_i is the true label of the i-th category, and ŷ_i is the label output by the detection system.
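With one-hot true labels the cross entropy reduces to a single log term; a minimal sketch (the predicted probabilities are illustrative):

```python
import numpy as np

def cross_entropy(y_true, y_pred):
    # L_C = -sum_i y_i * log(yhat_i) over the attack categories
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(-np.sum(y_true * np.log(y_pred)))

# true class is the second category
loss = cross_entropy([0, 1, 0], [0.2, 0.7, 0.1])
```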
The total loss function value is obtained from the loss value of the Gaussian mixture encoder and the cross entropy loss value; the expression is:

L = L_G + L_C.
In another aspect of the present invention, a detection method using the detection system is provided, an evaluation index is established, and detection is performed based on the evaluation index.
The evaluation index is specifically as follows:
(True Positive): the true is abnormal, and the detection is abnormal. I.e. the anomalous data is correctly identified.
(FALSE NEGATIVE): the reality is abnormal, and the detection is normal. I.e. the exception data is not reported.
(False Positive): true is normal, and abnormal is detected. I.e. the normal data is misreported.
(True Negative): the true is normal, and the detection is normal. I.e. normal data is correctly identified.
The precision is defined as:

Precision = TP / (TP + FP)

The recall is defined as:

Recall = TP / (TP + FN)

The harmonic mean of the precision and recall is the F1 value:

F1 = 2 · Precision · Recall / (Precision + Recall)

wherein the F1 index better reflects performance because it is the harmonic mean of the precision and recall.
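The three indices can be computed directly from a confusion matrix; the counts below are hypothetical:

```python
def precision_recall_f1(tp, fn, fp, tn):
    # precision: fraction of detected anomalies that are truly abnormal
    precision = tp / (tp + fp)
    # recall: fraction of true anomalies that were detected
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fn=10, fp=30, tn=870)
```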
To further demonstrate the effectiveness of the residual-block-based Gaussian mixture industrial Internet network attack detection system, the system was verified by simulation on the UNSW-NB15 data set, which contains several different network attack modes, including Shellcode attacks, Fuzzers attacks, Exploit attacks, etc. The corresponding simulation results are shown in the table below, comparing the invention against a CNN-LSTM network (Convolutional Neural Network - Long Short-Term Memory) and Adaboost (Adaptive Boosting).
Table 1 experimental results and comparison
The experimental results show that the proposed method obtains the highest scores on F1-score and precision, verifying the effectiveness of the algorithm.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include both the first and second features being in direct contact, as well as the first and second features not being in direct contact but being in contact with each other through additional features therebetween. Moreover, a first feature being "above," "over" and "on" a second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature includes the first feature being directly under and obliquely below the second feature, or simply means that the first feature is less level than the second feature.
In the present invention, the terms "first," "second," "third," "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" refers to two or more, unless explicitly defined otherwise.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (6)
1. A Gaussian mixture industrial Internet network attack detection system based on residual blocks is characterized by comprising a plurality of residual blocks, a classifier and a loss function module;
The residual block comprises an initial feature mapping layer, a RELU activation function conversion module, a feature deepening mapping layer, a Dropout regularization module and a layer normalization module;
the initial feature mapping layer and the feature deepening mapping layer are used for carrying out linear transformation on input data;
the RELU activation function conversion module is used for nonlinearly converting the output data of the linear transformation of the initial feature mapping layer into another space;
The Dropout regularization module is used for randomly discarding the neurons contained in the Dropout regularization module in the training process, and setting the output of the neurons to zero according to preset probability; scaling the output of the retained neurons;
The layer normalization module is used for normalizing the output features of the Dropout regularization module;
The classifier comprises a first linear layer, a first LeakyRelu activation function assignment module, a second linear layer, a second LeakyRelu activation function assignment module and a linear output layer module;
The loss function module comprises a loss function module of the Gaussian mixture encoder and a cross entropy loss function module corresponding to classification tasks, and is used for training the detection system;
The expression of the loss function module of the Gaussian mixture encoder is:

L_G = E_{q(z|x)}[−log p(x|z)] + D_KL(q(z|x) ‖ p(z))

wherein L_G represents the loss value of the Gaussian mixture self-encoding; q(z|x) represents the probability distribution of the hidden variable z given the real input variable x; p(z) is the probability distribution of the hidden variable z; D_KL(q(z|x) ‖ p(z)) represents the hidden variable loss; E_{q(z|x)}[−log p(x|z)] represents the reconstruction loss term; D_KL represents the KL divergence; E represents the computed expectation;
The expression of the KL divergence $D_{\mathrm{KL}}$ is:

$$D_{\mathrm{KL}}(P\,\|\,Q) = \sum_{i} P(x_i)\log\frac{P(x_i)}{Q(x_i)}$$

Wherein, $P$ represents the true probability distribution of the random variable $X$; $Q$ represents the model predictive probability distribution of the random variable $X$; $P(x_i)$ represents the true probability that the random variable $X$ equals the sample value $x_i$; $Q(x_i)$ represents the model predictive probability that the random variable $X$ equals the sample value $x_i$;
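A small numeric check of this formula; the two distributions below are hypothetical:

```python
import math

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_i P(x_i) * log(P(x_i) / Q(x_i));
    # terms with P(x_i) = 0 contribute 0 and are skipped
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical distributions over three sample values
p = [0.5, 0.3, 0.2]   # true distribution P
q = [0.4, 0.4, 0.2]   # model predictive distribution Q
d = kl_divergence(p, q)
```

The divergence is zero exactly when the two distributions coincide, and positive otherwise, which is what makes it usable as the hidden variable loss above.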
the expression of the cross entropy loss function module is:

$$\mathcal{L}_{\mathrm{CE}} = -\sum_{i} y_i \log \hat{y}_i$$

Wherein, $\mathcal{L}_{\mathrm{CE}}$ represents the cross entropy loss value; $y_i$ is the true label of the $i$-th category, and $\hat{y}_i$ is the label output by the detection system.
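The cross entropy term can be checked numerically in the same spirit; the one-hot label and predicted probabilities below are hypothetical, and the small `eps` guarding against log(0) is an implementation convenience, not part of the claim:

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    # L_CE = -sum_i y_i * log(y_hat_i); eps guards against log(0)
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))

# Hypothetical one-hot label and predicted class probabilities
label = [0.0, 1.0, 0.0]
pred = [0.1, 0.8, 0.1]
loss = cross_entropy(label, pred)   # equals -log(0.8) for a one-hot label
```

In a training setup like the one claimed, this classification loss would be combined with the Gaussian mixture encoder loss; the claim leaves the weighting between the two unspecified.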
2. The Gaussian mixture industrial Internet network attack detection system according to claim 1, wherein the expression of the initial feature mapping layer and the feature deepening mapping layer is:

$$y = Wx + b$$

Wherein, $x$ is the input data, $y$ is the output data of the initial feature mapping layer or the feature deepening mapping layer, $W$ is the weight matrix, and $b$ is the bias.
3. The Gaussian mixture industrial Internet network attack detection system according to claim 2, wherein the expression of the ReLU activation function conversion module is:

$$h = \mathrm{ReLU}(y) = \max(0, y)$$

Wherein, $h$ represents the output of the ReLU activation function.
4. The Gaussian mixture industrial Internet network attack detection system according to claim 3, wherein the Dropout regularization module comprises a Dropout layer of a neural network.
5. The Gaussian mixture industrial Internet network attack detection system according to claim 4, wherein the expression of the normalization process is:

$$\hat{y} = \gamma \cdot \frac{x - \mu}{\sqrt{\sigma^2 + \varepsilon}} + \beta$$

Wherein, $x$ is the feature element vector over all feature dimensions of the input feature, $\mu$ is the corresponding mean, $\sigma^2$ is the corresponding variance, and $\varepsilon$ is a bias term; $\gamma$ and $\beta$ are learnable parameters with initial values of 1 and 0 respectively; $\hat{y}$ represents the output data of the layer normalization module.
6. The Gaussian mixture industrial Internet network attack detection system according to claim 5, wherein the expression of the LeakyReLU activation function assignment module is:

$$f(x) = \begin{cases} x, & x > 0 \\ \alpha x, & x \le 0 \end{cases}$$

Wherein, $f(x)$ is the output of the LeakyReLU activation function, $x$ is the input to the activation function, and $\alpha$ represents the negative-part slope of the activation function.
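A one-line check of this piecewise definition; the default slope of 0.01 is a common choice but an assumption here, since the claims leave $\alpha$ unspecified:

```python
def leaky_relu(x, alpha=0.01):
    # f(x) = x for x > 0, alpha * x otherwise; alpha is the negative-part slope
    return x if x > 0 else alpha * x
```

In the classifier of claim 1, this function would be applied elementwise after each of the first two linear layers.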
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410636060.6A CN118214619B (en) | 2024-05-22 | 2024-05-22 | Gaussian mixture industrial Internet network attack detection system based on residual block |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118214619A CN118214619A (en) | 2024-06-18 |
CN118214619B true CN118214619B (en) | 2024-07-16 |
Family
ID=91450931
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114757351A (en) * | 2022-04-24 | 2022-07-15 | 北京理工大学 | Defense method for resisting attack by deep reinforcement learning model |
CN116978096A (en) * | 2023-07-21 | 2023-10-31 | 华中科技大学 | Face challenge attack method based on generation challenge network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10931702B2 (en) * | 2018-04-24 | 2021-02-23 | Cyberfortress, Inc. | Vulnerability profiling based on time series analysis of data streams |
CN110784481B (en) * | 2019-11-04 | 2021-09-07 | 重庆邮电大学 | DDoS detection method and system based on neural network in SDN network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||