CN115118450A - Incremental dynamic weight value integrated learning intrusion detection method fusing multilevel features - Google Patents

Incremental dynamic weight value integrated learning intrusion detection method fusing multilevel features

Info

Publication number
CN115118450A
CN115118450A
Authority
CN
China
Prior art keywords
training
confidence network
new
classifier
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210534998.8A
Other languages
Chinese (zh)
Other versions
CN115118450B (en)
Inventor
罗森林
陆永鑫
潘丽敏
吴杭颐
王沛冉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202210534998.8A priority Critical patent/CN115118450B/en
Publication of CN115118450A publication Critical patent/CN115118450A/en
Application granted granted Critical
Publication of CN115118450B publication Critical patent/CN115118450B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416 Event detection, e.g. attack signature detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/50 Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to an incremental dynamic-weight ensemble learning intrusion detection method fusing multilevel features, and belongs to the field of computer and information science, in particular cyberspace security. First, the topological features of traffic are reconstructed through a mixed time window, and sensitive data flow-direction features are established for the traffic within the window; second, the topological features are fused with an attention mechanism to generate feature vectors; then primary classifiers and a deep belief network are trained, with historical knowledge added during deep belief network training; finally, the deep belief network generates dynamic weights used to construct a training set and train a secondary classifier. The invention effectively alleviates two problems of existing ensemble learning intrusion detection methods: single-level features cannot detect highly concealed attacks, and dynamic weight generation is inaccurate and not sustainable, which degrades detection performance. It thereby effectively improves the classification accuracy of the intrusion detection model.

Description

Incremental dynamic weight value integrated learning intrusion detection method fusing multilevel features
Technical Field
The invention relates to an incremental dynamic-weight ensemble learning intrusion detection method fusing multilevel features, and belongs to the technical field of computer and information science.
Background
In recent years, with the rapid development of Internet technology, network attacks have become increasingly complex and covert. Faced with increasingly severe attack behaviors, it is difficult to secure a network with static protection technologies such as firewalls alone. As a dynamic security protection technology, intrusion detection can effectively perceive various attack behaviors in a network by monitoring network data in real time and can provide response decisions for security administrators, which makes it highly significant for active defense in cyberspace.
Existing intrusion detection methods are mainly anomaly-based machine learning methods, but a single traditional machine learning model suffers from low accuracy and a high false alarm rate and cannot effectively cope with complex and changing attack patterns, so ensemble learning has gradually been introduced into intrusion detection. However, in the integration stage, existing ensemble learning methods adopt majority voting with fixed vote counts or fixed base-classifier weights, so classifiers with fixed large weights dominate the decision. To address this, existing research provides dynamic weights for the base classifiers of the ensemble, for example by first clustering the training samples and then generating dynamic weights according to each base classifier's fitness on each cluster, or by selecting the neighbors of a test sample with a K-nearest-neighbor algorithm, evaluating the local performance of each base learner on those neighbors, and dynamically determining each base learner's weight. However, existing dynamic weight generation methods generally target attack types or test-sample types, so the generated weights cannot sufficiently reflect the characteristics of an individual sample; moreover, they generate dynamic weights only for the current data set, so there is no knowledge association between old and new dynamic weights, which ultimately degrades detection performance. In addition, some application-layer attacks imitate normal network behavior, so the statistical features of the attack traffic are highly similar to those of normal traffic; existing ensemble learning intrusion detection methods learn only the original features of the data set in their feature engineering and cannot effectively detect such attacks, which reduces the usability of the model.
In summary, existing ensemble learning intrusion detection methods have the following problems: 1. dynamic weights are generated for sample types rather than for individual input samples, and the accumulation and reuse of historical knowledge is not considered, so the generated weights are neither accurate nor sustainable and detection precision drops; 2. learning is performed only on same-level traffic statistical features and protocol features, so highly concealed application-layer attacks cannot be effectively detected and usability drops. The invention therefore provides an incremental dynamic-weight ensemble learning intrusion detection method fusing multilevel features.
Disclosure of Invention
The invention aims to provide an incremental dynamic-weight ensemble learning intrusion detection method fusing multilevel features, addressing the problems that single-level features cannot detect highly concealed attacks in ensemble learning intrusion detection and that dynamic weight generation is inaccurate and not sustainable, which degrades detection performance.
The design principle of the invention is as follows: first, reconstruct the topological features of traffic through a mixed time window and establish sensitive data flow-direction features for the traffic within the window; second, process the topological features with an attention mechanism to generate feature vectors; then train primary classifiers and a deep belief network, adding historical knowledge during deep belief network training; finally, generate dynamic weights with the deep belief network to construct a training set and train a secondary classifier.
The technical scheme of the invention is realized by the following steps:
step 1, reconstructing topological characteristics of flow through a mixed time window, and establishing sensitive data flow direction characteristics for the flow in the time window.
Step 1.1, selecting the n records preceding the input data S in time order to form a temporary subset S'.
Step 1.2, calculating the time difference between the input data S and each record in the temporary subset S', and adding the record to the topology-construction subset S when the time difference is smaller than the maximum time difference Δt, so as to construct the directed-graph topological features.
Step 1.3, arranging the port numbers involved in the sample traffic to construct the data flow-direction feature, initialized to 0.
Step 1.4, checking whether the feature indicating that traffic in S accesses sensitive system files and directories is greater than 0, and if so, setting the corresponding flow-direction feature value to 1.
Step 2, processing the topological features based on the attention mechanism to generate feature vectors.
Step 2.1, calculating the feature similarity coefficients between each node and its neighbor nodes one by one, and normalizing them to obtain the attention coefficients.
Step 2.2, performing graph embedding on the topological features in directed-graph form by using a multi-head attention mechanism combined with the attention coefficients, to obtain the traffic topological neighborhood features in vector form.
Step 2.3, concatenating the original features, the traffic topological features, and the sensitive data flow-direction features into one vector and reducing its dimensionality to obtain the final feature vector.
Step 3, training the primary classifiers and the deep belief network, and adding historical knowledge during deep belief network training.
Step 3.1, randomly dividing the original training set into two equal parts, denoted training set A and training set B, generating m subsets from training set A by random sampling without replacement, and training m primary classifiers with the m subsets.
Step 3.2, classifying training set B with the m trained primary classifiers, each sample obtaining an m-dimensional prediction probability vector {p_i1, p_i2, …, p_im} from the m primary classifiers.
Step 3.3, constructing a label for each sample according to its classification correctness and prediction probabilities, and training the deep belief network on the samples and the constructed label values.
Step 3.4, in the update stage, first transferring the parameters of the old deep belief network D to the new deep belief network D', dividing the new sample set (the increment set) into A' and B' as in step 3.1 and training a new group of primary classifiers, feeding increment set B' into the old deep belief network D to obtain m-dimensional correct-classification probabilities, constructing label values from these probabilities, increment set B', and the new primary classifier group, and training the new deep belief network D'.
Step 4, generating dynamic weights with the deep belief network to construct a training set and train the secondary classifier.
Step 4.1, the deep belief network generates per-sample dynamic weights for the primary classifiers; the outputs of the primary classifiers are weighted, and the weighted outputs together with the original sample labels form the training set of the secondary classifier.
Step 4.2, training the secondary classifier with this training set.
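For orientation, the following minimal sketch shows how a single test sample would pass through the trained components of steps 1-4 at inference time. It assumes scikit-learn-style estimators with predict/predict_proba methods, and the names classify, primary_clfs, weight_net, and secondary_clf are illustrative rather than taken from the patent.

```python
# Minimal inference sketch for one sample whose feature vector (steps 1-2)
# has already been extracted; component names are illustrative.
import numpy as np

def classify(x, primary_clfs, weight_net, secondary_clf):
    x = np.asarray(x, dtype=float).reshape(1, -1)
    # m primary outputs p_i1..p_im (here: probability of each classifier's predicted class)
    p = np.array([clf.predict_proba(x).max() for clf in primary_clfs])
    # per-sample dynamic weights P_i1..P_im from the (stand-in) weight network
    w = np.asarray(weight_net.predict(x)).ravel()
    # the secondary classifier decides on the weighted primary outputs
    return secondary_clf.predict((w * p).reshape(1, -1))[0]
```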
Advantageous effects
Compared with traditional ensemble learning intrusion detection methods, the method fuses traffic topological features with sensitive data flow-direction features and generates dynamic weights precisely for the characteristics of each individual input sample while reusing historical knowledge. The multilevel features represent traffic behavior more comprehensively, the incremental dynamic weights accurately and stably mitigate the influence of misclassifications by the base classifiers, and the detection performance of the model is thereby effectively improved.
Drawings
FIG. 1 is a schematic diagram of the incremental dynamic-weight ensemble learning intrusion detection method fusing multilevel features according to the present invention.
Detailed Description
To better illustrate the objects and advantages of the present invention, embodiments of the method of the present invention are described in further detail below with reference to examples.
The UNSW-NB15 data set is selected as the experimental data. The data set contains 9 attack types; 4 application-layer attack categories (Analysis, Backdoors, Reconnaissance, and Exploits) together with benign samples are selected to form the experimental data set, which effectively verifies the method's ability to detect highly concealed application-layer attacks. The training set and test set are split at a ratio of 2:1.
The results are evaluated with Precision, Recall, F1-Measure, and Accuracy. Because the experiment is a multi-class classification task, the standard formulas are generalized with the arithmetic mean; the calculations are shown in formulas (1) to (4):
Precision = (1/N) Σ_{i=1..N} tp_i / (tp_i + fp_i)    (1)
Recall = (1/N) Σ_{i=1..N} tp_i / (tp_i + fn_i)    (2)
F1 = (1/N) Σ_{i=1..N} 2·tp_i / (2·tp_i + fp_i + fn_i)    (3)
Accuracy = (1/N) Σ_{i=1..N} (tp_i + tn_i) / (tp_i + fp_i + tn_i + fn_i)    (4)
where N is the total number of classes, tp_i is the number of class-i samples correctly judged as class i, fp_i is the number of samples of other classes wrongly judged as class i, fn_i is the number of class-i samples wrongly judged as other classes, and tn_i is the number of samples of other classes correctly judged as not belonging to class i.
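As an illustration of formulas (1)-(4), the following NumPy sketch computes the four macro-averaged metrics from a confusion matrix; the function name and the confusion-matrix convention are ours, not the patent's.

```python
# Macro-averaged Precision, Recall, F1 and Accuracy from a confusion matrix.
import numpy as np

def macro_metrics(conf):
    """conf[i, j] = number of samples of true class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp        # other classes judged as class i
    fn = conf.sum(axis=1) - tp        # class i judged as other classes
    tn = total - tp - fp - fn         # everything else
    precision = np.mean(tp / (tp + fp))                      # formula (1)
    recall = np.mean(tp / (tp + fn))                         # formula (2)
    f1 = np.mean(2 * tp / (2 * tp + fp + fn))                # formula (3)
    accuracy = np.mean((tp + tn) / (tp + fp + tn + fn))      # formula (4)
    return precision, recall, f1, accuracy
```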
The experiments are carried out on a desktop computer and a GPU server. The desktop is configured with an eight-core Intel(R) Core(TM) i9-9900 CPU at 3.10 GHz, 32 GB RAM, and 64-bit Windows 10; the server is configured with a GTX 1080Ti GPU, 256 GB RAM, and 64-bit Ubuntu Linux.
The specific process is as follows:
step 1, reconstructing topological characteristics of flow through a mixed time window, and establishing sensitive data flow direction characteristics for the flow in the time window.
Step 1.1, selecting the n records preceding the input record s_i in time order to form a temporary subset S'_i.
Step 1.2, calculating the time difference Δt_ij = t_i − t_j between the input record s_i and each record in the temporary subset S'_i, and adding the record to the topology-construction subset S_i when Δt_ij is smaller than the maximum time difference Δt. Based on the input record s_i, the topology construction module builds two topological features in directed-graph form: a single-source traffic topology restricted to flows sharing the source IP of s_i, and a multi-source traffic topology that does not restrict the source IP.
Step 1.3, arranging the port numbers involved in the sample traffic to construct the data flow-direction feature, in the form {Port_23 → Port_23, Port_23 → Port_25, Port_23 → Port_80, …}, where the left side of each arrow is the source port of the packet that accesses the sensitive data and the right side is the destination port from which the sensitive data is eventually sent out; the feature vector is initialized to the zero vector.
Step 1.4, checking whether the feature indicating that traffic in S_i accesses sensitive system files and directories is greater than 0; if so, recording the source port, marking the sensitive data, tracing it to obtain the destination port, and setting the corresponding flow-direction feature value to 1.
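The following sketch illustrates step 1 under assumed flow-record fields (timestamp, src_ip, dst_ip, src_port, dst_port, sensitive_access) and illustrative window parameters; it is only a plausible reading of steps 1.1-1.4, not the patent's implementation.

```python
# Mixed time window and sensitive data flow-direction feature (step 1 sketch).
import networkx as nx

def mixed_time_window(records, i, n=50, max_dt=60.0):
    """Steps 1.1-1.2: previous n records, kept only if within the max time difference."""
    s_i = records[i]
    window = [r for r in records[max(0, i - n):i]
              if s_i["timestamp"] - r["timestamp"] < max_dt]
    single_src = nx.DiGraph()                      # flows sharing s_i's source IP
    multi_src = nx.DiGraph()                       # all flows in the window
    for r in window + [s_i]:
        multi_src.add_edge(r["src_ip"], r["dst_ip"])
        if r["src_ip"] == s_i["src_ip"]:
            single_src.add_edge(r["src_ip"], r["dst_ip"])
    return window, single_src, multi_src

def flow_direction_feature(window, port_pairs):
    """Steps 1.3-1.4: 0/1 vector over (source port -> destination port) pairs."""
    feat = dict.fromkeys(port_pairs, 0)
    for r in window:
        if r.get("sensitive_access", 0) > 0:       # flow touched a sensitive file/directory
            pair = (r["src_port"], r["dst_port"])
            if pair in feat:
                feat[pair] = 1                     # mark the sensitive data flow direction
    return [feat[p] for p in port_pairs]
```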
Step 2, processing the topological features based on the attention mechanism to generate feature vectors.
Step 2.1, calculating the feature similarity coefficient e_ij = a([Wh_i || Wh_j]) between each node and its neighbor nodes one by one. Here W is a weight matrix that maps the attributes of node i and node j to a higher-dimensional space for feature enhancement, and a([·||·]) concatenates the mapped high-dimensional features of nodes i and j and maps them to a real number with a single-layer feedforward neural network. The similarity coefficients e_ij are normalized with the softmax function to obtain the attention coefficients α_ij, see formula (5).
α_ij = exp(e_ij) / Σ_{k∈N_i} exp(e_ik)    (5)
where N_i denotes the set of neighbor nodes of node i.
Step 2.2, when performing graph embedding on the directed graph, calculating the attention coefficients of the K heads with a multi-head attention mechanism, as shown in formula (6), to obtain the traffic topological neighborhood features in vector form.
h'_i = ∥_{k=1..K} σ( Σ_{j∈N_i} α_ij^k W^k h_j )    (6)
where ∥ denotes concatenation over the K attention heads, α_ij^k and W^k are the attention coefficient and weight matrix of the k-th head, and σ is a nonlinear activation function.
Step 2.3, concatenating the original features, the single-source traffic topological features, the multi-source traffic topological features, and the sensitive data flow-direction features into one vector, and reducing the dimensionality of the high-dimensional feature with principal component analysis to obtain the final feature vector.
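A minimal NumPy sketch of the attention fusion in step 2 (formulas (5)-(6)) is given below for a single directed graph; the weight matrices are randomly initialized purely for illustration (in the method they would be learned), and tanh stands in for the nonlinearity σ.

```python
# Multi-head graph attention embedding over a directed adjacency matrix (sketch).
import numpy as np

def gat_embed(H, adj, K=4, d_out=8, seed=0):
    """H: (N, d_in) node features; adj: (N, N) 0/1 directed adjacency matrix."""
    rng = np.random.default_rng(seed)
    N, d_in = H.shape
    heads = []
    for _ in range(K):                                   # K attention heads
        W = rng.normal(size=(d_in, d_out))
        a = rng.normal(size=(2 * d_out,))
        Z = H @ W                                        # W h_i for every node
        # e_ij = a([W h_i || W h_j]), decomposed into two dot products
        e = (Z @ a[:d_out])[:, None] + (Z @ a[d_out:])[None, :]
        e = np.where(adj > 0, e, -1e9)                   # attend only to neighbours
        alpha = np.exp(e - e.max(axis=1, keepdims=True)) * (adj > 0)   # formula (5)
        alpha = alpha / np.clip(alpha.sum(axis=1, keepdims=True), 1e-12, None)
        heads.append(np.tanh(alpha @ Z))                 # neighbourhood aggregation
    return np.concatenate(heads, axis=1)                 # concatenate K heads, formula (6)
```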
Step 3, training the primary classifiers and the deep belief network, and adding historical knowledge during deep belief network training.
Step 3.1, randomly dividing the original training set into two equal parts, denoted training set A and training set B, generating m subsets from training set A by random sampling without replacement, and training m primary classifiers with the m subsets. Each primary classifier is a multilayer perceptron with a single hidden layer, trained with the back-propagation algorithm.
Step 3.2, classifying training set B with the m trained primary classifiers, each sample i obtaining an m-dimensional prediction probability vector {p_i1, p_i2, …, p_im} from the m primary classifiers; for example, primary classifier j classifies sample i and yields the prediction probability p_ij.
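The following scikit-learn sketch approximates steps 3.1-3.2, with single-hidden-layer MLPClassifier instances standing in for the primary classifiers; the split ratio, subset size, and hyperparameters are illustrative, and taking p_ij as the probability of each classifier's predicted class is one possible reading. X and y are assumed to be NumPy arrays.

```python
# Primary classifier group trained on subsets of training set A (sketch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train_primary(X, y, m=5, subset_frac=0.6, seed=0):
    XA, XB, yA, yB = train_test_split(X, y, test_size=0.5, random_state=seed)
    rng = np.random.default_rng(seed)
    clfs = []
    for k in range(m):
        # random sampling without replacement from training set A
        idx = rng.choice(len(XA), size=int(subset_frac * len(XA)), replace=False)
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=k)
        clfs.append(clf.fit(XA[idx], yA[idx]))
    # per-sample m-dimensional prediction probability vector {p_i1, ..., p_im} on B
    P = np.column_stack([clf.predict_proba(XB).max(axis=1) for clf in clfs])
    preds = np.column_stack([clf.predict(XB) for clf in clfs])
    return clfs, XB, yB, P, preds
```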
Step 3.3, training the deep belief network.
Step 3.3.1, retaining only the samples correctly classified by at least one primary classifier to form a set C, and constructing an m-dimensional vector label for each sample in C according to the following rule: the label vector of sample i is initialized to its m-dimensional prediction probability vector {p_i1, p_i2, …, p_im}; if sample i is correctly classified by primary classifier j, the corresponding value p_ij is retained, otherwise it is set to 0.
Step 3.3.2, using the constructed sample set as the training set of the deep belief network, so that for each input sample the deep belief network can accurately provide a corresponding dynamic weight for each of the m primary classifiers; the aim is to supply, for that particular sample, each primary classifier's probability of classifying it correctly.
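Continuing the previous sketch, the label construction of step 3.3 and a stand-in for the deep belief network could look as follows; the MLPRegressor is used only for illustration, since the patent itself trains a DBN.

```python
# Build the m-dimensional label vectors and fit a per-sample weight network (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_weight_training_set(XB, yB, P, preds):
    correct = (preds == yB[:, None])          # which primary classifiers were right per sample
    keep = correct.any(axis=1)                # set C: correct for at least one classifier
    labels = np.where(correct, P, 0.0)        # keep p_ij if classifier j was correct, else 0
    return XB[keep], labels[keep]

def train_weight_network(XC, labels):
    net = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
    return net.fit(XC, labels)                # multi-output regression onto the m-dim labels
```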
Step 3.4, when the classifier needs to be trained on and predict a new data set, performing incremental learning by updating the deep belief network.
Step 3.4.1, transferring the parameters of the old deep belief network D to the new deep belief network D', so that D' can continue training on the basis of D, which improves training efficiency.
Step 3.4.2, dividing the new sample set (the increment set) into increment sets A' and B' as in step 3.1, training a new group of primary classifiers on increment set A', and constructing new m-dimensional vector labels from the new primary classifier group and increment set B', following the same rules as in steps 3.2 and 3.3.1.
Step 3.4.3, feeding increment set B' into the old deep belief network D to obtain m-dimensional correct-classification probability vectors {p'_i1, p'_i2, …, p'_im}, summing these with the new m-dimensional vector labels generated in step 3.4.2, and normalizing the result to form a new training set for training the deep belief network D'.
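A sketch of the incremental update in steps 3.4.1-3.4.3, reusing the stand-ins above, is shown below; interpreting "summing and normalizing" as row-wise normalization of the summed vectors is our assumption.

```python
# Incremental update of the weight network: inherit D's parameters, blend old knowledge.
import copy
import numpy as np

def incremental_update(old_net, XB_new, yB_new, P_new, preds_new):
    # step 3.4.2: new m-dimensional label vectors on increment set B'
    correct = (preds_new == yB_new[:, None])
    keep = correct.any(axis=1)
    X_C = XB_new[keep]
    new_labels = np.where(correct, P_new, 0.0)[keep]
    # step 3.4.3: old network D provides correct-classification probabilities {p'_i1..p'_im}
    old_probs = np.asarray(old_net.predict(X_C))
    blended = new_labels + old_probs                              # sum old knowledge and new labels
    blended = blended / np.clip(blended.sum(axis=1, keepdims=True), 1e-12, None)
    # step 3.4.1: D' starts from D's parameters and continues training from them
    new_net = copy.deepcopy(old_net)
    new_net.warm_start = True
    return new_net.fit(X_C, blended)
```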
Step 4, generating dynamic weights with the deep belief network to construct a training set and train the secondary classifier.
Step 4.1, inputting sample i into the deep belief network to obtain the m-dimensional weight vector {P_i1, P_i2, …, P_im}, and inputting it into primary classifier j to obtain the prediction probability p_ij; the weighted values {P_i1·p_i1, P_i2·p_i2, …, P_im·p_im} serve as the feature values of the i-th sample in the secondary classifier's training set, and the label of the original sample is kept as its label.
Step 4.2, training the secondary classifier with this training set. The secondary classifier is also a multilayer perceptron with a single hidden layer, and training uses ten-fold cross-validation.
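Finally, a sketch of steps 4.1-4.2: the per-sample dynamic weights multiply the primary outputs, and the secondary MLP is evaluated with ten-fold cross-validation before the final fit; hyperparameters are illustrative.

```python
# Secondary classifier trained on dynamically weighted primary outputs (sketch).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def train_secondary(weight_net, primary_clfs, X, y):
    P = np.column_stack([c.predict_proba(X).max(axis=1) for c in primary_clfs])   # p_ij
    W = np.asarray(weight_net.predict(X))                                         # dynamic weights P_ij
    X_meta = W * P                                                                # weighted primary outputs
    meta = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    scores = cross_val_score(meta, X_meta, y, cv=10)                              # ten-fold cross-validation
    return meta.fit(X_meta, y), scores
```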
Test results: in the experiment, the incremental dynamic-weight ensemble learning intrusion detection method fusing multilevel features is used to classify the samples of the UNSW-NB15 intrusion detection data set and is compared with three traditional ensemble learning methods. The results show that the proposed method effectively improves the detection performance of the intrusion detection classifier, as shown in Table 1.
Table 1. Test results of the incremental dynamic-weight ensemble learning intrusion detection method fusing multilevel features
The above detailed description is intended to illustrate the objects, technical solutions, and advantages of the present invention. It should be understood that the above is only a specific embodiment of the present invention and is not intended to limit the scope of protection; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (4)

1. An incremental dynamic-weight ensemble learning intrusion detection method fusing multilevel features, characterized by comprising the following steps:
step 1, reconstructing topological features of traffic through a mixed time window, and establishing sensitive data flow-direction features for the traffic within the time window;
step 2, processing the topological features based on an attention mechanism to generate feature vectors: first, calculating feature similarity coefficients between each node and its neighbor nodes one by one and normalizing them to obtain attention coefficients; then performing graph embedding on the topological features in directed-graph form by using a multi-head attention mechanism combined with the attention coefficients, to obtain traffic topological neighborhood features in vector form; and finally concatenating the original features, the traffic topological features, and the sensitive data flow-direction features and reducing the dimensionality to obtain the final feature vectors;
step 3, training primary classifiers and a deep belief network, and adding historical knowledge during deep belief network training;
step 3.1, first randomly dividing the original training set into two equal parts, denoted training set A and training set B, generating m subsets from training set A by random sampling without replacement, and training m primary classifiers with the m subsets; then classifying training set B with the m trained primary classifiers, each sample i obtaining an m-dimensional prediction probability vector {p_i1, p_i2, …, p_im} from the m primary classifiers; and constructing a label for each sample according to its classification correctness and prediction probabilities, and training the deep belief network on the samples and the constructed label values;
step 3.2, in the update stage, first transferring the parameters of the old deep belief network D to the new deep belief network D', dividing the new sample set (the increment set) into A' and B' as in step 3.1 and training a new group of primary classifiers, feeding increment set B' into the old deep belief network D to obtain m-dimensional correct-classification probabilities, constructing label values from these probabilities, increment set B', and the new primary classifier group, and training the new deep belief network D';
step 4, generating dynamic weights with the deep belief network to construct a training set and train a secondary classifier: first, the deep belief network generates per-sample dynamic weights for the primary classifiers and the outputs of the primary classifiers are weighted; then the weighted outputs and the original sample labels form the training set of the secondary classifier, with which the secondary classifier is trained.
2. The method according to claim 1, characterized in that: in step 1, a mixed time window is used to reconstruct the topological features of the traffic, namely the n records preceding the input record S in time order are selected to form a temporary subset S'; the time difference between the input record S and each record in the temporary subset S' is computed, and when the time difference is smaller than the maximum time difference Δt the record is added to the topology-construction subset S, from which directed-graph topological features are reconstructed; the established sensitive data flow-direction feature has the form {Port_23 → Port_23, Port_23 → Port_25, Port_23 → Port_80, …}, where the left side of each arrow is the source port of the traffic packet that accesses the sensitive data and the right side is the destination port from which the sensitive data is eventually sent out; the feature vector is initialized to the zero vector, and the corresponding feature value is set to 1 according to whether the traffic accesses sensitive files and directories, with marking and tracing.
3. The method according to claim 1, characterized in that: in step 3, the deep belief network training set is constructed as follows: only the samples correctly classified by at least one primary classifier are retained to form a set C, and an m-dimensional vector label is constructed for each sample in C according to the following rule: the label vector of sample i is initialized to its m-dimensional prediction probability vector {p_i1, p_i2, …, p_im}; if sample i is correctly classified by primary classifier j, the corresponding value p_ij is retained, otherwise it is set to 0.
4. The method according to claim 1, characterized in that: in step 3, incremental learning is performed by updating the deep belief network: first, the parameters of the old deep belief network D are transferred to the new deep belief network D', the new sample set (the increment set) is divided into increment sets A' and B' as in step 3.1, and a new group of primary classifiers is trained on increment set A'; then new m-dimensional vector labels are constructed from the new primary classifier group and increment set B'; finally, increment set B' is fed into the old deep belief network D to obtain correct-classification probability vectors {p'_i1, p'_i2, …, p'_im}, which are summed with the new m-dimensional vector labels and normalized to form a new training set for training the deep belief network D'.
CN202210534998.8A 2022-05-17 2022-05-17 Incremental dynamic weight integrated learning intrusion detection method integrating multistage features Active CN115118450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210534998.8A CN115118450B (en) 2022-05-17 2022-05-17 Incremental dynamic weight integrated learning intrusion detection method integrating multistage features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210534998.8A CN115118450B (en) 2022-05-17 2022-05-17 Incremental dynamic weight integrated learning intrusion detection method integrating multistage features

Publications (2)

Publication Number Publication Date
CN115118450A true CN115118450A (en) 2022-09-27
CN115118450B CN115118450B (en) 2024-01-05

Family

ID=83326485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210534998.8A Active CN115118450B (en) 2022-05-17 2022-05-17 Incremental dynamic weight integrated learning intrusion detection method integrating multistage features

Country Status (1)

Country Link
CN (1) CN115118450B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101582813A (en) * 2009-06-26 2009-11-18 西安电子科技大学 Distributed migration network learning-based intrusion detection system and method thereof
CN108023876A (en) * 2017-11-20 2018-05-11 西安电子科技大学 Intrusion detection method and intruding detection system based on sustainability integrated study
CN108093406A (en) * 2017-11-29 2018-05-29 重庆邮电大学 A kind of wireless sense network intrusion detection method based on integrated study
WO2020143226A1 (en) * 2019-01-07 2020-07-16 浙江大学 Industrial control system intrusion detection method based on integrated learning
CN113922985A (en) * 2021-09-03 2022-01-11 西南科技大学 Network intrusion detection method and system based on ensemble learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴佳洁 (Wu Jiajie) et al.: "Anomaly Detection and Localization Algorithm Based on TCN and Attention Mechanism" (基于TCN和注意力机制的异常检测和定位算法), 信息网络安全 (Netinfo Security), vol. 21, no. 11

Also Published As

Publication number Publication date
CN115118450B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
CN101582813B (en) Distributed migration network learning-based intrusion detection system and method thereof
Chang et al. Intrusion detection by backpropagation neural networks with sample-query and attribute-query
Yang et al. Detecting stealthy domain generation algorithms using heterogeneous deep neural network framework
CN113010895B (en) Vulnerability hazard assessment method based on deep learning
CN113254930B (en) Back door confrontation sample generation method of PE (provider edge) malicious software detection model
CN112948578B (en) DGA domain name open set classification method, device, electronic equipment and medium
Einy et al. Network Intrusion Detection System Based on the Combination of Multiobjective Particle Swarm Algorithm‐Based Feature Selection and Fast‐Learning Network
CN115277189B (en) Unsupervised intrusion flow detection and identification method based on generation type countermeasure network
Zhang et al. A scalable network intrusion detection system towards detecting, discovering, and learning unknown attacks
CN113343123B (en) Training method and detection method for generating confrontation multiple relation graph network
Malik et al. Performance evaluation of classification algorithms for intrusion detection on nsl-kdd using rapid miner
Ghosh et al. A cloud intrusion detection system using novel PRFCM clustering and KNN based dempster-shafer rule
CN116796326B (en) SQL injection detection method
Coli et al. DDoS attacks detection in the IoT using deep gaussian-bernoulli restricted boltzmann machine
CN115118450B (en) Incremental dynamic weight integrated learning intrusion detection method integrating multistage features
Sujana et al. Temporal based network packet anomaly detection using machine learning
Liang et al. Automatic security classification based on incremental learning and similarity comparison
Naoum et al. Hybrid system of learning vector quantization and enhanced resilient backpropagation artificial neural network for intrusion classification
Dai et al. Balancing Robustness and Covertness in NLP Model Watermarking: A Multi-Task Learning Approach
Behjat et al. Feature subset selection using binary gravitational search algorithm for intrusion detection system
Alrawashdeh et al. Optimizing Deep Learning Based Intrusion Detection Systems Defense Against White-Box and Backdoor Adversarial Attacks Through a Genetic Algorithm
Xie et al. Research and application of intrusion detection method based on hierarchical features
Gurumurthy et al. Hybrid pigeon inspired optimizer-gray wolf optimization for network intrusion detection
Ye et al. Intrusion detection model based on conditional generative adversarial networks
Xu et al. [Retracted] IoT‐Oriented Distributed Intrusion Detection Methods Using Intelligent Classification Algorithms in Spark

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant