CN117675351A - Abnormal flow detection method and system based on BERT model - Google Patents


Info

Publication number
CN117675351A
CN117675351A
Authority
CN
China
Prior art keywords
training
model
feature vector
expert
bert
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311663784.1A
Other languages
Chinese (zh)
Inventor
朱俊芳
郭超
韦崴
宋文芳
张招亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Electronics Industry Engineering Co ltd
Original Assignee
China Electronics Industry Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Electronics Industry Engineering Co ltd filed Critical China Electronics Industry Engineering Co ltd
Priority to CN202311663784.1A
Publication of CN117675351A
Legal status: Pending

Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02D — Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/50 — Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides an abnormal traffic detection method and system based on a BERT model. The detection method comprises the following steps: step S1: acquiring a training data source; step S2: preprocessing the input training data source; step S3: training a BERT pre-training model to generate a first feature vector; step S4: training a multi-layer perceptron model to generate a second feature vector; step S5: concatenating and flattening the first feature vector and the second feature vector to generate a combined feature vector; step S6: training an MoE mixture-of-experts model to obtain an anomaly detection result. At prediction time, the trained BERT pre-training model, multi-layer perceptron model and MoE mixture-of-experts model are used together to detect abnormal traffic, improving detection efficiency and accuracy without requiring expert assistance during use.

Description

Abnormal flow detection method and system based on BERT model
Technical Field
The invention relates to the technical field of computer network security, and in particular to an abnormal traffic detection method and system based on a BERT model.
Background
While providing social benefits, the Internet also brings numerous security challenges and network threats. Network threats often use the large volume of network traffic generated by Internet users as a carrier, and a successful attack can cause serious consequences. Threat detection based on network traffic can mount a defense at the first stage of an attack and is one of the main approaches used by network threat detection systems.
Traditional network traffic anomaly detection methods fall mainly into rule-based methods and feature-engineering-based methods. Rule-based detection can only match rules generated for known intrusion behaviors and cannot detect newly emerging, unknown threats. Machine-learning-based detection requires a manually constructed, limited feature set, and the effectiveness of the model depends heavily on feature quality, so intruders are hard to detect once they change their attack strategy. Existing threat detection techniques therefore have insufficient capability to detect network traffic anomalies, and their detection results have low precision.
Disclosure of Invention
In view of the foregoing, the present invention provides a BERT-model-based abnormal traffic detection method and system that overcome, or at least partially solve, the foregoing problems.
According to one aspect of the present invention, there is provided an abnormal traffic detection method based on a BERT model, the detection method comprising:
step S1: acquiring a training data source;
step S2: preprocessing the input training data source;
step S3: training a BERT pre-training model to generate a first feature vector;
step S4: training a multi-layer perceptron model to generate a second feature vector;
step S5: concatenating and flattening the first feature vector and the second feature vector to generate a combined feature vector;
step S6: training an MoE mixture-of-experts model to obtain an anomaly detection result.
Optionally, step S1, acquiring a training data source, specifically comprises:
acquiring a traffic training data set and a log label file, wherein the traffic training data is captured real-time network traffic, and the log label file comprises traffic five-tuple information and corresponding attack types.
Optionally, step S2, preprocessing the input training data source, comprises: flow segmentation, traffic cleaning and label association, to obtain a session flow packet sequence and a session flow label.
Optionally, step S3, training the BERT pre-training model to generate the first feature vector, specifically comprises:
generating a session flow embedding vector from the session flow packet sequence, wherein the session flow embedding vector comprises a token embedding, a position embedding and a segment embedding;
feeding the session flow embedding vector into a BERT pre-training model for unsupervised training to obtain the trained BERT pre-training model;
taking the output vector of the trained BERT pre-training model as the first feature vector.
Optionally, step S4, training the multi-layer perceptron model to generate the second feature vector, specifically comprises:
acquiring session flow feature information from the session flow packet sequence;
normalizing the extracted session flow feature information;
feeding the session flow feature information and the session flow label into a multi-layer perceptron model for supervised training to obtain the trained multi-layer perceptron model;
outputting the intermediate-layer vector of the trained multi-layer perceptron model as the second feature vector.
Optionally, step S6, training the MoE mixture-of-experts model to obtain the anomaly detection result, specifically comprises:
building the MoE mixture-of-experts model, wherein the MoE mixture-of-experts model comprises expert networks and a gating network, each expert network is a multi-layer perceptron with ReLU activation, and the gating network determines the weight of each expert model;
training the MoE mixture-of-experts model, including gating network training and expert model training;
in the training of the gating network, adjusting the parameters of the gating network by minimizing the error between the predicted output and the real labels.
Optionally, the gating network and the expert networks comprise multi-layer perceptrons, with different parameters for each task.
Optionally, in the training of the expert models, parameter optimization is performed using a training algorithm corresponding to the type of each expert model.
The invention also provides an abnormal traffic detection system based on the BERT model, applying the abnormal traffic detection method described above. The detection system comprises:
an original traffic acquisition module, used to acquire a training data source;
a data preprocessing module, used to preprocess the input training data source;
an abnormal traffic detection module, used to train the BERT pre-training model to generate a first feature vector, train the multi-layer perceptron model to generate a second feature vector, and concatenate and flatten the first feature vector and the second feature vector to generate a combined feature vector;
and a detection result generation module, used to train the MoE mixture-of-experts model to obtain an anomaly detection result.
The invention thus provides an abnormal traffic detection method and system based on a BERT model, the detection method comprising: step S1: acquiring a training data source; step S2: preprocessing the input training data source; step S3: training a BERT pre-training model to generate a first feature vector; step S4: training a multi-layer perceptron model to generate a second feature vector; step S5: concatenating and flattening the first feature vector and the second feature vector to generate a combined feature vector; step S6: training an MoE mixture-of-experts model to obtain an anomaly detection result. At prediction time, the trained BERT pre-training model, multi-layer perceptron model and MoE mixture-of-experts model are used together to detect abnormal traffic, improving detection efficiency and accuracy without requiring expert assistance during use.
The foregoing is merely an overview of the technical solution of the present invention. To make the technical means of the invention clear enough to be implemented in accordance with the description, and to make the above and other objects, features and advantages of the invention more readily apparent, specific embodiments of the invention are set forth below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is an overall block diagram of a BERT model-based abnormal flow detection method according to an exemplary embodiment of the present invention;
FIG. 2 is a block diagram of a BERT pre-training model shown in accordance with an exemplary embodiment;
FIG. 3 is a block diagram of a multi-layer perceptron model, shown in accordance with an exemplary embodiment;
FIG. 4 is a block diagram of a MoE hybrid expert model, shown in accordance with an exemplary embodiment;
FIG. 5 is a block diagram illustrating an abnormal traffic detection system based on the BERT model, according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating an abnormal traffic detection device based on the BERT model, according to an exemplary embodiment.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terms "comprising" and "having", and any variations thereof, in the description, claims and drawings of the embodiments of the invention are intended to cover a non-exclusive inclusion, for example of a series of steps or elements.
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings and the examples.
As shown in FIG. 1, step S1 acquires a training data source, comprising a traffic training data set and a log label file, where the traffic training data is captured real-time network traffic and the log label file comprises traffic five-tuple information and the corresponding attack types.
In this embodiment, the training network traffic data set is labeled with reference to CSE-CIC-IDS2018 and is traffic generated by actual network transmissions; these traffic training data serve as training samples for model training. The log label file contains the flow five-tuple information and the corresponding attack types.
S2, the input training data set is preprocessed, including flow segmentation, traffic cleaning and label association, to obtain a session flow packet sequence and a session flow label.
S21, flow segmentation splits the traffic packets into flow and session units according to the five-tuple, and reassembles messages belonging to the same session according to timestamp information to form a complete network session flow; each session flow comprises a forward byte sequence and a reverse byte sequence.
S22, the five-tuple information of the session flow is associated and fused with the five-tuple information of the log label file to obtain the label class corresponding to the session flow.
In this embodiment, the SplitCap tool is used to segment the training data set, and the lengths of the forward and reverse byte sequences are set to 768.
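The grouping and labeling in S21–S22 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the packet field names (`src_ip`, `ts`, etc.) and the "benign" default label are assumptions.

```python
from collections import defaultdict

def five_tuple(pkt):
    # Field names are illustrative assumptions, not from the patent.
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"], pkt["proto"])

def reverse(key):
    return (key[1], key[0], key[3], key[2], key[4])

def split_sessions(packets, labels):
    """S21/S22 sketch: group packets into session flows by five-tuple
    (direction-insensitive), reorder each flow by timestamp, and attach
    the attack label found for that five-tuple in the log label file."""
    sessions = defaultdict(list)
    for pkt in packets:
        key = five_tuple(pkt)
        sessions[min(key, reverse(key))].append(pkt)  # same key for both directions
    out = []
    for key, pkts in sessions.items():
        pkts.sort(key=lambda p: p["ts"])              # reassemble by timestamp
        label = labels.get(key, labels.get(reverse(key), "benign"))
        out.append({"key": key, "packets": pkts, "label": label})
    return out
```

A real implementation would also split the reassembled session into the forward and reverse byte sequences; that step is omitted here.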
S3, a BERT pre-training model is trained; this model generates feature vector 1 (see FIG. 2). The step comprises:
S31, generating a session flow token sequence from the bytes of the session flow packet sequence, and then generating the corresponding session flow embedding vector from the token sequence as the input vector of the BERT pre-training model.
The session flow token sequence is obtained by converting the hexadecimal session flow bytes into a token representation for pre-training. In this embodiment, each token unit ranges from 0 to 65535, i.e., the dictionary size |V| is 65536. The special tokens [CLS], [SEP], [PAD] and [MASK] are added for the training tasks: each sequence starts with [CLS]; [PAD] is a padding symbol used to meet the minimum length requirement; the forward and reverse byte sequences are separated by [SEP]; and [MASK] appears during pre-training so that the model learns the context of the traffic.
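The token construction can be sketched as below. The 0–65535 range implies two raw bytes per token; the concrete ids of the special tokens, the minimum length, and the trailing [SEP] after the reverse sequence are assumptions made for this illustration.

```python
# Assumed ids for the special tokens; the patent gives |V| = 65536 byte-pair
# tokens (0..65535) but does not specify the special-token ids.
CLS, SEP, PAD, MASK = 65536, 65537, 65538, 65539

def tokenize_session(fwd_bytes, rev_bytes, min_len=8):
    """Build a session-flow token sequence: two raw bytes per token (0..65535),
    [CLS] first, [SEP] separating the forward and reverse byte sequences,
    [PAD] appended up to a minimum length (the value 8 here is illustrative)."""
    def to_tokens(data):
        if len(data) % 2:                 # pad an odd-length sequence with one zero byte
            data = data + b"\x00"
        return [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]
    seq = [CLS] + to_tokens(fwd_bytes) + [SEP] + to_tokens(rev_bytes) + [SEP]
    seq += [PAD] * max(0, min_len - len(seq))
    return seq
```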
The session flow embedding vector is obtained by computing a token embedding, a position embedding and a segment embedding from the session flow token sequence and summing them.
Token embedding: each token is converted into a corresponding word vector, i.e., a fixed-length vector representation (for example via a pre-trained word vector model such as Word2Vec or GloVe).
Position embedding: a fixed-length vector encoding the position of the token in the flow byte sequence.
Segment embedding: indicates whether the token belongs to forward flow A or reverse flow B.
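The lookup-and-sum of the three embeddings can be sketched as follows, with toy dimensions and randomly initialized tables standing in for learned embeddings (the embodiment uses dimension 512 and sequences of 768 tokens):

```python
import numpy as np

rng = np.random.default_rng(0)
V, MAX_LEN, D = 65540, 16, 8             # toy sizes; the embodiment uses dim 512
tok_emb = rng.normal(size=(V, D))        # token embedding table (one row per token id)
pos_emb = rng.normal(size=(MAX_LEN, D))  # position embeddings
seg_emb = rng.normal(size=(2, D))        # segment A (forward) / segment B (reverse)

def embed(tokens, segments):
    """BERT-style input embedding: token + position + segment, summed per position."""
    n = len(tokens)
    return tok_emb[tokens] + pos_emb[:n] + seg_emb[segments]

x = embed([0, 17, 3], [0, 0, 1])         # a 3-token toy sequence
```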
S32, the session flow embedding vector is fed into the BERT pre-training model for unsupervised training, yielding the trained BERT pre-training model.
The BERT pre-training model is a multi-layer bidirectional Transformer encoder; self-supervised learning on unlabeled traffic yields a general packet-level traffic representation.
In this embodiment, the Transformer architecture of the pre-training model uses a 12-layer encoder; the dimension of each input token is set to 512, the number of input tokens is 768, and all sub-layers and embedding layers in the model produce outputs of dimension 512.
The pre-training model is trained through two unsupervised pre-training tasks: a masked-token prediction task captures the contextual relationships between traffic bytes, and a homology prediction task learns to recognize segments of the same session flow.
The masked-token prediction task randomly masks a proportion of the input tokens, replaces them with the special token [MASK], and then predicts the tokens at the masked positions. A cross-entropy loss is used:

L_MLM = -Σ_i log P(T̂_i = T_i)

where T_i is the i-th masked token and P(T̂_i = T_i) is the probability that the Transformer encoder's output assigns to T_i at that position.
In this embodiment, a pre-training model is implemented using a tensor2tensor library. The proportion of the masked input tokens is set to 15%.
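The input side of the masked-token task can be sketched as below. This is simplified: the original BERT recipe also sometimes keeps or randomly replaces the selected tokens, which is omitted here, and the [MASK] id is an assumption.

```python
import random

MASK = 65539  # assumed id for the [MASK] special token

def mask_tokens(tokens, ratio=0.15, seed=0):
    """Simplified masked-token task input: pick ~15% of positions, record the
    original tokens as prediction targets, and replace them with [MASK]."""
    rng = random.Random(seed)
    tokens = list(tokens)
    n_mask = max(1, int(len(tokens) * ratio))
    positions = rng.sample(range(len(tokens)), n_mask)
    targets = {p: tokens[p] for p in positions}   # what the model must predict
    for p in positions:
        tokens[p] = MASK
    return tokens, targets
```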
The homology prediction task captures the association between different packets by predicting whether two segments belong to the same session flow: when segments A and B are selected, 50% of the time B is the actual next segment of A, and 50% of the time B is a random segment from another session flow. Cross entropy is again used as the loss:

L_SSP = -Σ_y log P(Ŝ_y = S_y)

where S_y is the label of the y-th segment pair in the training set (1 if the two segments belong to the same session flow, 0 otherwise) and P(Ŝ_y = S_y) is the predicted probability of that label.
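The 50/50 pair construction for the homology task can be sketched as follows; the data layout (a list of session flows, each a list of segments) is an assumption for illustration.

```python
import random

def make_homology_pairs(flows, seed=0):
    """Build training pairs for the homology prediction task: for each segment
    A, 50% of the time B is A's true next segment in the same session flow
    (label 1), otherwise B is a random segment from another flow (label 0)."""
    rng = random.Random(seed)
    pairs = []
    for i, segs in enumerate(flows):
        for j in range(len(segs) - 1):
            if rng.random() < 0.5:
                pairs.append((segs[j], segs[j + 1], 1))          # true next segment
            else:
                other = rng.choice([k for k in range(len(flows)) if k != i])
                pairs.append((segs[j], rng.choice(flows[other]), 0))  # random segment
    return pairs
```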
S33, the output vector of the trained BERT pre-training model is taken as feature vector 1.
S4, a multi-layer perceptron model is trained; this model generates feature vector 2 (see FIG. 3). The step comprises:
S41, acquiring session flow feature information from the session flow packet sequence;
in this embodiment, the CICFlowMeter-V3 parsing tool is used to extract features and statistical information in the network traffic, and 80 features are extracted in total, including a transport layer feature, an application layer feature, a network behavior feature, and the like. Typical characteristics include:
the traffic duration, the total number of forward packets, the total size of reverse packets, the maximum size of reverse packets, the average size of reverse packets, the standard deviation size of reverse packets, the average time between adjacent packets of a forward stream, the maximum time between adjacent packets of a reverse stream, the number of times PSH flags are set in forward stream packets (UDP is 0), the number of times URG flags are set in forward stream packets (UDP is 0), the total number of bytes of reverse stream header, the total size of reverse packets, the maximum size of reverse packets, the minimum size of reverse packets, the average size of reverse packets, the standard deviation size of reverse packets, the average time between adjacent packets of a reverse stream, the maximum time between adjacent packets of a reverse stream, the number of times PSH flags are set in reverse stream packets (UDP is 0), the number of times URG flags are set in reverse stream packets (UDP is 0), the total byte rate of reverse stream header, the total byte rate of reverse stream, i.e., number of packets transmitted per second), stream packet rate (stream packet rate, i.e., number of packets transmitted per second), average time of forward and reverse flows, standard deviation of forward and reverse flows, maximum time of forward and reverse flows, number of packets with at least 1 byte of TCP data payload in forward direction, number of bytes sent backward in the initial window.
S42, normalizing the extracted session stream characteristic information;
in this example, the normalization of features was performed using standard deviation normalization Z-score Normalization.
S43, the session flow feature information and the session flow label are fed into a multi-layer perceptron model for supervised training, yielding the trained multi-layer perceptron model.
The MLP model is built from multiple neuron layers: an input layer, several fully connected hidden layers, and an output layer. Neurons in adjacent layers are fully connected, the hidden-layer outputs are transformed by a Sigmoid activation function, and the classifier is implemented with softmax.
The MLP model is trained on the labeled sample data via the forward- and back-propagation algorithms, iteratively optimizing the model parameters until the model converges; after training, the last hidden layer of the MLP model provides the output feature vector.
In this embodiment, TensorFlow is used for training, and an MLP with multiple hidden layers is constructed.
The input layer has 80 neurons (the total number of features); the 1st hidden layer is 128-dimensional, the 2nd hidden layer is 64-dimensional, and the 3rd hidden layer is 16-dimensional; the output layer dimension is the number of classes, 2.
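The architecture can be sketched as a plain NumPy forward pass with the stated layer widths. The weights here are random stand-ins for trained parameters, so this shows only the shape of the computation, including how the 16-dimensional last hidden layer is exposed as feature vector 2:

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [80, 128, 64, 16, 2]  # 80 input features -> hidden 128/64/16 -> 2 classes
Ws = [rng.normal(scale=0.1, size=(i, o)) for i, o in zip(dims[:-1], dims[1:])]
bs = [np.zeros(o) for o in dims[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x):
    """Forward pass: Sigmoid hidden layers, softmax output. Returns the class
    probabilities and the last hidden activation, i.e. the 16-dim feature vector 2."""
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = sigmoid(h @ W + b)
    logits = h @ Ws[-1] + bs[-1]
    p = np.exp(logits - logits.max())     # numerically stable softmax
    return p / p.sum(), h

probs, feat2 = mlp_forward(rng.normal(size=80))
```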
S44, the intermediate-layer vector of the trained multi-layer perceptron model is output as feature vector 2.
S5, feature vector 1 and feature vector 2 are concatenated and flattened to generate a combined feature vector, which serves as the input of the MoE mixture-of-experts model.
S6, an MoE mixture-of-experts model is trained to obtain the final anomaly detection result (see FIG. 4).
The combined feature vector and the session flow label are fed into the MoE mixture-of-experts model for supervised training, yielding the trained MoE mixture-of-experts model.
S61, the MoE mixture-of-experts model is built; it consists of expert networks and a gating network, where each expert network is a multi-layer perceptron with ReLU activation and the gating network determines the weight of each expert model. The gating network and the expert networks are implemented as multi-layer perceptrons, with different parameters for each task.
The combined feature vector is fed into each expert network for modeling, giving the expert outputs e_1(x), e_2(x), …, e_n(x).
In this embodiment, n = 4 is used.
The combined feature vector is also fed into the gating network for modeling, giving the probability (i.e., the weight) with which each expert network is selected: w = g_1(x), g_2(x), …, g_n(x).
The outputs of the expert networks are summed, weighted by these probabilities, to form the final output.
S62, the MoE mixture-of-experts model is trained, which includes gating network training and expert model training. In the training of the gating network, the parameters of the gating network are adjusted by minimizing the error between the predicted output and the real labels. In the training of the expert models, parameter optimization is performed using a training algorithm corresponding to the type of each expert model.
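The MoE forward computation of S61 can be sketched as follows, with toy dimensions and random weights standing in for trained parameters (the embodiment specifies n = 4 experts; the single-linear-layer gate and expert widths here are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, N_CLASSES = 24, 4, 2  # toy input dim; the embodiment uses n = 4 experts

def relu(z):
    return np.maximum(z, 0.0)

# Each expert: a small ReLU MLP producing class scores.
experts = [(rng.normal(scale=0.1, size=(D, 8)), rng.normal(scale=0.1, size=(8, N_CLASSES)))
           for _ in range(N_EXPERTS)]
Wg = rng.normal(scale=0.1, size=(D, N_EXPERTS))  # gating network (one linear layer here)

def moe_forward(x):
    """The gating network assigns a softmax weight to each expert; the model
    output is the weighted sum of the expert outputs."""
    scores = x @ Wg
    g = np.exp(scores - scores.max())
    w = g / g.sum()                                   # expert weights, sum to 1
    outs = np.stack([relu(x @ W1) @ W2 for W1, W2 in experts])
    return w @ outs, w                                # (N_CLASSES,), (N_EXPERTS,)

y, w = moe_forward(rng.normal(size=D))
```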
As shown in fig. 5, fig. 5 is a block diagram schematically illustrating an abnormal flow detection apparatus based on a BERT model according to an exemplary embodiment, and as shown in fig. 5, the abnormal flow detection apparatus 5 includes: an original flow acquisition module 51, a data preprocessing module 52, and an abnormal flow detection module 53.
The original flow acquisition module 51 is configured to implement real-time access of network flow data acquired by an access point;
the data preprocessing module 52 is used for preprocessing operations such as network session stream restoration and flow cleaning of the network flow data;
the abnormal flow detection module 53 is configured to receive and process the data, send the data to a trained abnormal flow detection model, and calculate an abnormal label result of the flow.
As shown in FIG. 6, the present application provides an abnormal traffic detection device 6, comprising: a memory 61, a processor 62, and a computer program stored on the memory 61 and runnable on the processor 62; the processor 62 implements the above-described method when executing the computer program.
Beneficial effects: the core models comprise a BERT pre-training model, a multi-layer perceptron model and an MoE mixture-of-experts model. At prediction time, the trained BERT pre-training model, multi-layer perceptron model and MoE mixture-of-experts model are used together for abnormal traffic detection, which improves detection efficiency and accuracy without requiring expert assistance during use.
The BERT pre-training model learns deep contextual, datagram-level traffic representations from large-scale unlabeled data; as a general pre-trained model, it can also be applied to other fields such as encrypted traffic classification and app identification.
The BERT pre-training model and the multi-layer perceptron model can adopt a parallel distributed computing mode, so that the utilization rate of computing resources is improved.
The foregoing detailed description of the invention has been presented for purposes of illustration and description, and it should be understood that the invention is not limited to the particular embodiments disclosed, but is intended to cover all modifications, equivalents, alternatives, and improvements within the spirit and principles of the invention.

Claims (9)

1. An abnormal traffic detection method based on a BERT model, characterized by comprising the following steps:
step S1: acquiring a training data source;
step S2: preprocessing the input training data source;
step S3: training a BERT pre-training model to generate a first feature vector;
step S4: training a multi-layer perceptron model to generate a second feature vector;
step S5: concatenating and flattening the first feature vector and the second feature vector to generate a combined feature vector;
step S6: training an MoE mixture-of-experts model to obtain an anomaly detection result.
2. The abnormal traffic detection method based on the BERT model according to claim 1, wherein step S1, acquiring a training data source, specifically comprises:
acquiring a traffic training data set and a log label file, wherein the traffic training data is captured real-time network traffic, and the log label file comprises traffic five-tuple information and corresponding attack types.
3. The abnormal traffic detection method based on the BERT model according to claim 1, wherein step S2, preprocessing the input training data source, comprises: flow segmentation, traffic cleaning and label association, to obtain a session flow packet sequence and a session flow label.
4. The abnormal traffic detection method based on the BERT model according to claim 2, wherein step S3, training the BERT pre-training model to generate the first feature vector, specifically comprises:
generating a session flow embedding vector from the session flow packet sequence, wherein the session flow embedding vector comprises a token embedding, a position embedding and a segment embedding;
feeding the session flow embedding vector into a BERT pre-training model for unsupervised training to obtain the trained BERT pre-training model;
taking the output vector of the trained BERT pre-training model as the first feature vector.
5. The abnormal traffic detection method based on the BERT model according to claim 2, wherein step S4, training the multi-layer perceptron model to generate the second feature vector, specifically comprises:
acquiring session flow feature information from the session flow packet sequence;
normalizing the extracted session flow feature information;
feeding the session flow feature information and the session flow label into a multi-layer perceptron model for supervised training to obtain the trained multi-layer perceptron model;
outputting the intermediate-layer vector of the trained multi-layer perceptron model as the second feature vector.
6. The abnormal traffic detection method based on the BERT model according to claim 1, wherein step S6, training the MoE mixture-of-experts model to obtain the anomaly detection result, specifically comprises:
building the MoE mixture-of-experts model, wherein the MoE mixture-of-experts model comprises expert networks and a gating network, each expert network is a multi-layer perceptron with ReLU activation, and the gating network determines the weight of each expert model;
training the MoE mixture-of-experts model, including gating network training and expert model training;
in the training of the gating network, adjusting the parameters of the gating network by minimizing the error between the predicted output and the real labels.
7. The BERT-model-based abnormal traffic detection method of claim 6, wherein the gating network and the expert networks comprise multi-layer perceptrons, with different parameters for each task.
8. The BERT-model-based abnormal traffic detection method of claim 6, wherein, in the training of the expert models, parameter optimization is performed using a training algorithm corresponding to the type of each expert model.
9. An abnormal traffic detection system based on a BERT model, applying the abnormal traffic detection method based on the BERT model according to any one of claims 1-8, wherein the detection system comprises:
an original traffic acquisition module, used to acquire a training data source;
a data preprocessing module, used to preprocess the input training data source;
an abnormal traffic detection module, used to train the BERT pre-training model to generate a first feature vector, train the multi-layer perceptron model to generate a second feature vector, and concatenate and flatten the first feature vector and the second feature vector to generate a combined feature vector;
and a detection result generation module, used to train the MoE mixture-of-experts model to obtain an anomaly detection result.
CN202311663784.1A 2023-12-06 2023-12-06 Abnormal flow detection method and system based on BERT model Pending CN117675351A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311663784.1A CN117675351A (en) 2023-12-06 2023-12-06 Abnormal flow detection method and system based on BERT model


Publications (1)

Publication Number Publication Date
CN117675351A true CN117675351A (en) 2024-03-08

Family

ID=90070946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311663784.1A Pending CN117675351A (en) 2023-12-06 2023-12-06 Abnormal flow detection method and system based on BERT model

Country Status (1)

Country Link
CN (1) CN117675351A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination