CN116668198B - Flow playback test method, device, equipment and medium based on deep learning - Google Patents

Flow playback test method, device, equipment and medium based on deep learning

Info

Publication number
CN116668198B
CN116668198B (application CN202310943173.6A)
Authority
CN
China
Prior art keywords
data
model
output
network
flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310943173.6A
Other languages
Chinese (zh)
Other versions
CN116668198A (en)
Inventor
阮峰
牛云峰
张文鹏
张英男
李佳崇
宋智强
彭天洋
雷威华
王沈意
许小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Zhengfeng Information Technology Co ltd
Original Assignee
Nanjing Zhengfeng Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhengfeng Information Technology Co ltd filed Critical Nanjing Zhengfeng Information Technology Co ltd
Priority to CN202310943173.6A priority Critical patent/CN116668198B/en
Publication of CN116668198A publication Critical patent/CN116668198A/en
Application granted granted Critical
Publication of CN116668198B publication Critical patent/CN116668198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • G06F18/15Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425Traffic logging, e.g. anomaly detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1433Vulnerability analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40Network security protocols
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Probability & Statistics with Applications (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)
  • Debugging And Monitoring (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a flow playback test method, device, equipment and readable storage medium based on deep learning, wherein the method comprises the following steps: collecting network traffic data while an application program runs, and preprocessing the traffic data; classifying and verifying the network traffic data with a convolutional neural network CNN to distinguish normal from abnormal traffic and determine the key directions for analysis and testing; modeling and predicting the traffic data sequence with a long short-term memory model LSTM to detect abnormal behaviors and security vulnerabilities in the network; and introducing a multi-task learning model MTL-ATT with an attention mechanism, in which the CNN and LSTM share model parameters, improving the model's generalization capability and testing efficiency. The application replays application network traffic data using deep learning algorithms to simulate and reproduce how users exercise the application in the real world, better evaluate the application's performance, quality and stability, and provide more accurate suggestions and solutions.

Description

Flow playback test method, device, equipment and medium based on deep learning
Technical Field
The application relates to the technical field of deep learning, in particular to a flow playback testing method, device, equipment and medium based on deep learning.
Background
With the rapid development of the internet, network security threats are increasing, and traditional security defenses can no longer meet current security demands; more intelligent, adaptive security technologies must be adopted. Flow playback is a technique for reproducing real network traffic and can be used for performance and security testing of network systems: by simulating real network traffic, it comprehensively evaluates a system's performance and security and uncovers potential problems and vulnerabilities in the system.
Existing flow playback techniques record all collected traffic data and transmit the recordings to a test platform for playback testing; because the volume of traffic data used for playback testing is large, test efficiency is low. As network systems continue to develop and evolve, traffic-based playback testing keeps advancing and has become an important means of testing and evaluating network systems.
Disclosure of Invention
The application addresses this by providing a flow playback test method, device, equipment and medium based on deep learning, which combine deep learning with flow playback to achieve efficient, accurate and automated traffic analysis and security assessment.
The application adopts the following technical scheme:
a flow playback test method based on deep learning comprises the following four steps:
s1, collecting network flow data in the running process of an application program, and preprocessing the collected network flow data in the running process of the application program, wherein the preprocessing comprises data cleaning, normalization and feature extraction operations;
s2, classifying and verifying network flow data of an application program by using a convolutional neural network CNN, distinguishing normal and abnormal flows, and determining analysis and testing key directions;
s3, modeling and predicting a flow data sequence by using a long-short-term memory model LSTM, detecting abnormal behaviors and security holes in a network, using the established characteristic representation model for flow playback test, and evaluating the security performance of a network system;
s4, introducing a multi-task learning model MTL-ATT with an attention mechanism, sharing model parameters by an LSTM model and a CNN, learning a general feature representation, and completing flow data classification, sequence modeling and prediction.
Specifically, in step S1, preprocessing the network traffic data collected while the application program runs, by means of data cleaning, normalization and feature extraction operations, comprises the following sub-steps:
S1.1, data cleaning: removing noise, missing values and outliers from the network traffic data, wherein the data cleaning methods comprise filtering, interpolation and outlier detection techniques;
s1.2, normalization: in the network flow data, the value ranges of different flow data are different, and the data are scaled to the same range by using a normalization method, wherein the method comprises the following steps: min-max normalization or z-score normalization, in particular, for eigenvalues x,
min-max normalized Min-Max Normalization is converted using the following equation (1):
wherein x' is a normalized eigenvalue, min (x) and max (x) are the minimum and maximum values of the eigenvalue x, respectively, and the min-max normalization normalizes the data to between 0 and 1;
the z-score standardization converts x using the following equation (2):

x' = (x - mean(x)) / std(x)   (2)

wherein x' is the normalized feature value, and mean(x) and std(x) are the mean and standard deviation of the feature value x, respectively; z-score standardization rescales the feature to a mean of 0 and a standard deviation of 1;
s1.3, feature extraction:
extracting features from network traffic data, comprising: the size of the data packet, the time stamp, the protocol type, the source IP address and the destination IP address;
the method for extracting and using the characteristics comprises the following steps: fourier transform, wavelet transform and discrete cosine transform methods;
extracting features from the frequency domain of the network traffic data, comprising: spectrum, power spectrum and phase spectrum, wherein frequency domain features are features extracted from signal frequency components.
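By way of illustration only (a minimal sketch, not part of the claimed method; the function names and sample packet sizes are assumptions), the two normalizations of equations (1) and (2) can be written as:

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Equation (1): scale feature values to the range [0, 1]."""
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min + 1e-12)  # epsilon guards constant features

def z_score_normalize(x: np.ndarray) -> np.ndarray:
    """Equation (2): rescale feature values to zero mean, unit standard deviation."""
    return (x - x.mean()) / (x.std() + 1e-12)

# Hypothetical packet-size samples taken from captured application traffic
packet_sizes = np.array([64.0, 1500.0, 576.0, 128.0, 1400.0])
print(min_max_normalize(packet_sizes))
print(z_score_normalize(packet_sizes))
```

Min-max normalization suits features whose values are closely distributed; z-score standardization suits features whose values are widely dispersed.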
Further, in step S2, the application program's network traffic data are classified and verified by using a convolutional neural network CNN, where the CNN consists of an input layer, a convolutional layer, a pooling layer, a fully connected layer and an output layer.
Specifically, the convolutional layer analyzes each small patch of the input to obtain features of a higher level of abstraction, in the form of the following equation (3):

x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b^l )   (3)

wherein l is the current layer, f is the ReLU activation function, M_j is the convolution window corresponding to the j-th convolution kernel, x_i^{l-1} is the input matrix from layer l-1, k_{ij}^l is the convolution kernel matrix of layer l, and b is the bias of the current layer;
the pooling layer reduces the number of nodes in the final fully connected layer and thereby the number of parameters in the whole neural network, in the form of the following equation (4):

x_j^l = f( β_j^l · down(x_j^{l-1}) + b_j^l )   (4)

wherein down(·) is a subsampling function that weights and sums each n×n region of the input image, so the output image becomes 1/n × 1/n of the input image's size, and β is the multiplicative network parameter of the output feature map;
the fully connected layer sits at the end of the convolutional neural network model and computes the network's final output; for the classification task a classifier is trained in this layer, specifically: the learned high-level features serve as the classifier's input and its output is the classification result, whereby the network traffic data are classified, normal and abnormal traffic are distinguished, and the key directions for analysis and testing are determined.
In particular, the training process of the CNN model consists of forward propagation and backpropagation through the network: forward propagation produces the predicted values, and backpropagation updates the variables and corrects the model's parameters, as follows:
the forward propagation of the convolutional layer takes the form of the following equation (5):

x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b^l )   (5)

wherein the symbols are as defined for equation (3);
in the backpropagation algorithm, the overall cost function is defined as the following equation (6):

E_n = -(1/n) Σ_j [ t_j ln y_j + (1 - t_j) ln(1 - y_j) ] + (λ/2n) Σ_ω ω²   (6)

wherein the first term is the cross-entropy expression, with y_j the predicted value and t_j the corresponding label; the second term is a regularization term over the weights ω, which uses the regularization parameter λ to quantitatively adjust the sum of squared weights;
the adjustment direction of the weight parameters is given by the following equation (7):

ω ← ω - η ∂E_n/∂ω   (7)

wherein E_n is the overall cost function, ω is a weight, and η is the learning rate;
the adjustment direction of the bias parameters is given by equation (8):

b ← b - η ∂E_n/∂b   (8)
further, in step S3, modeling and predicting the traffic data sequence by using the long-short-term memory LSTM, and detecting abnormal behaviors and security holes in the network, specifically:
the LSTM network comprises an input gate, a forgetting gate, an output gate and a memory cell, wherein the input gate, the forgetting gate and the output gate are processed according to a sigmoid function, the memory cell is processed according to a tanh function, each gate is provided with a weight matrix, a mode of input data is learned, and the mode is stored in the memory cell, so that a characteristic representation model for processing time sequence data characteristics is constructed, and the specific algorithm is as follows:
the input gate controls the flow of input information; its calculation formula is the following equation (9):

i_t = σ(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} C_{t-1} + b_i)   (9)

wherein i_t denotes the output of the input gate at time t, x_t the input at time t, h_{t-1} the hidden-layer output at the previous time, and C_{t-1} the memory-cell output at the previous time; W_{xi} denotes the input weights of the input gate, W_{hi} its output-layer weights, W_{ci} its memory-cell weights, and b_i its bias term; σ is the sigmoid function;
the forget gate controls the flow of information out of the memory cells; its calculation formula is the following equation (10):

f_t = σ(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} C_{t-1} + b_f)   (10)

wherein f_t denotes the output of the forget gate at time t, W_{xf} the input weights of the forget gate, W_{hf} its output-layer weights, W_{cf} its memory-cell weights, and b_f its bias term;
the memory cell stores information; its calculation formula is the following equation (11):

C_t = f_t × C_{t-1} + i_t tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)   (11)

wherein C_t is the output of the memory cell at time t, W_{xc} denotes the memory-cell input weights, W_{hc} its output-layer weights, and b_c its bias term;
the output gate controls the flow of the memory cell's output; its calculation formula is the following equation (12):

o_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} C_t + b_o)   (12)

wherein o_t denotes the output of the output gate, W_{xo} the input weights of the output gate, W_{ho} its output-layer weights, and b_o its bias term;
the output of the LSTM network is obtained from the memory cells and the output gate; its calculation formula is the following equation (13):

h_t = o_t × tanh(C_{t-1})   (13)

wherein h_t denotes the hidden-layer output at time t.
Finally, the strategy of a multi-task learning model MTL-ATT with an attention mechanism is used, in which several tasks share one model. Specifically, CNN- and LSTM-based multi-task learning is used jointly as a multi-input multi-output model: the CNN extracts features of image-like data, the LSTM processes features of time-series data, and the two tasks share some parameters, including the weight parameters of the convolutional layer and the recurrent layer. The CNN and LSTM jointly learn a general feature representation, the number of tasks is increased to improve the model's performance and robustness, and traffic data classification, sequence modeling and prediction are completed.
In particular, the application also provides a flow playback testing device based on deep learning, which comprises:
the data acquisition unit is used for acquiring network flow data in the running process of the application program;
the data processing unit is used for modeling and analyzing the traffic data sequence, and covers data preprocessing as well as the analysis and construction of the convolutional neural network CNN and the long short-term memory model LSTM: preprocessing the network traffic data collected while the application program runs, including data cleaning, normalization and feature extraction operations; classifying and verifying the application program's network traffic data with the CNN, distinguishing normal from abnormal traffic, and determining the key directions for analysis and testing; and modeling and predicting the traffic data sequence with the LSTM to detect abnormal behaviors and security vulnerabilities in the network;
the playback test unit is used for applying the established feature representation model to flow playback testing and evaluating the security performance of the network system, using the strategy of the multi-task learning model MTL-ATT with an attention mechanism, in which several tasks share one model: specifically, CNN- and LSTM-based multi-task learning serves jointly as a multi-input multi-output model, the CNN extracting features of image-like data and the LSTM processing features of time-series data; the two tasks share parameters, the CNN and LSTM jointly learn a general feature representation, the number of tasks is increased to improve the model's performance and robustness, and traffic data classification, sequence modeling and prediction are completed.
The present application also provides a computer device comprising: one or more processors, memory, and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the steps in the deep learning based flow playback test method described above.
The present application also includes a computer readable storage medium having stored thereon a computer program loaded by a processor to perform the steps of the deep learning based flow playback testing method described above.
Compared with the prior art, the technical scheme provided by the application has the following technical effects:
1: improved test efficiency and accuracy: the application uses deep learning to analyze traffic and extract features, which automates the testing process and improves both test efficiency and accuracy.
2: multi-task learning: the application adopts the MTL-ATT technique to realize multi-task learning, so several test tasks can be learned and predicted simultaneously, improving test efficiency and quality.
3: improved test coverage: the application uses flow playback to simulate real network traffic, covering more test scenarios and test cases and improving test coverage and effectiveness.
4: improved usability of test data: the application uses the LSTM technique to process the test data sequentially, improving the usability and expressiveness of the test data and thus the test effect and accuracy.
Drawings
FIG. 1 is a flow chart of a flow playback testing method based on deep learning.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the application are further elaborated below in conjunction with the accompanying drawings; the described embodiments are only a part of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of the application.
The application provides a flow playback test method based on deep learning, shown in FIG. 1, which comprises the following four steps: S1, collecting network traffic data while an application program runs, and preprocessing the collected traffic data, wherein the preprocessing comprises data cleaning, normalization and feature extraction operations;
s2, classifying and verifying network flow data of an application program by using a convolutional neural network CNN to distinguish normal flow from abnormal flow, determining the key direction of analysis and test, and rapidly screening out potential performance and quality problems, accelerating test flow and improving test efficiency by the operations;
s3, modeling and predicting a flow data sequence by using a long-short-term memory model LSTM, detecting abnormal behaviors and security holes in a network, using the established characteristic representation model for flow playback test, and evaluating the security performance of a network system;
s4, introducing a multi-task learning model MTL-ATT with an attention mechanism, sharing model parameters by an LSTM model and a CNN, and learning a general feature representation to improve generalization capability and test efficiency of the model, wherein training time and test cost of the model can be reduced by sharing the model parameters in similar tasks, and meanwhile accuracy and robustness of the model are improved, and flow data classification, sequence modeling and prediction are completed.
In particular, in one embodiment of the application, after the network traffic data produced while the application program runs have been collected, the data must be preprocessed, including data cleaning, normalization and feature extraction operations. These operations improve the model's performance and accuracy for subsequent classification, modeling and prediction, and include the following sub-steps:
s1.1, data cleaning: during the data acquisition process, problems such as noise, missing values, outliers and the like may occur, and these problems may affect the performance and accuracy of the model. Therefore, it is necessary to clean the data before preprocessing, and to remove data such as noise, missing values, and abnormal values, by using techniques such as filters, interpolation, and abnormal value detection.
S1.2, normalization: normalization is an important operation in data preprocessing that can scale data to the same extent for better comparison and classification. In the network traffic data, the value ranges of different traffic data may be different, so that normalization operation is required, and normalization is implemented by using methods such as minimum-maximum normalization, z-score normalization and the like.
Min-max normalization (Min-Max Normalization): min-max normalization is a simple method that scales data to the range between 0 and 1. Specifically, a feature value x is converted using:

x' = (x - min(x)) / (max(x) - min(x))

where x' is the normalized feature value, and min(x) and max(x) are the minimum and maximum of the feature value x. Min-max normalization maps the data to between 0 and 1 and suits data whose feature values are closely distributed.
Z-score standardization (Standardization): z-score standardization is a common method that rescales feature values to a mean of 0 and a standard deviation of 1. Specifically, a feature value x is converted using:

x' = (x - mean(x)) / std(x)

where x' is the normalized feature value, and mean(x) and std(x) are the mean and standard deviation of the feature value x. Z-score standardization gives the data zero mean and unit standard deviation and suits data whose feature values are widely dispersed.
S1.3, feature extraction: before classification, modeling and prediction, features must be extracted from the network traffic data. Feature extraction converts raw data into feature vectors usable for classification, modeling and prediction. In network traffic data, the extractable features include packet size, timestamp, protocol type, source IP address, destination IP address, and so on. Feature extraction is implemented with frequency-domain methods including the Fourier transform, wavelet transform and discrete cosine transform, and the features extracted from the frequency domain of the traffic include the spectrum, power spectrum and phase spectrum. Frequency-domain features capture useful information from the signal's frequency components and therefore achieve high accuracy in network traffic classification and identification.
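As a minimal illustrative sketch of this frequency-domain extraction (not part of the claimed method; the sampling scheme and values are assumptions), the spectrum, power spectrum and phase spectrum can be obtained with a discrete Fourier transform:

```python
import numpy as np

def frequency_domain_features(signal: np.ndarray) -> dict:
    """Spectrum, power spectrum and phase spectrum of a traffic time series."""
    fft = np.fft.rfft(signal)            # one-sided discrete Fourier transform
    spectrum = np.abs(fft)               # magnitude spectrum
    power = spectrum ** 2 / len(signal)  # power spectrum
    phase = np.angle(fft)                # phase spectrum
    return {"spectrum": spectrum, "power": power, "phase": phase}

# Hypothetical per-interval byte counts sampled from captured traffic
byte_counts = np.array([512.0, 498.0, 1400.0, 60.0, 1380.0, 75.0, 1500.0, 64.0])
features = frequency_domain_features(byte_counts)
print(features["power"])
```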
Then, in one embodiment of the application, the convolutional neural network (CNN) is used to classify and verify the application program's network traffic data, distinguishing normal from abnormal traffic and determining the key directions for analysis and testing. Through these operations, potential performance and quality problems can be rapidly screened out, the test flow is accelerated and test efficiency is improved. The basic structure of the CNN consists of an input layer, a convolutional layer, a pooling layer, a fully connected layer and an output layer.
The convolutional layer is the most important part of a convolutional neural network. It consists of several feature maps, each composed of many neurons. The input of each node in the convolutional layer is only a small patch of the previous layer; the patch's length and width are specified manually, and the weights applied to it are called the convolution kernel, also known as a filter. The convolutional layer analyzes each such patch in depth to obtain more abstract features, in the following form:

x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b^l )

wherein l is the current layer, f is the ReLU activation function, M_j is the convolution window corresponding to the j-th convolution kernel, x_i^{l-1} is the input matrix from layer l-1, k_{ij}^l is the convolution kernel matrix of layer l, and b is the bias of the current layer;
in particular, the activation function may also use sigmoid, tanh functions.
The pooling layer, also called the subsampling layer, likewise consists of several feature maps and immediately follows a convolutional layer, each of its feature maps corresponding uniquely to one feature map of the previous layer. The pooling layer does not change the number of feature maps of the previous layer but reduces the size of each feature map; it thereby further reduces the number of nodes in the final fully connected layer, and with it the number of parameters in the whole neural network. The pooling layer has the form:

x_j^l = f( β_j^l · down(x_j^{l-1}) + b_j^l )

where down(·) is a subsampling function, typically a weighted sum over each n×n region of the input image, so that the output image becomes 1/n × 1/n of the input image's size; each output feature map has its own multiplicative network parameter β and bias b.
The fully connected layer is usually located at the end of the convolutional neural network model and computes the network's final output. In a classification task, a classifier is trained in this layer: the learned high-level features serve as the classifier's input, and its output is the classification result. The CNN model thus classifies the network traffic data, distinguishes normal from abnormal traffic, and determines the key directions for analysis and testing; through these operations, potential performance and quality problems can be rapidly screened out, the test flow is accelerated and test efficiency is improved.
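A hedged sketch of such a network (a minimal PyTorch illustration under assumed layer sizes and a two-class output, not the patent's reference implementation):

```python
import torch
import torch.nn as nn

class TrafficCNN(nn.Module):
    """Input -> convolution -> pooling -> fully connected -> output."""
    def __init__(self, n_features: int = 64, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),                                   # f: ReLU activation
            nn.MaxPool1d(2),                             # pooling (subsampling) layer
        )
        self.fc = nn.Linear(16 * (n_features // 2), n_classes)  # classifier head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_features), one preprocessed traffic feature vector per sample
        h = self.conv(x)
        return self.fc(h.flatten(1))  # logits: normal vs. abnormal traffic

model = TrafficCNN()
logits = model(torch.randn(8, 1, 64))  # 8 hypothetical traffic samples
print(logits.shape)                    # torch.Size([8, 2])
```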
Specifically, the training process of the CNN model mainly comprises forward propagation and backward propagation of the network, wherein the forward propagation obtains a predicted value, and the backward propagation is used for updating a variable and correcting a parameter of the model.
The forward propagation of a convolutional neural network is the same as that of an ordinary neural network. The forward propagation of the convolutional layer takes the form:

x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b^l )

where the activation function f is preferably the ReLU function.
First, in the backpropagation algorithm, the overall cost function used in this embodiment is defined as:

E_n = -(1/n) Σ_j [ t_j ln y_j + (1 - t_j) ln(1 - y_j) ] + (λ/2n) Σ_ω ω²

where the first term on the right is the conventional cross-entropy expression and the second term is a regularization term that uses the regularization parameter λ to quantitatively adjust the sum of squared weights; in this embodiment the regularization parameter is preferably 0.01.
The back propagation algorithm optimizes the values of parameters in the convolutional neural network according to the defined loss function, so that the loss function of the convolutional neural network model on the training data set reaches a smaller value.
The adjustment direction of the weight parameters is:

ω ← ω - η ∂E_n/∂ω

and the adjustment direction of the bias parameters is:

b ← b - η ∂E_n/∂b

where η is the learning rate; the learning rate preferably used in this embodiment is 0.5.
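This training procedure can be sketched as follows (a PyTorch illustration reusing the TrafficCNN sketch above; the SGD weight_decay argument stands in for the λ regularization term, and the data loader is a hypothetical stand-in):

```python
import torch
import torch.nn as nn

model = TrafficCNN()                           # sketch defined earlier
criterion = nn.CrossEntropyLoss()              # cross-entropy term of E_n
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.5,            # learning rate η of this embodiment
                            weight_decay=0.01) # regularization parameter of this embodiment

# Hypothetical stand-in for a DataLoader of labeled traffic feature vectors
loader = [(torch.randn(8, 1, 64), torch.randint(0, 2, (8,))) for _ in range(10)]

for features, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(features), labels)  # forward propagation -> predicted values
    loss.backward()                            # backpropagation computes the gradients
    optimizer.step()                           # w <- w - lr*dE/dw ; b <- b - lr*dE/db
```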
Further, in a preferred embodiment of the present application, in order to improve the effect and efficiency of flow playback, the long short-term memory model (LSTM) is used to model and predict the traffic data sequence and detect abnormal behaviors and security vulnerabilities in the network; finally, the established feature representation model is used for flow playback testing to evaluate the security performance of the network system.
The long short-term memory (LSTM) model is a recurrent neural network (RNN), an artificial neural network for processing sequence data. The gate structure of the LSTM effectively mitigates the gradient vanishing or explosion that can arise in long-sequence problems: it learns long-term dependencies and memorizes the needed information, whereas gradient vanishing prevents a model from learning long-term dependencies and hence from retaining that information. Although the phenomenon is not completely eradicated, the LSTM outperforms the conventional RNN on longer sequences. LSTMs are well suited to classifying, processing and predicting time-series data because they can retain data sequences over long periods: their special memory structure stores long-term dependencies, and this command of both long- and short-term dependencies helps the model memorize patterns over longer spans. The LSTM captures long-term dependencies in the data through multiple layers of memory cells connected in a chain, where each cell is responsible for memorizing a particular piece of information and the output of one cell serves as the input of the next; these connections let the network learn complex data patterns over time and make accurate predictions.
An LSTM network includes input gates, forget gates, output gates and memory cells, where the input, forget and output gates are processed by a sigmoid function and the memory cells by a tanh function. Each gate has its own weight matrix, which learns the patterns of the input data and stores them in the memory cells. At each step the LSTM decides which information to retain and which to discard, thereby memorizing the most recently learned information, specifically as follows:
the input gate controls the flow of input information, and the calculation formula is as follows:
i t =σ(W xi x t +W hi h t-1 +W ci C t-1 +b i )
wherein ,it Indicating the output of the input gate at time t, x t Indicating the input of time t, h t-1 A hidden layer output representing the last time, C t-1 Indicating the output of the memory cell at the previous time, W xi Representing the weight of the input gate, W hi Representing input gate output layer weights, W ci Representing the weight of the input gate memory cells, b i Representing an input gate bias term;
the forget gate controls the flow of information out of the memory cells; the calculation formula is as follows:

f_t = σ(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} C_{t-1} + b_f)

wherein f_t denotes the output of the forget gate at time t, W_{xf} the input weights of the forget gate, W_{hf} its output-layer weights, W_{cf} its memory-cell weights, and b_f its bias term;
the memory cell stores information; the calculation formula is as follows:

C_t = f_t × C_{t-1} + i_t tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)

wherein C_t is the output of the memory cell at time t, W_{xc} denotes the memory-cell input weights, W_{hc} its output-layer weights, and b_c its bias term;
the output gate controls the flow of the memory cell's output; the calculation formula is as follows:

o_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} C_t + b_o)

wherein o_t denotes the output of the output gate, W_{xo} the input weights of the output gate, W_{ho} its output-layer weights, and b_o its bias term;
the output of the LSTM network is obtained from the memory cells and the output gate; the calculation formula is as follows:

h_t = o_t × tanh(C_{t-1})

wherein h_t denotes the hidden-layer output at time t.
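A minimal NumPy sketch of one step of this recurrence, implementing the gate equations exactly as written above (the dimensions and random weights are illustrative; note that the output equation here applies tanh to C_{t-1} as in the text, whereas the standard LSTM uses tanh(C_t)):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One step of the gated recurrence, following the equations in the text."""
    i_t = sigmoid(W["xi"] @ x_t + W["hi"] @ h_prev + W["ci"] @ C_prev + b["i"])    # input gate
    f_t = sigmoid(W["xf"] @ x_t + W["hf"] @ h_prev + W["cf"] @ C_prev + b["f"])    # forget gate
    C_t = f_t * C_prev + i_t * np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev + b["c"])  # memory cell
    o_t = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev + W["co"] @ C_t + b["o"])       # output gate
    h_t = o_t * np.tanh(C_prev)  # hidden output per the text's output equation
    return h_t, C_t

rng = np.random.default_rng(0)
n_in, n_hid = 5, 4  # illustrative sizes: 5 traffic features, 4 hidden units
W = {k: rng.standard_normal((n_hid, n_in if k.startswith("x") else n_hid)) * 0.1
     for k in ["xi", "hi", "ci", "xf", "hf", "cf", "xc", "hc", "xo", "ho", "co"]}
b = {k: np.zeros(n_hid) for k in ["i", "f", "c", "o"]}

h, C = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((10, n_in)):  # a hypothetical 10-step traffic sequence
    h, C = lstm_step(x, h, C, W, b)
print(h)
```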
Then, to address the shortcomings of single-task learning (low data utilization, poor generalization capability, high computational complexity and poor interpretability), the strategy of a multi-task learning model with an attention mechanism (MTL-ATT) is used. In multi-task learning, several tasks share one model, so the data are used more fully, generalization performance improves and computational complexity falls.
In one embodiment of the application, a multi-task learning model with an attention mechanism (Multi-Task Learning with Attention, MTL-ATT) is chosen. The model uses the attention mechanism to combine the features of the individual tasks, learns common features and shares them among the tasks. During training, the model learns the relations among tasks and dynamically adjusts the attention weights, improving the performance of each task. Introducing multi-task learning lets the CNN and the LSTM share model parameters to improve the model's generalization capability and testing efficiency: one model processes several similar tasks at the same time, and sharing parameters raises both.
In CNN- and LSTM-based multi-task learning, the two can be used together as a multi-input multi-output model. The CNN extracts features of image-like data, while the LSTM processes features of time-series data. The two tasks may share a portion of the parameters, such as the weight parameters of the convolutional layer and the recurrent layer, to reduce the model's parameter count and training time. Meanwhile, each task keeps its own independent fully connected layer to suit its output requirements.
By sharing model parameters, the CNN and LSTM can learn some general feature representations together, thereby improving generalization capability and test efficiency of the model. In practical applications, the performance and robustness of the model can be further improved by increasing the number of tasks.
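A hedged sketch of such sharing (a PyTorch illustration; the shared encoder, the attention pooling and the two task heads are assumptions for exposition, since the text fixes no concrete architecture):

```python
import torch
import torch.nn as nn

class MTLAttModel(nn.Module):
    """Shared CNN + LSTM trunk with attention; independent heads per task."""
    def __init__(self, n_hid: int = 32):
        super().__init__()
        # Shared parameters: convolutional layer and recurrent layer
        self.conv = nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(input_size=16, hidden_size=n_hid, batch_first=True)
        self.attn = nn.Linear(n_hid, 1)       # attention scores over time steps
        # Task-specific fully connected heads, as described in the text
        self.cls_head = nn.Linear(n_hid, 2)   # task 1: normal/abnormal classification
        self.pred_head = nn.Linear(n_hid, 1)  # task 2: sequence prediction

    def forward(self, x: torch.Tensor):
        # x: (batch, 1, time) traffic feature sequences
        h = self.conv(x).transpose(1, 2)          # (batch, time, channels)
        seq, _ = self.lstm(h)                     # shared recurrent features
        w = torch.softmax(self.attn(seq), dim=1)  # attention weights per step
        ctx = (w * seq).sum(dim=1)                # attention-pooled representation
        return self.cls_head(ctx), self.pred_head(ctx)

model = MTLAttModel()
cls_logits, next_value = model(torch.randn(8, 1, 64))
print(cls_logits.shape, next_value.shape)  # torch.Size([8, 2]) torch.Size([8, 1])
```

Training both heads against a weighted sum of their losses would realize the shared-parameter multi-task objective described above.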
In particular, it should be noted that the embodiment of the present application further provides a flow playback testing device based on deep learning, including:
the data acquisition unit is used for acquiring network flow data in the running process of the application program;
the data processing unit is used for modeling and analyzing the flow data sequence and comprises a convolutional neural network CNN and a long-short-term memory model LSTM;
and the playback test unit is used for using the established characteristic representation model for flow playback test and evaluating the security performance of the network system.
The embodiment of the application also provides a computer device integrating any of the deep-learning-based flow playback testing devices above, the computer device comprising: one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to perform the steps of the flow playback test method described in any of the method embodiments above.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application also provide a computer-readable storage medium, which may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like. A computer program is stored thereon and loaded by a processor to perform the steps of any of the deep-learning-based flow playback test methods provided by the embodiments of the present application.
The idea of the application is as follows: the application relates to a flow playback testing method based on a deep learning technology. The method utilizes CNN, LSTM and other technologies in deep learning to analyze network traffic and extract features, thereby realizing traffic playback test. Meanwhile, the application adopts the MTL-ATT technology, realizes multi-task learning, can learn and forecast a plurality of test tasks at the same time, and improves the test efficiency and the test quality. The application aims to improve the performance and the safety of a network system, and discover potential problems and loopholes in the system by simulating real network traffic so as to improve and optimize the network system.
The principles and embodiments of the flow playback testing method, device, computer equipment and storage medium based on deep learning provided by the present application have been described above; the foregoing are only preferred embodiments of the application. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to fall within the scope of the application.

Claims (10)

1. The flow playback testing method based on deep learning is characterized by comprising the following steps of:
s1, collecting network flow data in the running process of an application program, and preprocessing the collected network flow data in the running process of the application program, wherein the preprocessing comprises data cleaning, normalization and feature extraction operations;
s2, classifying and verifying network flow data of an application program by using a convolutional neural network CNN, distinguishing normal and abnormal flows, and determining analysis and testing key directions;
s3, modeling and predicting a flow data sequence by using a long-short-term memory model LSTM, detecting abnormal behaviors and security holes in a network, using the established characteristic representation model for flow playback test, and evaluating the security performance of a network system;
s4, introducing a multi-task learning model MTL-ATT with an attention mechanism, sharing model parameters by an LSTM model and a CNN, learning a general feature representation, and completing flow data classification, sequence modeling and prediction.
2. The flow playback test method based on deep learning as set forth in claim 1, wherein in step S1, the collected network flow data in the running process of the application program is preprocessed, and the method includes the following sub-steps:
s1.1, data cleaning: removing noise, missing values and abnormal value data by cleaning network flow data, wherein the data cleaning method comprises a filter, interpolation and abnormal value detection technology;
s1.2, normalization: in the network flow data, the value ranges of different flow data are different, and the data are scaled to the same range by using a normalization method;
s1.3, feature extraction: extracting features from network traffic data, comprising: the size of the data packet, the time stamp, the protocol type, the source IP address and the destination IP address; extracting frequency domain features from signal frequency components, comprising: spectrum, power spectrum and phase spectrum characteristics; the feature extraction and use method comprises the following steps: fourier transform, wavelet transform and discrete cosine transform methods.
3. The method for flow playback testing based on deep learning according to claim 2, wherein in step S1.2, the normalization method comprises: min-max normalization or z-score standardization; specifically, for a feature value x,
min-max normalization (Min-Max Normalization) converts x using the following equation (1):

x' = (x - min(x)) / (max(x) - min(x))   (1)

wherein x' is the normalized feature value, and min(x) and max(x) are the minimum and maximum of the feature value x, respectively; min-max normalization maps the data to between 0 and 1;
the z-score standardization converts x using the following equation (2):

x' = (x - mean(x)) / std(x)   (2)

wherein x' is the normalized feature value, and mean(x) and std(x) are the mean and standard deviation of the feature value x, respectively; z-score standardization rescales the feature to a mean of 0 and a standard deviation of 1.
4. The flow playback test method based on deep learning as set forth in claim 3, wherein in step S2, a convolutional neural network CNN is used to classify and verify network flow data of an application program, where the convolutional neural network CNN is composed of an input layer, a convolutional layer, a pooling layer, a full connection layer, and an output layer, and specifically:
the convolutional layer analyzes each small patch of the input to obtain features of a higher level of abstraction, in the form of the following equation (3):

x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b^l )   (3)

wherein l is the current layer, f is the ReLU activation function, M_j is the convolution window corresponding to the j-th convolution kernel, x_i^{l-1} is the input matrix from layer l-1, k_{ij}^l is the convolution kernel matrix of layer l, and b is the bias of the current layer;
the pooling layer reduces the number of nodes in the final fully connected layer and thereby the number of parameters in the whole neural network, in the form of the following equation (4):

x_j^l = f( β_j^l · down(x_j^{l-1}) + b_j^l )   (4)

wherein down(·) is a subsampling function that weights and sums each n×n region of the input image, so the output image becomes 1/n × 1/n of the input image's size, and β is the multiplicative network parameter of the output feature map;
the full-connection layer is positioned at the end of the convolutional neural network model, calculates the final output result of the network, and trains a classifier in the layer by a classification task, specifically: the learned high-level features are used as input of a classifier, the output result is a classification result, the network traffic data is classified, normal traffic and abnormal traffic are distinguished, and the key direction of analysis and test is determined.
5. The flow playback test method based on deep learning as set forth in claim 4, wherein the training process of the CNN model is forward propagation and backward propagation of a network convolution layer, the forward propagation obtaining a predicted value, and the backward propagation being used for updating a variable, and correcting parameters of the model, specifically as follows:
the forward propagation of the convolutional layer takes the form of the following equation (5):

x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b^l )   (5)

in the backpropagation algorithm, the overall cost function is defined as the following equation (6):

E_n = -(1/n) Σ_j [ t_j ln y_j + (1 - t_j) ln(1 - y_j) ] + (λ/2n) Σ_ω ω²   (6)

wherein the first term is the cross-entropy expression, with y_j the predicted value; ω is a weight, and the second term is a regularization term over the weights ω that uses the regularization parameter λ to quantitatively adjust the sum of squared weights;
the adjustment direction of the weight parameters is given by the following equation (7):

ω ← ω - η ∂E_n/∂ω   (7)

wherein E_n is the overall cost function and η is the learning rate;
the adjustment direction of the bias parameters is given by equation (8):

b ← b - η ∂E_n/∂b   (8)
6. The method according to claim 5, wherein in step S3, the long short-term memory model LSTM is used to model and predict the traffic data sequence, and the LSTM model comprises: an input gate, a forget gate, an output gate and memory cells, wherein the input, forget and output gates are processed by a sigmoid function and the memory cells by a tanh function; each gate has a weight matrix that learns the patterns of the input data and stores them in the memory cells, with the following specific algorithm:
the input gate controls the flow of input information; its calculation formula is the following equation (9):

i_t = σ(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} C_{t-1} + b_i)   (9)

wherein i_t denotes the output of the input gate at time t, x_t the input at time t, h_{t-1} the hidden-layer output at the previous time, and C_{t-1} the memory-cell output at the previous time; W_{xi} denotes the input weights of the input gate, W_{hi} its output-layer weights, W_{ci} its memory-cell weights, and b_i its bias term; σ is the sigmoid function;
the forget gate controls the flow of information out of the memory cells; its calculation formula is the following equation (10):

f_t = σ(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} C_{t-1} + b_f)   (10)

wherein f_t denotes the output of the forget gate at time t, W_{xf} the input weights of the forget gate, W_{hf} its output-layer weights, W_{cf} its memory-cell weights, and b_f its bias term;
the memory cell stores information; its calculation formula is the following equation (11):

C_t = f_t × C_{t-1} + i_t tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)   (11)

wherein C_t is the output of the memory cell at time t, W_{xc} denotes the memory-cell input weights, W_{hc} its output-layer weights, and b_c its bias term;
the output gate controls the flow of the memory cell's output; its calculation formula is the following equation (12):

o_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} C_t + b_o)   (12)

wherein o_t denotes the output of the output gate, W_{xo} the input weights of the output gate, W_{ho} its output-layer weights, and b_o its bias term;
the output of the LSTM network is obtained from the memory cells and the output gate; its calculation formula is the following equation (13):

h_t = o_t × tanh(C_{t-1})   (13)

wherein h_t denotes the hidden-layer output at time t.
7. The flow playback test method based on deep learning according to claim 6, wherein the strategy of a multi-task learning model MTL-ATT with an attention mechanism is used and several tasks share one model; specifically, CNN- and LSTM-based multi-task learning is used jointly as a multi-input multi-output model, the CNN is used to extract features of image data, the LSTM is used to process features of time-series data, and the two tasks share parameters comprising the weight parameters of the convolutional layer and the recurrent layer; the CNN and LSTM jointly learn a general feature representation, the number of tasks is increased to improve the model's performance and robustness, and traffic data classification, sequence modeling and prediction are completed.
8. A flow playback testing device based on deep learning, characterized in that the flow playback testing device comprises: the data acquisition unit is used for acquiring network flow data in the running process of the application program;
the data processing unit is used for modeling and analyzing the traffic data sequence, and covers data preprocessing as well as the analysis and construction of the convolutional neural network CNN and the long short-term memory model LSTM: preprocessing the network traffic data collected while the application program runs, including data cleaning, normalization and feature extraction operations; classifying and verifying the application program's network traffic data with the CNN, distinguishing normal from abnormal traffic, and determining the key directions for analysis and testing; and modeling and predicting the traffic data sequence with the LSTM to detect abnormal behaviors and security vulnerabilities in the network;
the playback test unit is used for applying the established feature representation model to flow playback testing and evaluating the security performance of the network system, using the strategy of the multi-task learning model MTL-ATT with an attention mechanism, in which several tasks share one model: specifically, CNN- and LSTM-based multi-task learning serves jointly as a multi-input multi-output model, the CNN extracting features of image-like data and the LSTM processing features of time-series data; the two tasks share parameters, the CNN and LSTM jointly learn a general feature representation, the number of tasks is increased to improve the model's performance and robustness, and traffic data classification, sequence modeling and prediction are completed.
9. A computer device, the computer device comprising: one or more processors, memory, and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the steps in the deep learning based traffic playback testing method of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon a computer program, the computer program being loaded by a processor to perform the steps in the deep learning based flow playback test method of any one of claims 1 to 7.
CN202310943173.6A 2023-07-31 2023-07-31 Flow playback test method, device, equipment and medium based on deep learning Active CN116668198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310943173.6A CN116668198B (en) 2023-07-31 2023-07-31 Flow playback test method, device, equipment and medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310943173.6A CN116668198B (en) 2023-07-31 2023-07-31 Flow playback test method, device, equipment and medium based on deep learning

Publications (2)

Publication Number Publication Date
CN116668198A CN116668198A (en) 2023-08-29
CN116668198B (en) 2023-10-20

Family

ID=87710061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310943173.6A Active CN116668198B (en) 2023-07-31 2023-07-31 Flow playback test method, device, equipment and medium based on deep learning

Country Status (1)

Country Link
CN (1) CN116668198B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117319246A (en) * 2023-09-25 2023-12-29 江苏省秦淮河水利工程管理处 Water conservancy network flow monitoring system based on multisource data
CN117620345B (en) * 2023-12-28 2024-06-07 诚联恺达科技有限公司 Data recording system of vacuum reflow oven

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209933A (en) * 2019-12-25 2020-05-29 国网冀北电力有限公司信息通信分公司 Network traffic classification method and device based on neural network and attention mechanism
CN112100614A (en) * 2020-09-11 2020-12-18 南京邮电大学 CNN _ LSTM-based network flow anomaly detection method
CN112653684A (en) * 2020-12-17 2021-04-13 电子科技大学长三角研究院(衢州) Abnormal flow detection method based on multi-path feature perception long-term and short-term memory
CN113660196A (en) * 2021-07-01 2021-11-16 杭州电子科技大学 Network traffic intrusion detection method and device based on deep learning
WO2022057260A1 (en) * 2020-09-15 2022-03-24 浙江大学 Industrial control system communication network anomaly classification method
CN114816962A (en) * 2022-06-27 2022-07-29 南京争锋信息科技有限公司 ATTENTION-LSTM-based network fault prediction method
CA3177585A1 (en) * 2021-04-16 2022-10-16 Strong Force Vcn Portfolio 2019, Llc Systems, methods, kits, and apparatuses for digital product network systems and biology-based value chain networks
CN115296853A (en) * 2022-07-06 2022-11-04 国网山西省电力公司信息通信分公司 Network attack detection method based on network space-time characteristics
WO2022259125A1 (en) * 2021-06-07 2022-12-15 Telefonaktiebolaget Lm Ericsson (Publ) Unsupervised gan-based intrusion detection system using temporal convolutional networks, self-attention, and transformers
CN115766125A (en) * 2022-11-01 2023-03-07 中国电子科技集团公司第五十四研究所 Network flow prediction method based on LSTM and generation countermeasure network
WO2023056808A1 (en) * 2021-10-08 2023-04-13 中兴通讯股份有限公司 Encrypted malicious traffic detection method and apparatus, storage medium and electronic apparatus
WO2023087069A1 (en) * 2021-11-18 2023-05-25 Canopus Networks Pty Ltd Network traffic classification

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11927965B2 (en) * 2016-02-29 2024-03-12 AI Incorporated Obstacle recognition method for autonomous robots
CN111130839B (en) * 2019-11-04 2021-07-16 清华大学 Flow demand matrix prediction method and system
US11341236B2 (en) * 2019-11-22 2022-05-24 Pure Storage, Inc. Traffic-based detection of a security threat to a storage system

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209933A (en) * 2019-12-25 2020-05-29 国网冀北电力有限公司信息通信分公司 Network traffic classification method and device based on neural network and attention mechanism
CN112100614A (en) * 2020-09-11 2020-12-18 南京邮电大学 CNN _ LSTM-based network flow anomaly detection method
WO2022057260A1 (en) * 2020-09-15 2022-03-24 浙江大学 Industrial control system communication network anomaly classification method
CN112653684A (en) * 2020-12-17 2021-04-13 电子科技大学长三角研究院(衢州) Abnormal flow detection method based on multi-path feature perception long-term and short-term memory
CA3177585A1 (en) * 2021-04-16 2022-10-16 Strong Force Vcn Portfolio 2019, Llc Systems, methods, kits, and apparatuses for digital product network systems and biology-based value chain networks
WO2022259125A1 (en) * 2021-06-07 2022-12-15 Telefonaktiebolaget Lm Ericsson (Publ) Unsupervised gan-based intrusion detection system using temporal convolutional networks, self-attention, and transformers
CN113660196A (en) * 2021-07-01 2021-11-16 杭州电子科技大学 Network traffic intrusion detection method and device based on deep learning
WO2023056808A1 (en) * 2021-10-08 2023-04-13 中兴通讯股份有限公司 Encrypted malicious traffic detection method and apparatus, storage medium and electronic apparatus
WO2023087069A1 (en) * 2021-11-18 2023-05-25 Canopus Networks Pty Ltd Network traffic classification
CN114816962A (en) * 2022-06-27 2022-07-29 南京争锋信息科技有限公司 ATTENTION-LSTM-based network fault prediction method
CN115296853A (en) * 2022-07-06 2022-11-04 国网山西省电力公司信息通信分公司 Network attack detection method based on network space-time characteristics
CN115766125A (en) * 2022-11-01 2023-03-07 中国电子科技集团公司第五十四研究所 Network flow prediction method based on LSTM and generation countermeasure network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Short-Term Traffic Flow Prediction: An Integrated Method of Econometrics and Hybrid Deep Learning; Zeyang Cheng et al.; IEEE Transactions on Intelligent Transportation Systems, Vol. 23, Issue 6, June 2022; full text *
Abnormal Traffic Detection Using Data Augmentation and a Hybrid Neural Network; Lian Hongfei, Zhang Hao, Guo Wenzhong; Journal of Chinese Computer Systems, Issue 04; full text *
An Urban-Area Traffic Flow Prediction Model Based on Convolutional Recurrent Neural Networks; Xue Jiayao, Chen Haiyong, Zhou Gang; Journal of Information Engineering University, Issue 02; full text *
Research on Machine-Learning-Based Generation and Detection of Abnormal Network Traffic; Jin Yingyan; Master's thesis (electronic journal); full text *
A Network Intrusion Detection Method Fusing CNN and BiLSTM; Liu Yuefeng, Cai Shuang, Yang Hanxi, Zhang Chenrong; Computer Engineering, Issue 12; Section 2 *

Also Published As

Publication number Publication date
CN116668198A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN116668198B (en) Flow playback test method, device, equipment and medium based on deep learning
CN112784881B (en) Network abnormal flow detection method, model and system
Rashid et al. Using accuracy measure for improving the training of LSTM with metaheuristic algorithms
Saffari et al. Fuzzy-ChOA: an improved chimp optimization algorithm for marine mammal classification using artificial neural network
CN110675623A (en) Short-term traffic flow prediction method, system and device based on hybrid deep learning
CN107133948A (en) Image blurring and noise evaluating method based on multitask convolutional neural networks
CN106682502A (en) Intrusion intension recognition system and method based on hidden markov and probability inference
CN106656357B (en) Power frequency communication channel state evaluation system and method
CN114821204B (en) Meta-learning-based embedded semi-supervised learning image classification method and system
CN113591728A (en) Electric energy quality disturbance classification method based on integrated deep learning
CN109120435A (en) Network link quality prediction technique, device and readable storage medium storing program for executing
CN112949391B (en) Intelligent security inspection method based on deep learning harmonic signal analysis
US20030204368A1 (en) Adaptive sequential detection network
CN116346639A (en) Network traffic prediction method, system, medium, equipment and terminal
Legrand et al. Study of autoencoder neural networks for anomaly detection in connected buildings
CN114925938A (en) Electric energy meter running state prediction method and device based on self-adaptive SVM model
CN117473275B (en) Energy consumption detection method for data center
CN113837122B (en) Wi-Fi channel state information-based contactless human body behavior recognition method and system
CN117894389A (en) SSA-optimized VMD and LSTM-based prediction method for concentration data of dissolved gas in transformer oil
CN112836432A (en) Indoor particle suspended matter concentration prediction method based on transfer learning
CN116188834B (en) Full-slice image classification method and device based on self-adaptive training model
CN111797690A (en) Optical fiber perimeter intrusion identification method and device based on wavelet neural network grating array
Greenwood Training multiple-layer perceptrons to recognize attractors
CN115964258A (en) Internet of things network card abnormal behavior grading monitoring method and system based on multi-time sequence analysis
CN114882289A (en) SAR target open set identification method based on self-adaptive determination rejection criterion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant