CN111160427B - Method for detecting mass flow data type based on neural network - Google Patents


Info

Publication number
CN111160427B
CN111160427B (application CN201911300824.XA)
Authority
CN
China
Prior art keywords
flow
data
image
original
template
Prior art date
Legal status
Active
Application number
CN201911300824.XA
Other languages
Chinese (zh)
Other versions
CN111160427A (en)
Inventor
赵玉媛
吴振豪
陈钟
李青山
杨可静
兰云飞
吴琛
李洪生
王晓青
Current Assignee
Beijing Guoxin Yunfu Technology Co ltd
Boya Technology Beijing Co ltd Xin'an
Original Assignee
Beijing Guoxin Yunfu Technology Co ltd
Boya Technology Beijing Co ltd Xin'an
Priority date
Filing date
Publication date
Application filed by Beijing Guoxin Yunfu Technology Co ltd, Boya Technology Beijing Co ltd Xin'an filed Critical Beijing Guoxin Yunfu Technology Co ltd
Priority to CN201911300824.XA
Publication of CN111160427A
Application granted
Publication of CN111160427B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/14 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/50 - Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting the type of mass flow data based on a neural network, and relates to the technical field of information processing. The method first marks original flow by flow type to form an original training data set. The original flow is cut with the session as the unit, each session forming its own sequence of flow data packets; the sequences are length-processed to obtain equal-length data packets; each data packet is converted into an image; and the images are stacked in chronological order into flow image three-dimensional data. The preprocessed flow image three-dimensional data are then fed into a flow classification model based on a 3D convolutional neural network; the model is trained and saved, and its accuracy is tested. Flow data to be classified undergo the same preprocessing and are fed into the trained model to obtain a classification result. The method can receive mass flow data and classify its type quickly and accurately.

Description

Method for detecting mass flow data type based on neural network
Technical Field
The invention relates to the technical field of information processing, in particular to a method for detecting mass flow data types based on a neural network.
Background
With the rapid development of the internet, network traffic has grown explosively alongside the rapidly increasing number of internet users, and tools and methods for processing mass data have been developed accordingly. However, the methods and tools proposed so far still have serious accuracy problems when processing mass data, especially mass data that carries both temporal and spatial structure.
In terms of traffic detection, the types of traffic attacks on networks today are endless, and classifying network traffic data has become a hot problem in the information technology field. There are four main traffic classification approaches: port-based, deep-packet-inspection-based, statistics-based, and behavior-based. The statistics-based and behavior-based approaches are classical machine learning methods that classify traffic by selecting features from the existing data, whereas the port-based and deep-packet-inspection-based approaches rely on rules. Because of the complexity of the original data, machine learning methods often misjudge the classification result by paying too much attention to useless and redundant data in the original data; conversely, rule-based methods focus excessively on manually extracted high-dimensional features and thus ignore some relatively important features in the original data set. How to extract as much useful data as possible from the original data set while ignoring redundant data has therefore become a critical problem in current flow detection.
The field of image detection is developing rapidly, and representing text data as images in order to learn data characteristics has become an effective machine learning technique; a large body of literature shows that this conversion idea works. There is, however, a problem: the characteristics embodied by a converted two-dimensional image are spatial, so only the portion of text data with spatial characteristics can be processed this way. Although a 3D convolutional neural network can represent text data with both spatial and temporal characteristics, the accuracy of its feature extraction still lags far behind two-dimensional image processing methods.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method for detecting the type of mass traffic data based on a neural network, aiming at the defects of the prior art, so as to realize effective detection of the type of mass traffic data.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows: a method for detecting mass flow data types based on a neural network comprises the following steps:
step 1, carrying out flow type marking on original flow data to be used as an original training data set; the original training data comprises malicious traffic and normal traffic;
step 2, cutting the original training data set by taking a conversation as a unit to obtain a conversation data set;
step 3, dividing the session data set obtained in the step 2 by taking the data packet as a unit to obtain a flow data sequence ordered according to time, and correcting the length of the flow data sequence to make the length of the flow data sequence consistent;
step 4, processing the flow data sequence after length correction into an image set;
step 5, arranging the image sets obtained in the step 4 according to a time sequence, and stacking the image sets in a time dimension according to the arrangement sequence to obtain preprocessed flow image three-dimensional data;
step 6, building a flow classification model based on the 3D convolutional neural network, sending the flow image three-dimensional data obtained in the step 5 into the flow classification model for training, and storing the trained flow classification model;
step 6.1, carrying out hard-wired operation on the input flow image three-dimensional data, and carrying out information acquisition operation on each picture of the flow image three-dimensional data;
the hard-wired operation performed on the flow image three-dimensional data specifically comprises the following steps:
extracting 3 required channel information characteristics for each image in the image three-dimensional data, the 3 channels being gray scale (gray), abscissa gradient (gradient-x) and ordinate gradient (gradient-y), and storing the information in the image three-dimensional data in chronological order, finally obtaining new image three-dimensional data whose channel number is 3 times that of the original image three-dimensional data;
step 6.2, carrying out convolution operation on the processed flow image three-dimensional data for 3 times by respectively utilizing 3 convolution kernels with different sizes;
step 6.3, performing feature fusion operation on the 3 convolution results obtained in the step 6.2, namely fusing the processing result of the small convolution kernel and the processing result of the large convolution kernel, so as to update the convolution result processed by the large convolution kernel;
step 6.4, performing down-sampling operation on the 3 convolution results obtained in the step 6.3;
step 6.5, the operations of the steps 6.2 to 6.4 are repeatedly executed, a one-dimensional vector is finally obtained, the vector is input into a full connection layer for calculation, a final classification result is obtained, and the training of a flow classification model is completed;
step 7, carrying out preprocessing operation from the step 2 to the step 5 on the tested flow data, inputting a preprocessing result into the flow classification model trained in the step 6 to obtain a classification result, and testing the classification accuracy of the obtained flow classification model;
and 8, preprocessing the flow data to be classified in the steps 2 to 5, inputting the preprocessing result into the trained flow classification model to obtain a classification result, and classifying the flow data.
The beneficial effects of the above technical scheme are as follows: the method for detecting the type of mass flow data based on the neural network exploits the efficiency and accuracy of the 3D convolutional neural network in mass data processing, and combines it with the temporal and spatial characteristics of the flow image three-dimensional data it processes, solving both the difficulty of handling mass flow data and the time-sequence problem that ordinary two-dimensional networks struggle with. In the traffic classification model based on the 3D convolutional neural network, fusing the features of results produced by convolution with different kernels makes feature extraction effective, improving the feature-extraction capability of the 3D convolutional neural network and therefore the accuracy of traffic type classification.
Drawings
Fig. 1 is a flowchart of a method for detecting a mass traffic data type based on a neural network according to an embodiment of the present invention;
fig. 2 is a specific illustration diagram of a method for detecting a mass traffic data type based on a neural network according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention, but are not intended to limit the scope of the invention.
A method for detecting a mass traffic data type based on a neural network, as shown in fig. 1 and 2, includes the following steps:
step 1, carrying out flow type marking on original flow data to be used as an original training data set; the original training data comprises malicious traffic and normal traffic;
in this embodiment, malicious traffic in the collected original traffic data is divided into 5 categories, and the 5 types of malicious traffic have the characteristics of a large amount of network interaction and are not easy to distinguish, and are ARP attack, DNS hijacking, communication behavior after certificate forgery, R2L, and U2L, respectively. The 5 categories can be subdivided, for example, ARP attack is subdivided into IP address conflict, ARP flooding attack, ARP spoofing attack, ARP scanning attack, virtual host attack, and the like.
Step 2, cutting the original training data set by taking a conversation as a unit to obtain a conversation data set;
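The session-based cutting of step 2 can be sketched as grouping packets by their 5-tuple so that both directions of a connection land in one session. The packet representation and the helper name `split_into_sessions` are assumptions made for this illustration, not part of the patent:

```python
from collections import defaultdict

def split_into_sessions(packets):
    """Group packets into bidirectional sessions keyed by 5-tuple.

    Each packet is assumed to be a tuple
    (src_ip, src_port, dst_ip, dst_port, proto, payload).
    """
    sessions = defaultdict(list)
    for src, sport, dst, dport, proto, payload in packets:
        # Sort the two endpoints so that both directions share one key.
        key = (proto,) + tuple(sorted([(src, sport), (dst, dport)]))
        sessions[key].append(payload)
    return dict(sessions)
```

Each value of the returned dict is then one session's flow data packet sequence, ready for the per-packet segmentation of step 3.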
step 3, segmenting the session data set obtained in step 2 with the data packet as the unit to obtain a flow data sequence ordered by time, and correcting the length of the flow data sequence so that all sequences have a consistent length while keeping the adjustment to the data as slight as possible;
in this embodiment, the length of the data packet is 784 bytes, and when the data length of the data packet does not reach the set data length, a zero value is added at the end of the data, and data interception is performed if the data length is greater than the set data length;
step 4, processing the flow data sequence obtained in the step 3 after length correction into an image set;
in this embodiment, each preprocessed data packet data is converted into 16-system data, each data packet data is converted into a two-dimensional array with a width of 256 bytes, and finally, the two-dimensional array formed by each data packet data is converted into a gray-scale image, so as to obtain an image set.
Step 5, arranging the images in the image set obtained in the step 4 according to a time sequence, and stacking the image set in a time dimension according to the arrangement sequence to obtain flow image three-dimensional data after preprocessing;
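The chronological stacking of step 5 amounts to adding a leading time axis to the per-packet images. A sketch, assuming 12 frames per session (the patent does not fix the frame count):

```python
import numpy as np

# Chronologically ordered grayscale frames of one session (toy data).
frames = [np.full((28, 28), t, dtype=np.uint8) for t in range(12)]

# Stacking along a new leading time axis gives the (T, H, W) volume
# of "flow image three-dimensional data" that the 3D CNN consumes.
volume = np.stack(frames, axis=0)
print(volume.shape)  # (12, 28, 28)
```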
step 6, building a flow classification model based on the 3D convolutional neural network, sending the flow image three-dimensional data obtained in the step 5 into the flow classification model for training, and storing the trained flow classification model;
6.1, carrying out hard-wired operation on the input three-dimensional data of the flow image, and carrying out information acquisition operation on each picture of the three-dimensional data of the flow image;
the hard-wired operation on the three-dimensional data of the flow image specifically comprises the following steps:
extracting 3 required channel information characteristics for each image in the image three-dimensional data, the 3 channels being gray scale (gray), abscissa gradient (gradient-x) and ordinate gradient (gradient-y), and storing the information in the image three-dimensional data in chronological order, finally obtaining new image three-dimensional data whose channel number is 3 times that of the original image three-dimensional data;
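The hard-wired layer can be sketched with NumPy's `np.gradient` standing in for the abscissa and ordinate gradient operators; treating `np.gradient` as the gradient operator is an assumption, since the patent does not specify which discrete gradient is used:

```python
import numpy as np

def hardwired_channels(volume: np.ndarray) -> np.ndarray:
    """(T, H, W) grayscale stack -> (3, T, H, W): gray, gradient-x, gradient-y."""
    gray = volume.astype(np.float32)
    # Per-frame finite-difference gradients along rows (y) and columns (x).
    gy, gx = np.gradient(gray, axis=(1, 2))
    return np.stack([gray, gx, gy], axis=0)
```

The output keeps the chronological order of the frames and triples the channel count, as step 6.1 requires.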
step 6.2, performing convolution operation on the flow image three-dimensional data obtained by processing by respectively adopting 3 convolution kernels with different sizes;
in this embodiment, 3 convolution kernels with the sizes of 7 × 3, 7 × 6, and 7 × 12 are selected, and the difference between the 3 convolution kernels is that in the convolution process, the first convolution kernel processes 3 adjacent pictures at a time, the second convolution kernel processes 6 adjacent pictures at a time, and the third convolution kernel processes 12 adjacent pictures at a time; namely, the time span of the first convolution kernel processing is less than that of the second convolution kernel processing and less than that of the third convolution kernel processing, so that 3 different flow image three-dimensional data are respectively obtained after convolution operation, wherein the length of the first flow image three-dimensional data is longest, and the length of the third flow image three-dimensional data is shortest.
Step 6.3, performing feature fusion operation on the 3 convolution results obtained in the step 6.2, namely fusing the processing result of the small convolution kernel to the processing result of the large convolution kernel so as to update the convolution result processed by the large convolution kernel;
Since the 3 convolution kernels with sizes 7 × 3, 7 × 6 and 7 × 12 are selected in this embodiment, the first kernel processes 3 pictures at a time, the second 6, and the third 12; all of these convolutions extract features along the time dimension. When the first kernel completes 2 convolutions, the size of the resulting flow image three-dimensional data is consistent with that obtained by one convolution of the second kernel. In this embodiment, the two-convolution result of the first kernel and the one-convolution result of the second kernel are therefore subjected to an image-data feature fusion operation. The specific feature fusion principle is as follows: each image convolved by the second convolution kernel is taken as a template, and each corresponding image convolved by the first convolution kernel as an original image. First, the template picture is cut into pieces 1/500 of its original size to form a template set; then each template in the set is matched against the original image. The principle formula of template matching is:
$$R(x,y)=\frac{\sum_{x',y'} T'(x',y')\, I'(x+x',y+y')}{\sqrt{\sum_{x',y'} T'(x',y')^{2}\,\sum_{x',y'} I'(x+x',y+y')^{2}}}$$
wherein, R (x, y) represents a correlation value of the template pattern at a corresponding position (x, y) of the original image, T '(x', y ') represents a value of the template pattern after the normalization operation of the value range of (x', y '), and I' (x + x ', y + y') represents a value of the original image after the normalization operation of the value range of (x ', y') with respect to the position (x, y);
The correlation between the template and the original image is evaluated using the cosine similarity principle: a correlation value of 1 represents a perfect match, -1 a poor match, and 0 no correlation. The correlation value of each template at the corresponding position of the original image is computed against a threshold set to 0.8. If the value is below the threshold, i.e. the region's feature is not successfully matched in the original image, the feature of that region in the template is discarded; if it is above the threshold, the feature is retained. The image of the final matching result is taken as the result of the second convolution kernel's convolution.
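The cosine-similarity matching with the 0.8 threshold can be sketched as a normalized cross-correlation between a template and an equally sized region of the original image. The helper name `ncc` and the zero-mean normalization are assumptions standing in for the patent's value-range normalization:

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # regions scoring below this are discarded

def ncc(template: np.ndarray, patch: np.ndarray) -> float:
    """Cosine-similarity-style normalized cross-correlation in [-1, 1]."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom else 0.0
```

A score of 1 is a perfect match and -1 a perfect anti-match, matching the ranges described above; sliding the template over every (x, y) position of the original image yields the full correlation map R.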
Step 6.4, performing down-sampling operation on the 3 convolution results obtained in the step 6.3;
In this embodiment, the final convolution result of the third convolution kernel is taken as the final result combining the temporal and spatial features.
Step 6.5, the operations of the steps 6.2 to 6.4 are repeatedly executed, a one-dimensional vector is finally obtained, the vector is input into the full-connection layer for calculation, a final classification result is obtained, and the training of the flow classification model is completed;
step 7, preprocessing the tested data flow in the steps 2 to 5, inputting the preprocessed result into the flow classification model trained in the step 6 to obtain a classification result, and testing the classification accuracy of the flow classification model;
in this embodiment, the flow data different from the original flow data is subjected to the operations of step 2 to step 5 to serve as the preprocessed test data, the data is sent to the model trained in step 6 to obtain a training result, and the training result is compared with the actual classification result, so that the accuracy of the model is up to 89.4%.
And 8, preprocessing the flow data to be classified in the steps 2 to 5, inputting the preprocessing result into the trained flow classification model to obtain a classification result, and classifying the flow data.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions and scope of the present invention as defined in the appended claims.

Claims (2)

1. A method for detecting the type of mass flow data based on a neural network is characterized in that: the method comprises the following steps:
step 1, carrying out flow type marking on original flow data to be used as an original training data set; the original flow data comprises malicious flow and normal flow;
step 2, cutting the original training data set by taking a conversation as a unit to obtain a conversation data set;
step 3, dividing the session data set obtained in the step 2 by taking the data packet as a unit to obtain a flow data sequence ordered according to time, and correcting the length of the flow data sequence to make the length of the flow data sequence consistent;
step 4, processing the flow data sequence after length correction into an image set;
step 5, arranging the obtained image sets according to a time sequence, and stacking the image sets in a time dimension according to the arrangement sequence to obtain preprocessed flow image three-dimensional data;
step 6, building a flow classification model based on the 3D convolutional neural network, sending the flow image three-dimensional data obtained in the step 5 into the flow classification model for training, and storing the trained flow classification model;
step 7, preprocessing the tested flow data from the step 2 to the step 5, inputting the preprocessed result into the flow classification model trained in the step 6 to obtain a classification result, and testing the classification accuracy of the flow classification model;
step 8, preprocessing the flow data to be classified in the steps 2 to 5, inputting the preprocessing result into a trained flow classification model to obtain a classification result, and realizing the classification of the flow data;
the specific method of the step 6 comprises the following steps:
6.1, carrying out hard-wired operation on the input three-dimensional data of the flow image, and carrying out information acquisition operation on each picture of the three-dimensional data of the flow image;
6.2, carrying out convolution operation on the processed flow image three-dimensional data for 3 times by respectively utilizing convolution kernels with 3 different sizes;
step 6.3, performing feature fusion operation on the 3 convolution results obtained in the step 6.2, namely fusing the processing result of the small convolution kernel and the processing result of the large convolution kernel, so as to update the convolution result processed by the large convolution kernel;
taking each image convolved by the second convolution kernel as a template, and each corresponding image convolved by the first convolution kernel as an original image; firstly, cutting the template picture to form a template set; then carrying out template matching of each template picture in the template set in the original image; the principle formula of template matching is as follows:
$$R(x,y)=\frac{\sum_{x',y'} T'(x',y')\, I'(x+x',y+y')}{\sqrt{\sum_{x',y'} T'(x',y')^{2}\,\sum_{x',y'} I'(x+x',y+y')^{2}}}$$
wherein, R (x, y) represents a correlation value of the template pattern at a corresponding position (x, y) of the original image, T '(x', y ') represents a value of the template pattern after the normalization operation of the value range of (x', y '), and I' (x + x ', y + y') represents a value of the original image after the normalization operation of the value range of (x ', y') with respect to the position (x, y);
evaluating the correlation between the template and the original image using the cosine similarity principle, wherein a correlation value of 1 represents a perfect match, -1 a poor match, and 0 no correlation; the correlation value of each template at the corresponding position of the original image is computed against a threshold set to 0.8; if the value is below the threshold, i.e. the region's feature is not successfully matched in the original image, the feature of that region in the template is discarded, and if it is above the threshold, the feature is retained; the image of the final matching result is taken as the result of the second convolution kernel's convolution;
step 6.4, performing down-sampling operation on the 3 convolution results obtained in the step 6.3;
and 6.5, repeatedly executing the operations of the steps 6.2 to 6.4 to finally obtain a one-dimensional vector, inputting the vector into the full-connection layer for calculation to obtain a final classification result, and finishing the training of the flow classification model.
2. The method for detecting the types of the mass flow data based on the neural network as claimed in claim 1, wherein the method comprises the following steps: the specific method for performing hard-wired operation on the input flow image three-dimensional data in step 6.1 is as follows:
extracting 3 required channel information characteristics for each image in the image three-dimensional data, the 3 channels being gray scale, abscissa gradient and ordinate gradient respectively, and storing the information in the image three-dimensional data in chronological order, finally obtaining new image three-dimensional data whose channel number is 3 times that of the original image three-dimensional data.
CN201911300824.XA 2019-12-17 2019-12-17 Method for detecting mass flow data type based on neural network Active CN111160427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911300824.XA CN111160427B (en) 2019-12-17 2019-12-17 Method for detecting mass flow data type based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911300824.XA CN111160427B (en) 2019-12-17 2019-12-17 Method for detecting mass flow data type based on neural network

Publications (2)

Publication Number Publication Date
CN111160427A CN111160427A (en) 2020-05-15
CN111160427B (en) 2023-04-18

Family

ID=70557439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911300824.XA Active CN111160427B (en) 2019-12-17 2019-12-17 Method for detecting mass flow data type based on neural network

Country Status (1)

Country Link
CN (1) CN111160427B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560935B (en) * 2020-12-11 2022-04-01 上海集成电路装备材料产业创新中心有限公司 Method for improving defect detection performance
CN113037646A (en) * 2021-03-04 2021-06-25 西南交通大学 Train communication network flow identification method based on deep learning
CN113794601B (en) * 2021-08-17 2024-03-22 中移(杭州)信息技术有限公司 Network traffic processing method, device and computer readable storage medium
CN114615172B (en) * 2022-03-22 2024-04-16 中国农业银行股份有限公司 Flow detection method and system, storage medium and electronic equipment
CN115277098B (en) * 2022-06-27 2023-07-18 深圳铸泰科技有限公司 Network flow abnormality detection device and method based on intelligent learning

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109361617A (en) * 2018-09-26 2019-02-19 中国科学院计算机网络信息中心 A kind of convolutional neural networks traffic classification method and system based on network payload package
CN110177122A (en) * 2019-06-18 2019-08-27 国网电子商务有限公司 A kind of method for establishing model and device identifying network security risk
CN110543890A (en) * 2019-07-22 2019-12-06 杭州电子科技大学 Deep neural network image matching method based on characteristic pyramid

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11645835B2 (en) * 2017-08-30 2023-05-09 Board Of Regents, The University Of Texas System Hypercomplex deep learning methods, architectures, and apparatus for multimodal small, medium, and large-scale data representation, analysis, and applications

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN109361617A (en) * 2018-09-26 2019-02-19 中国科学院计算机网络信息中心 A kind of convolutional neural networks traffic classification method and system based on network payload package
CN110177122A (en) * 2019-06-18 2019-08-27 国网电子商务有限公司 A kind of method for establishing model and device identifying network security risk
CN110543890A (en) * 2019-07-22 2019-12-06 杭州电子科技大学 Deep neural network image matching method based on characteristic pyramid

Non-Patent Citations (4)

Title
A Human Action Recognition Model Inspired by Multiple Scale Temporal Segments Model Fusion; Hailan Kuang et al.; 2019 11th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA); 2019-10-07; pp. 389-393 *
Stereo matching method based on multi-scale convolutional neural network; Xi Lu et al.; Computer Engineering and Design; 2018-09-16 (No. 09); pp. 2918-2922 *
Network traffic classification method based on deep convolutional neural network; Wang Yong et al.; Journal on Communications; 2018-01-25 (No. 01); pp. 14-23 *
Research on network traffic classification technology based on deep learning; Chen Yexin; China Masters' Theses Full-text Database, Information Science and Technology; 2019-08-15; pp. 12-16, 26 *

Also Published As

Publication number Publication date
CN111160427A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111160427B (en) Method for detecting mass flow data type based on neural network
US10896349B2 (en) Text detection method and apparatus, and storage medium
Wu et al. Deep matching and validation network: An end-to-end solution to constrained image splicing localization and detection
CN111340191B (en) Bot network malicious traffic classification method and system based on ensemble learning
Mahmood et al. Copy‐move forgery detection technique for forensic analysis in digital images
Cozzolino et al. Image forgery detection through residual-based local descriptors and block-matching
Zhang et al. Feature reintegration over differential treatment: A top-down and adaptive fusion network for RGB-D salient object detection
CN107590491B (en) Image processing method and device
US11914639B2 (en) Multimedia resource matching method and apparatus, storage medium, and electronic apparatus
CN109558821B (en) Method for calculating number of clothes of specific character in video
CN108154149B (en) License plate recognition method based on deep learning network sharing
CN109993040A (en) Text recognition method and device
CN102737122B (en) Method for extracting verification code image from webpage
CN110427972B (en) Certificate video feature extraction method and device, computer equipment and storage medium
CN110610230A (en) Station caption detection method and device and readable storage medium
CN108960186B (en) Advertising machine user identification method based on human face
CN107369086A (en) A kind of identity card stamp system and method
CN111091122A (en) Training and detecting method and device for multi-scale feature convolutional neural network
Chen et al. Steganalysis of LSB matching using characteristic function moment of pixel differences
CN110599487A (en) Article detection method, apparatus and storage medium
Lin et al. A traffic sign recognition method based on deep visual feature
CN106599910A (en) Printing file discriminating method based on texture recombination
JP2018013887A (en) Feature selection device, tag relevant area extraction device, method, and program
Khuspe et al. Robust image forgery localization and recognition in copy-move using bag of features and SVM
CN115019052A (en) Image recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant