CN108848068A - APT attack detection method based on deep belief network and support vector data description - Google Patents

APT attack detection method based on deep belief network and support vector data description

Info

Publication number
CN108848068A
CN108848068A (application CN201810526022.XA)
Authority
CN
China
Prior art keywords
data
rbm
formula
feature
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810526022.XA
Other languages
Chinese (zh)
Inventor
Zhang Wenjie (张文杰)
Han Dezhi (韩德志)
Wang Jun (王军)
Bi Kun (毕坤)
Current Assignee
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN201810526022.XA
Publication of CN108848068A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416: Event detection, e.g. attack signature detection
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/145: Network analysis or design involving simulating, designing, planning or modelling of a network

Abstract

The invention discloses an APT attack detection method based on a deep belief network (DBN) and support vector data description (SVDD), in which the DBN performs feature dimensionality reduction and extracts high-quality feature vectors, while SVDD is used for the classification and detection of the data. In the DBN training stage, a standardized data set is obtained and reduced in dimensionality by the DBN model. A low-layer restricted Boltzmann machine (RBM) performs preliminary dimensionality reduction; a higher-layer RBM then receives the simple representations passed up from the low-layer RBM and learns more abstract, complex representations, and the weights are repeatedly adjusted by back propagation (BP) until data with well-extracted features are obtained. The DBN-processed data are then split into a training set and a test set, which are used to train the SVDD and to perform recognition and detection, yielding the final detection results. The method suits unsupervised detection of large-volume attack data with high-dimensional features, is well adapted to APT attack detection, and achieves excellent detection results.

Description

APT attack detection method based on deep belief network and support vector data description
Technical field
The present invention relates to the field of cloud computing security, and in particular to an APT attack detection method based on a deep belief network and support vector data description.
Background technique
Against the background of the rapid, worldwide development of network informatization, advanced persistent threats (APT), which are covert, penetrating, and highly targeted, pose an increasingly serious threat to high-grade information security systems. Organized APT attacks against specific targets are growing in number, and the security of national and enterprise network information systems and data faces severe challenges. For example, in 2008 China's Great Wall network suffered attack and infiltration by U.S. Department of Defense network hackers, who implanted back doors and stole information; in 2010, after years of preparation and dormancy, "Stuxnet" successfully attacked an industrial control system located on a physically isolated intranet and set back Iran's nuclear program; in 2011, the "Night Dragon" operation stole highly sensitive internal files from several multinational energy giants; and in 2012 the "Flame" super-virus successfully obtained large amounts of confidential information from Middle Eastern countries. APT attacks thus gravely endanger all kinds of critical information infrastructure, and work on APT attack defense is very urgent.
In APT defense work, attack detection is the premise and foundation of security protection and hardening, and also the most difficult part of APT defense; detection technology has therefore become a research hotspot in the current APT defense field. Typical cases show, however, that APT attacks are long-term, persistent network attacks whose behavior is concealed within normal behavior and hard to discover. Moreover, traditional attack detection techniques can usually only recognize fairly obvious anomalies or attacks: they cannot obtain ideal results against APT attacks concealed in normal behavior, nor can they effectively process very large data sets. As a result, most traditional techniques cannot detect APT attacks effectively. Existing APT detection schemes fall into three categories: sandboxing, anomaly detection, and full-traffic auditing. Sandboxing mainly addresses the lag of signature matching against novel attacks: real-time traffic is introduced into a sandbox, and the file system, network behavior, processes, and registry are monitored to detect whether malicious code is present. Anomaly detection remedies the shortcomings of signature matching and real-time monitoring by modeling the normal behavior patterns in the network and identifying abnormal behavior. Full-traffic auditing was likewise proposed to remedy the deficiencies of traditional signature matching: it performs deep protocol parsing and application reconstruction on the traffic of a link to identify whether it contains attack behavior. Of these three methods, anomaly detection and full-traffic auditing both require processing massive data, so a model for APT attack detection should be applicable when data volumes are very large.
A deep learning model is a multi-layer computational model that achieves multi-layer representation learning by composing simple nonlinear modules: each layer learns a more abstract, complex representation from the simpler representation of the layer below, identifying complex patterns in massive, high-dimensional raw data. Researchers have therefore begun to explore the application of deep learning to network anomaly detection. Research on APT attack detection with deep learning falls into three main categories: unsupervised learning, supervised learning, and semi-supervised learning. Unsupervised learning learns the intrinsic structure of the data directly, needs no labeled sample data, and avoids large-scale annotation, but its detection rate is lower. Supervised learning requires the training samples to be labeled in advance: a classifier is first trained on a labeled training set and then used to recognize and detect network behavior. Supervised methods often show excellent detection performance, but the need to label samples before retraining makes them impractical for very large data sets. Semi-supervised learning combines the two, mainly performing classification with a small amount of labeled data and a large amount of unlabeled data.
Summary of the invention
The purpose of the present invention is to provide an APT attack detection method based on a deep belief network and support vector data description. The model uses restricted Boltzmann machines (RBM) for structural dimensionality reduction, fine-tunes the structural parameters in reverse with a back propagation (BP) neural network to obtain the optimal representation corresponding to the raw data, and then applies the support vector data description method to the data to perform APT intrusion detection.
In order to achieve the above object, the invention is realized by the following technical scheme:
The APT attack detection method based on deep belief network and support vector data description comprises the following steps:
S1, data collection: network traffic data are obtained with network packet-capture software and used as the data for detecting APT;
S2, feature extraction: the data are converted by a vector space model, turning the problem into one of similarity between vectors; feature extraction is performed by computing the information entropy and the information gain of each word; to make the feature dimensions and value ranges consistent, the features must be further standardized;
S3, DBN training: the designed DBN comprises a low-layer RBM, a high-layer RBM, and a BP neural network. An RBM contains visible units and hidden units: the visible units v represent features and the hidden units h learn to represent features. Connections are established between RBM layers, while units within the same layer are not connected, i.e. there are no visible-visible or hidden-hidden connections. The RBM is trained on the input data by the log-likelihood method to find the value of the parameters θ that minimizes the energy;
S4, dimensionality reduction with the DBN model: the low-layer RBM performs preliminary dimensionality reduction, then the high-layer RBM receives the simple representations passed up from the low-layer RBM and learns more abstract, complex representations; the structural parameters are repeatedly fine-tuned with the BP neural network to extract well-featured data, until a training model of higher accuracy is produced;
S5, SVDD recognition and detection stage: in the high-dimensional feature space, a minimal sphere is sought that encloses as many of the training samples as possible; the DBN-processed data are used to train the SVDD, and the data are classified and described with the decision boundary of this minimal hypersphere;
S6, result verification: judgment is made by the decision function f(x) of the SVDD; when f(x) ≥ 0 the data sample is normal, otherwise it is abnormal and a corresponding alarm is generated. The DBN-SVDD deep-learning detection method thereby realizes APT attack detection.
Wherein, the step S2 specifically includes:
S21, count the total number of documents N in the sample set, and for each word count its positive-document occurrence frequency A, negative-document occurrence frequency B, positive-document non-occurrence frequency C, and negative-document non-occurrence frequency D; compute the chi-square value of each word with the formula
χ²(t) = N(AD − BC)² / [(A + B)(C + D)(A + C)(B + D)]
sort the words by chi-square value from large to small and choose the top K words as features, K being the feature dimensionality;
S22, count the numbers of documents in the positive and negative classes, N1 and N2, and compute the information entropy with the formula
H(C) = −(N1/N)·log2(N1/N) − (N2/N)·log2(N2/N)
then compute the information gain of each word with the formula
IG(t) = H(C) − [P(t)·H(C|t) + P(t̄)·H(C|t̄)]
where t̄ denotes the absence of word t; sort the words by information gain from large to small and choose the top K words as features, K being the feature dimensionality;
S23, because the dimensions and value ranges of the individual features in a feature vector differ, each feature vector needs to be standardized (standardization, i.e. mean removal and variance scaling); after the transformation each feature dimension has zero mean and unit variance, which is the z-score (zero-mean) standardization;
S24, the concrete steps of the z-score standardization are as follows: first perform a linear transformation with the formula y = (x − min)/(max − min); then perform a logarithmic transformation with the formula y = log10(x) and an arc-cotangent transformation with the formula y = arccot(x)·2/π; finally standardize each feature vector with the formula
x* = (x − mean)/√variance
where mean denotes the mean and variance denotes the variance of the feature;
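The formulas of steps S21-S24 can be sketched in Python as follows; this is a minimal illustration, and the function names and toy document counts are our own assumptions, not part of the patent:

```python
import math

def chi_square(A, B, C, D):
    # A/B: positive/negative documents containing the word;
    # C/D: positive/negative documents not containing it (step S21)
    N = A + B + C + D
    denom = (A + B) * (C + D) * (A + C) * (B + D)
    return N * (A * D - B * C) ** 2 / denom if denom else 0.0

def info_gain(A, B, C, D):
    # IG(t) = H(C) - [P(t) H(C|t) + P(not t) H(C|not t)]  (step S22)
    N = A + B + C + D
    def H(ps):
        return -sum(p * math.log2(p) for p in ps if p > 0)
    h_cls = H([(A + C) / N, (B + D) / N])
    h_t = H([A / (A + B), B / (A + B)]) if A + B else 0.0
    h_nt = H([C / (C + D), D / (C + D)]) if C + D else 0.0
    return h_cls - (A + B) / N * h_t - (C + D) / N * h_nt

def zscore(column):
    # x* = (x - mean) / sqrt(variance), the standardization of step S24
    m = sum(column) / len(column)
    var = sum((x - m) ** 2 for x in column) / len(column)
    sd = math.sqrt(var) or 1.0
    return [(x - m) / sd for x in column]

# a word concentrated in one class scores higher than a class-independent one
assert chi_square(40, 5, 10, 45) > chi_square(25, 25, 25, 25) == 0.0
assert info_gain(40, 5, 10, 45) > info_gain(25, 25, 25, 25)
```

In practice the top K words by either score would be kept as the feature dimensions, and zscore applied column-wise to the resulting feature matrix.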
Wherein, the step S3 specifically includes:
S31, in the RBM the visible units v represent the features and the hidden units h learn to represent the features; that is, an input vector v from an input space of dimension n is mapped into a feature space of dimension d = |h|, where p(v, h) is the joint distribution of the hidden and visible vectors;
S32, given a data set D_{m×n} as input, the RBM maps it into X_{m×d}; within an RBM, units of the same layer are not connected (there are no visible-visible or hidden-hidden connections), and the two layers of the graph are connected by a symmetric weight matrix W between pairs of hidden and visible units;
S33, in the original RBM framework the joint distribution p(v, h) of the hidden and visible vectors is defined by an energy function E(v, h). Assuming the input vector is a Gaussian random variable with variance σ, the energy function E(v, h) of this Gaussian-Bernoulli RBM can be expressed as:
E(v, h) = Σ_i (v_i − c_i)²/(2σ_i²) − Σ_j b_j h_j − Σ_{i,j} (v_i/σ_i) w_ij h_j
where v_i and h_j are the i-th and j-th units of the visible layer v and the hidden layer h, connected by the symmetric weights w_ij, with corresponding biases c_i and b_j; the formula for p(v, h) is therefore:
p(v, h) = e^(−E(v, h)) / Z
where Z is the normalization factor known as the partition function, whose calculation formula is:
Z = Σ_{v,h} e^(−E(v, h))
S34, the hidden layer h is binary (the hidden units are Bernoulli random variables), while the input units may be binary or real-valued; the value of a joint configuration is determined relative to the network parameters, whose calculation formula is:
θ = (W, b, c)
where b and c are the biases of the hidden and visible layers respectively. Given the binary hidden units h_j, since the hidden units are not connected to one another, the conditional distribution P(h | v) can be computed directly:
P(h_j = 1 | v) = s(b_j + Σ_i w_ij v_i)
and similarly, since the visible units are not connected:
p(v_i | h) = N(c_i + σ_i Σ_j w_ij h_j, σ_i²)
where s(x) = 1/(1 + e^(−x)) is the logistic sigmoid function and N(μ, σ) denotes the Gaussian distribution with mean μ and variance σ. Training the RBM means finding the value of the parameters θ that minimizes the energy; one possible method is to maximize the log-likelihood of v through its gradient with respect to the model parameters:
∂ log p(v) / ∂w_ij = ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_model
Since exact computation of the second term of this gradient is intractable, the gradient can be estimated with the method known as contrastive divergence (CD); CD approximates the model expectation through k iterations of Gibbs sampling (usually k = 1) to update the network weights:
Δw_ij = η(⟨v_i h_j⟩_0 − ⟨v_i h_j⟩_k)
where ⟨·⟩_k denotes the average over the k-th iteration of contrastive divergence;
S35, after this RBM is trained, another RBM can be stacked on top of the first: the inferred states X_{m×d} of the hidden units serve as the visible units for training the new RBM. The upper layer can be a Bernoulli RBM, whose main differences from the first layer are its binary visible units and its energy function; stacking RBMs allows significant dependencies between the hidden units of the earlier RBM to be modeled;
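The RBM training of steps S33-S34 can be sketched as follows. For brevity this is a Bernoulli-Bernoulli RBM trained with CD-1, a simplification of the Gaussian-Bernoulli RBM described above for real-valued inputs; the class layout, toy patterns, and hyperparameters are illustrative assumptions:

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    """Minimal Bernoulli-Bernoulli RBM trained with one step of CD."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = [[random.gauss(0, 0.1) for _ in range(n_hid)] for _ in range(n_vis)]
        self.b = [0.0] * n_hid   # hidden biases
        self.c = [0.0] * n_vis   # visible biases
        self.lr = lr

    def p_h(self, v):
        # P(h_j = 1 | v) = s(b_j + sum_i v_i w_ij)
        return [sigmoid(self.b[j] + sum(v[i] * self.W[i][j] for i in range(len(v))))
                for j in range(len(self.b))]

    def p_v(self, h):
        # P(v_i = 1 | h) = s(c_i + sum_j h_j w_ij)
        return [sigmoid(self.c[i] + sum(h[j] * self.W[i][j] for j in range(len(h))))
                for i in range(len(self.c))]

    def cd1(self, v0):
        # one Gibbs step v0 -> h0 -> v1 -> h1, then the CD weight update
        ph0 = self.p_h(v0)
        h0 = [1.0 if random.random() < p else 0.0 for p in ph0]
        v1 = self.p_v(h0)
        ph1 = self.p_h(v1)
        for i in range(len(v0)):
            for j in range(len(ph0)):
                # dW proportional to <v h>_data - <v h>_recon
                self.W[i][j] += self.lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
        for j in range(len(ph0)):
            self.b[j] += self.lr * (ph0[j] - ph1[j])
        for i in range(len(v0)):
            self.c[i] += self.lr * (v0[i] - v1[i])
        return sum((a - b) ** 2 for a, b in zip(v0, v1))  # reconstruction error

rbm = RBM(6, 3)
data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]
errs = [sum(rbm.cd1(v) for v in data) for _ in range(200)]
assert sum(errs[-20:]) < sum(errs[:20])  # reconstruction error shrinks
```

The hidden probabilities p_h(v) would then serve as the visible data of the next stacked RBM, as in step S35.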
Wherein, the step S4 specifically includes:
S41, multiple RBMs are stacked to generate nonlinear feature detectors at different layers, representing progressively more complex statistical structure in the data; in the RBM stack, the bottom-up recognition weights of the resulting DBN are used to initialize the weights of a multi-layer BP neural network;
S42, the feature parameters are fine-tuned through back propagation of the BP neural network in the DBN model. The BP algorithm proceeds as follows:
Input: training samples v_i (i = 1, 2, …, m);
Output: fine-tuned model parameters θ = {W, b, c};
For each training sample v_i, compute the actual output v_i′ of the output node; compute the error gradient between the actual output and the ideal output v_i with the formula δ_k = v_i′(1 − v_i′)(v_i − v_i′); compute the error gradient of each hidden-layer unit h with the formula δ_h = O_h(1 − O_h) Σ_k θ_hk δ_k, where θ_hk is the connection weight from node h to the downstream node k and δ_k is the error gradient of node k computed through the excitation function; compute the weight update with the formula θ_ij = θ_ij + Δθ_ij, Δθ_ij = η O_i δ_j, where η is the learning rate (to be determined by experiment), O_i is the output of node i, O_j is the output of node j, and δ_j is the recursive error gradient of node j. Repeat the above steps until the output error (expressed as a variance) is sufficiently small, i.e.
E = (1/2) Σ_s Σ_z (d_sz − O_sz)²
where s indexes the training samples, z indexes the output nodes, d_sz is the ideal output of sample s at node z, and O_sz is the actual output of sample s at node z;
At this point, the model-training link can be fed back by adjusting the feature parameters until an APT attack detection method that meets the accuracy requirement is produced;
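The BP fine-tuning loop of step S42 can be sketched with a single hidden layer and the delta-rule updates given above; the tiny network, its initial weights, and the learning rate are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(x, t, W1, W2, eta=0.5):
    # forward pass: input -> hidden -> output
    h = [sigmoid(sum(xi * w for xi, w in zip(x, row))) for row in W1]
    o = [sigmoid(sum(hi * w for hi, w in zip(h, row))) for row in W2]
    # output-layer gradient: delta_k = o_k (1 - o_k)(t_k - o_k)
    d_out = [ok * (1 - ok) * (tk - ok) for ok, tk in zip(o, t)]
    # hidden-layer gradient: delta_h = O_h (1 - O_h) sum_k theta_hk delta_k
    d_hid = [h[j] * (1 - h[j]) * sum(W2[k][j] * d_out[k] for k in range(len(o)))
             for j in range(len(h))]
    # weight updates: delta_theta = eta * O_i * delta_j
    for k in range(len(o)):
        for j in range(len(h)):
            W2[k][j] += eta * h[j] * d_out[k]
    for j in range(len(h)):
        for i in range(len(x)):
            W1[j][i] += eta * x[i] * d_hid[j]
    return sum((tk - ok) ** 2 for tk, ok in zip(t, o))  # squared output error

W1 = [[0.2, -0.1], [0.4, 0.3]]   # 2 inputs -> 2 hidden units
W2 = [[0.1, -0.2]]               # 2 hidden units -> 1 output
errs = [backprop_step([1.0, 0.0], [1.0], W1, W2) for _ in range(50)]
assert errs[-1] < errs[0]  # error shrinks as the weights are fine-tuned
```

In the DBN, W1 and W2 would be initialized from the recognition weights learned by the stacked RBMs rather than by hand, as step S41 describes.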
Wherein, the step S5 specifically includes:
S51, description of the SVDD algorithm. First, define a training sample set X = {x_1, …, x_n}, where each x_i ∈ R^d (1 ≤ i ≤ n) is a column vector. The idea of SVDD is to find a hypersphere in the sample feature space, denoted B(c, r), where c and r are the center and radius of the hypersphere, that envelops the target samples as tightly as possible, i.e. with the smallest possible volume. The problem is therefore described as:
min_{r,c,ξ} r² + C Σ_i ξ_i
s.t. ||x_i − c||² ≤ r² + ξ_i, ξ_i ≥ 0, 1 ≤ i ≤ n (1)
where the ξ_i are the introduced slack terms: when x_i lies inside or on the hypersphere B(c, r), ξ_i = 0, otherwise ξ_i > 0; C > 0 is a control parameter that balances the number of misclassified training samples (samples outside the ball) against the size of r, and is tuned according to the practical application;
A kernel function, the so-called kernel trick, is usually introduced to improve the adaptability of the algorithm. In essence, a suitable mapping φ is sought that maps the sample feature space R^d into a higher-dimensional feature space H, i.e. φ: R^d → H. Given a positive-definite kernel function k: R^d × R^d → R, with the inner product k(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩ induced by this kernel, the SVDD model in the space H is
min_{r,c,ξ} r² + C Σ_i ξ_i
s.t. ||φ(x_i) − c||² ≤ r² + ξ_i, ξ_i ≥ 0, 1 ≤ i ≤ n (2)
The hypersphere corresponding to the above model is denoted B(c, r) in H. Construct the Lagrangian of the model:
L(r, c, ξ, α, β) = r² + C Σ_i ξ_i − Σ_i α_i (r² + ξ_i − ||φ(x_i) − c||²) − Σ_i β_i ξ_i (3)
where α = (α_1, …, α_n)^T ≥ 0 and β = (β_1, …, β_n)^T ≥ 0 are Lagrange multiplier vectors. Setting the partial derivatives of formula (3) with respect to the primal variables r, c, and ξ_i to 0 yields:
Σ_i α_i = 1 (4)
c = Σ_i α_i φ(x_i) (5)
α_i = C − β_i (6)
From formula (6), α_i = C − β_i; when 0 ≤ α_i ≤ C, β_i ≥ 0 necessarily holds, so this constraint can be omitted. Substituting formulas (4)-(6) into formula (3) gives the dual form of the primal problem:
max_α Σ_i α_i k(x_i, x_i) − Σ_{i,j} α_i α_j k(x_i, x_j)
s.t. Σ_i α_i = 1, 0 ≤ α_i ≤ C (7)
S52, the DBN-processed data are trained and classified with SVDD. By the KKT (Karush-Kuhn-Tucker) theorem, when a Lagrange multiplier is greater than 0 the corresponding inequality constraint in formula (2) becomes an equality: when β_i > 0 (so that α_i < C), ξ_i = 0; when α_i > 0, ||φ(x_i) − c||² = r² + ξ_i. The following conclusions then hold: (a) when α_i = 0, φ(x_i) is inside the sphere; (b) when 0 < α_i < C, φ(x_i) is on the sphere; (c) when α_i = C, φ(x_i) is outside the sphere. From conclusion (b) together with formula (5):
r² = k(x_k, x_k) − 2 Σ_i α_i k(x_i, x_k) + Σ_{i,j} α_i α_j k(x_i, x_j) (8)
where x_k is any sample whose Lagrange multiplier satisfies 0 < α_k < C. Given an unknown sample x ∈ R^d,
Wherein, the step S6 specifically includes:
decision is made by the following function:
f(x) = r² − ||φ(x) − c||² = r² − k(x, x) + 2 Σ_i α_i k(x_i, x) − Σ_{i,j} α_i α_j k(x_i, x_j) (9)
After a kernel function is fixed, for example the Gaussian kernel k(x_i, x_j) = exp(−||x_i − x_j||² / h), where h is the bandwidth parameter of the Gaussian kernel and ||·|| is the Euclidean 2-norm, the term k(x, x) = 1 is constant and formula (9) can be reduced to
f(x) = 2 Σ_i α_i k(x_i, x) − ρ
where
ρ = 1 − r² + Σ_{i,j} α_i α_j k(x_i, x_j)
is a computable constant. When f(x) ≥ 0 the sample is a target (normal) sample, otherwise it is an exceptional sample; a data set can therefore be recognized and detected on the basis of this SVDD algorithm.
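A minimal sketch of the SVDD sphere of formula (1) and the decision of formula (9) in the special hard-margin, linear-kernel case (φ the identity map, C effectively infinite): the sphere B(c, r) then becomes the minimum enclosing ball of the training samples, which the simple Badoiu-Clarkson iteration below approximates in place of the dual optimization of formula (7). The iteration and the toy data are our own assumptions, not part of the patent:

```python
import math

def minimum_enclosing_ball(points, iters=2000):
    """Approximate the SVDD sphere B(c, r) for the hard-margin,
    linear-kernel case by repeatedly stepping toward the farthest point."""
    c = list(points[0])
    for t in range(1, iters + 1):
        # farthest training sample from the current center
        far = max(points, key=lambda p: sum((pi - ci) ** 2 for pi, ci in zip(p, c)))
        c = [ci + (fi - ci) / (t + 1) for ci, fi in zip(c, far)]
    r = max(math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(p, c))) for p in points)
    return c, r

def decide(x, c, r):
    # f(x) = r^2 - ||x - c||^2; f(x) >= 0 means normal, else abnormal (step S6)
    return r ** 2 - sum((xi - ci) ** 2 for xi, ci in zip(x, c))

train = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
c, r = minimum_enclosing_ball(train)
assert decide([0.5, 0.5], c, r) >= 0   # inside the sphere: normal traffic
assert decide([5.0, 5.0], c, r) < 0    # far outside: flagged as an attack
```

With a Gaussian kernel the same decide function applies, with r² and the distance to c evaluated through kernel values as in formulas (8) and (9).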
Compared with the prior art, the present invention has the following advantages:
1. Compared with other deep learning models, the present invention designs an attack detection method based on a deep belief network and support vector data description. The model is an unsupervised machine-learning model that needs no large quantity of labeled samples and is suitable for detecting massive data records.
2. Higher-dimensional feature selection: characteristic parameters are extracted on the basis of big data, and sufficiently rich parameters can be drawn from the spatial dimensions, behavioral dimensions, and so on of the modeled object, so that any describable APT attack is always reflected in an anomaly of some group of characteristic parameters; under big data the attacker truly has nowhere to escape, and the method has an outstanding dimensionality-reduction effect.
3. More complete model training: the mass storage capacity of a big-data platform can hold enough historical traffic data as samples to fully train the extracted parameters and the model, so that the model detects abnormal behavior accurately enough; its running time is shorter than that of supervised learning while its detection is stronger than that of unsupervised learning, and the combination of detection rate and running time gives outstanding application performance.
4. Finer model granularity: unlike traditional approaches that take a security domain as the modeling object, big-data-based APT anomaly detection can build fine-grained models per host, or even per application on a host, which gives the model enough sensitivity in detecting anomalies. The computational cost of doing so is very large: for a small enterprise with only a few thousand hosts, modeling the connections between hosts yields models on the order of millions, which was hard to imagine in the past, but the performance of today's big-data platforms is sufficient to support computation at this scale.
Description of the drawings
Fig. 1 is a flow chart of the APT attack detection method based on the DBN-SVDD model of the present invention;
Fig. 2 is a structure diagram of the DBN of the invention;
Fig. 3 is a structure diagram of the SVDD of the invention;
Fig. 4 is a schematic diagram of the functional modules of a system applying the method of the invention in the embodiment.
Specific embodiment
The present invention is further elaborated below through a preferred specific embodiment, described step by step in conjunction with the drawings.
As shown in Figure 1, based on deepness belief network-Support Vector data description APT attack detection method, comprising with Lower step:
S1, data are collected, network traffic data information is obtained using network flow packet capturing software, as detection APT's Data;
The feature extraction of S2, data, data are converted by vector space model, and the similitude being converted between vector is asked Topic, the information gain by calculating comentropy and each word can carry out feature extraction, to keep feature dimension identical and value model It encloses identical, need to further standardize;
S3, DBN train neural network, and the DBN of design includes the RBM of low layer, high-rise RBM and BP neural network, It include visible element, hidden unit in RBM, it is seen that unit v is to indicate feature, and hidden unit h is that study indicates feature, is established Connection between RBM layers does not connect between the unit of same levels, i.e., visible-visible or hiding-to hide connection, to RBM Input data is trained it, trains RBM using the method for log-likelihood, to find the value of parameter θ, so that energy is most Smallization;
S4, using DBN model to Data Dimensionality Reduction, low layer RBM, which carries out reusing high-rise RBM after preliminary dimensionality reduction, to be received from low Learn the characterization of more abstract complexity in the simple characterization that layer RBM is transmitted, and use BP neural network fine tuning structure parameter repeatedly, The outstanding data of feature are extracted, until generating the higher training pattern of accuracy;
S5, SVDD model recognition detection stage look for one as far as possible for all training samples in high-dimensional feature space DBN treated data are trained study with SVDD by the minimum sphere being all surrounded, and with the minimal hyper-sphere Decision boundary is classified and is described to data;
S6, result verification are judged by decision function f (x) in SVDD, and as f (x) >=0, which is normal Otherwise data sample is abnormal data sample, and generate corresponding warning, therefore by being detected based on DBN-SVDD deep learning The attack detecting of APT can be realized in method.
Wherein, the step S2 specifically includes:
S21, statistical sample concentrate total number of documents N, count the positive document frequency of occurrences A of each word, the negative document frequency of occurrences B, positive document not frequency of occurrences C, negative document not frequency of occurrences D, calculates the chi-square value of each word, formula isEach word is sorted from large to small by chi-square value, chooses K word as feature, K, that is, intrinsic dimensionality;
S22, the number of files for counting positive and negative classification:N1, N2, calculate comentropy, and formula isThe information gain of each word is calculated, Formula isEach word is pressed Information gain value sorts from large to small, and chooses K word as feature, K, that is, intrinsic dimensionality;
S23, because each feature dimension is different in feature vector, value range is different, so needing to each feature Vector is standardized (Standardization or Mean Removal and Variance Scalling), after transformation Each dimensional feature has 0 mean value, and unit variance also makes z-score standardize (zero-mean standardization);
The concrete operation step of S24, z-score standardization is as follows:Use formula Linear function conversion is carried out, formula y=log is then used10(x) logarithmic function conversion is carried out, formula is used The conversion of arc cotangent function is carried out, finally each feature vector is standardized, formula isWherein, Means indicates mean value, and variance indicates variance;
Wherein, the step S3 specifically includes:
S31, visible element v is to indicate feature in RBM, and hidden unit h is that study indicates feature, i.e., it will be from dimension To be mapped to dimension be d=to the input vector v for the input space that degree be n | h | feature space in, wherein p (v, h) be hide with It can be seen that the Joint Distribution of vector;
S32, data-oriented collection Dm×nAs input, RBM maps that Xm×dIn, in RBM, the unit of same levels Between do not connect, i.e., it is visible-visible or hiding-to hide connection, and two layers of figure with hiding between visible element pair Symmetrical weight W connection;
S33, in original RBM framework, hide and the Joint Distribution p (v, h) of visible vector be according to energy function E (v, h) is defined, it is assumed that input vector is the Gaussian random variable with variances sigma, then the energy letter of the Gauss Bernoulli Jacob RBM Number E (v, h) can be expressed as:
Wherein viAnd hjIt is with w respectivelyi,jVisible v layers of symmetrical weight and hiding h layers of i-th and j-th of unit, And corresponding deviation ciAnd bj, therefore the formula of p (v, h) is as follows:
Wherein, Z is the normalization factor of referred to as partition functions, and calculation formula is:
Z=∑v,he-E(v,h)
S34, hidden layer h are binary, and hidden unit is Bernoulli random variable, and input unit can be Binary system or real value, be coupled configuration value determined by the value relative to network parameter its calculation formula is:
θ=(W, b, c)
Wherein, b and c is the deviation to hidden layer and visible layer respectively, gives binary hidden unit hj, single due to hiding It is not connected between member, it is possible to directly design conditions distribution P (h | v):
And similarly, due to not connected between visible element,:
Wherein,
It is that logic S type function and N (μ, σ) indicate the Gaussian Profile with mean μ and variances sigma, RBM is trained to mean to look for To the value of parameter θ, so that energy minimizes, a kind of possible method is intended to maximize by its gradient relative to model parameter The log-likelihood of the v of estimation:
Since the accurately calculating for Section 2 of log-likelihood is reluctant, it is possible to be distributed using comparison is known as The method of (CD) is spent to estimate gradient, and CD is approximately through the expectation of the gibbs sampler (usual k=1) of K iteration to update net Network weight:
Wherein <.> I indicates the average value of comparison degree of distributing iteration I;
S35, after the training RBM, another RBM can be stacked on the top of first RBM, i.e. hidden unit pushes away Disconnected state Xm×dAs the visible unit for training new RBM, upper layer can be Bernoulli Jacob RBM, wherein the main region with first layer It is not the visible unit of binary and energy function, stacks RBM and allow one between the hidden unit of modeling early stage RBM Significant dependent form;
Wherein, the step S4 specifically includes:
S41, stacked multilayer RBM indicate gradually more complicated in data to generate the nonlinear characteristic detectors of different layers Statistical framework, in RBM storehouse, the bottom-up identification weight of gained DBN is used to initialize the power of Multi-layer BP Neural Network Weight;
S42, characteristic parameter fine tuning is carried out with the backpropagation of the BP neural network in DBN model.The reality of BP algorithm Existing process is as follows:
Input:Training sample is vi, (i=1,2 ..., m);
Output:Model parameter θ={ W, b, c } after fine tuning;
For each training sample vi, calculate the reality output v of output nodei′;Calculate output node reality output with Ideal output (vi) error gradient, formula δk=vi′(1-vi′)(vi-vi′);The error gradient for hiding layer unit h is calculated, Formula isWherein, θhkFor the continuous weight of node h to subsequent node k, δkFor node k according to The output that excitation function is calculated;Calculate right value update, formula θijij+Δθij;Δθij=η Oiδj, wherein η is to learn Habit rate needs to determine by specific experiment, OiFor the output of node i, OjFor the output of node j, δjFor the recursive errors of node j Gradient;Above step is repeated, until output error (being indicated with variance) is sufficiently small, i.e.,: Wherein, s is training sample sequence, and z is output node sequence, dszIdeal output for sample s in node z, OszExist for sample s The reality output of node z;
At this point, results can be fed back to the model-training stage by adjusting the feature parameters, until an APT attack detection method meeting the accuracy requirement is produced;
Wherein, the step S5 specifically includes:
S51, description of the SVDD algorithm. First, define a training sample set X = {x1, …, xn}, where the x_i ∈ R^d (1 ≤ i ≤ n) are column vectors. The idea of SVDD is to find a hypersphere in the sample feature space, denoted B(c, r), where c and r are the center and radius of the hypersphere, that envelops the target samples as tightly as possible, i.e. the volume of the hypersphere is as small as possible. The optimization problem is therefore described as:
min_{c, r, ξ} r² + C Σ_{i=1}^{n} ξ_i   s.t. ||x_i − c||² ≤ r² + ξ_i, ξ_i ≥ 0, 1 ≤ i ≤ n (1)
Wherein, ξ_i is an introduced slack term: when x_i lies inside or on the hypersphere B(c, r), ξ_i = 0, otherwise ξ_i > 0; C > 0 is a control parameter that trades off the number of misclassified training samples (samples outside the sphere) against the size of r, and is adjusted according to the practical application;
A kernel function, the so-called kernel trick, is usually introduced to improve the adaptability of the algorithm; essentially, a suitable mapping φ is sought that maps the sample feature space R^d into a higher-dimensional feature space F, i.e. φ: R^d → F. Given a positive-definite kernel k: R^d × R^d → R and the inner product it induces, ⟨φ(x_i), φ(x_j)⟩ = k(x_i, x_j), the SVDD model in the feature space F is
The hypersphere corresponding to the above model (in the feature space) is likewise denoted B(c, r); the Lagrangian of the model is constructed as:
Wherein, α = (α1, …, αn)^T ≥ 0 and β = (β1, …, βn)^T ≥ 0 are Lagrange multiplier vectors; taking the partial derivatives of formula (3) with respect to the primal variables r, c, and ξ_i and setting them to 0 gives:
From formula (6), α_i = C − β_i; when 0 ≤ α_i ≤ C, the condition β_i ≥ 0 necessarily holds, so this constraint can be omitted; substituting formulas (4)–(6) into formula (3) then yields the dual form of the primal problem:
S52, the DBN-processed data are trained and classified with SVDD. By the KKT (Karush–Kuhn–Tucker) theorem, when a Lagrange multiplier is > 0, the corresponding inequality constraint in formula (2) becomes an equality, i.e.: when β_i > 0 (and hence α_i < C), ξ_i = 0; when α_i > 0, the sample lies exactly on the slack-adjusted boundary. The following conclusions can then be drawn: (1) when α_i = 0, φ(x_i) lies inside the sphere; (2) when 0 < α_i < C, φ(x_i) lies on the sphere surface; (3) when α_i = C, φ(x_i) lies outside the sphere. From conclusion (2) and formula (5) one obtains:
Wherein, x_k is any support vector whose Lagrange multiplier satisfies 0 < α_k < C; given an unknown sample x ∈ R^d,
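As an illustration of the dual problem and the KKT conclusions, the sketch below solves the SVDD dual by projected gradient ascent on toy 2-D data, with a Gaussian kernel and C = 1 (so the box constraint is implied by Σα_i = 1); this is a didactic stand-in for the patent's solver, and all settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_kernel(A, B, h=1.0):
    # k(x, y) = exp(-||x - y||^2 / h)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h)

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

X = rng.normal(0, 1, (40, 2))            # toy target samples
K = gaussian_kernel(X, X)
alpha = np.full(len(X), 1.0 / len(X))    # feasible starting point

# maximize sum_i a_i K_ii - sum_ij a_i a_j K_ij subject to a in the simplex
for _ in range(500):
    grad = np.diag(K) - 2.0 * K @ alpha
    alpha = project_simplex(alpha + 0.05 * grad)

# squared distance of each training point to the center c = sum_i a_i phi(x_i)
dist2 = np.diag(K) - 2.0 * K @ alpha + alpha @ K @ alpha
# radius taken from a boundary support vector, per KKT conclusion (2)
sv = alpha > 1e-6
r2 = dist2[sv].max()
```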
Wherein, the step S6 specifically includes:
Decision is carried out by the following function:
Given a kernel function, for example the Gaussian kernel k(x_i, x_j) = exp(−||x_i − x_j||²/h) (h is the bandwidth parameter of the Gaussian kernel, ||·|| is the Euclidean 2-norm), formula (9) can be simplified to
Wherein
is a computable constant; when f(x) ≥ 0 the sample is a target sample, otherwise it is an anomalous sample; the data set can therefore be recognized and detected on the basis of this SVDD algorithm.
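The simplified Gaussian-kernel decision rule can be demonstrated directly: since k(x, x) = 1, testing f(x) ≥ 0 reduces to comparing the kernel expansion Σ_i α_i k(x_i, x) against a constant. For brevity this sketch uses uniform weights α_i = 1/n (a feasible point of the dual, not the trained solution) and a threshold taken from the training set; the data and bandwidth are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def k(a, b, h=1.0):
    # Gaussian kernel on input vectors
    return np.exp(-np.sum((a - b) ** 2) / h)

X = rng.normal(0, 0.5, (30, 2))          # target ("normal") samples
alpha = np.full(len(X), 1.0 / len(X))    # uniform weights for brevity

# kernel expansion s(x) = sum_i alpha_i k(x_i, x)
def s(x):
    return sum(a * k(xi, x) for a, xi in zip(alpha, X))

rho = min(s(xi) for xi in X)             # boundary value from the training set

def f(x):
    return s(x) - rho                    # >= 0 -> target sample, < 0 -> anomaly

inlier = np.array([0.0, 0.0])
outlier = np.array([5.0, 5.0])
```

A point near the training data scores above the threshold, while a far-away point's kernel values vanish and it is flagged as anomalous.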
As shown in figure 4, this model is proposed from the principle of anomaly detection for APT attacks. First, targeted features are extracted through analysis and summarization of APT attack methods; then, based on the training data, the unsupervised DBN algorithm performs preliminary dimensionality reduction and classification, after which the SVDD algorithm performs further training on top of the DBN; finally, the validity of the model is verified with test data or real data. Erroneous detection results can be fed back to the model-training stage by adjusting the feature parameters, until a model meeting the accuracy requirement is produced. Although this principle is not essentially different from traditional anomaly detection, anomaly detection based on big data has the following features.
(1) Finer model granularity: unlike traditional approaches that take the security domain as the modeling object, big-data-based APT anomaly detection can establish fine-grained models for a single host, or even a single application on a host, which gives the model sufficient sensitivity to anomalies. The computational cost of doing so is very large: for a small enterprise with only a few thousand hosts, per-host connection modeling yields models on the order of millions, which was once unimaginable, but the performance of current big-data platforms is sufficient to support computation at this scale.
(2) Higher-dimensional feature selection: extracting feature parameters on big data makes it possible to draw sufficiently rich features from the spatial and behavioral dimensions of the modeled objects, so that any describable APT attack always manifests as an anomaly in some set of feature parameters; under big data, attackers truly have nowhere to hide.
(3) More thorough model training: the massive storage capacity of big-data platforms allows enough historical traffic data to be stored as samples to fully train the extracted parameters and models, so that the model can accurately detect abnormal behaviors.
It can be seen that implementing network anomalous-behavior detection on big data overcomes the shortcomings of early APT anomaly detection techniques and brings a qualitative leap to detection technology; at the same time, however, it poses new challenges to storage and computing capacity, and requires a technology platform that can provide effective support.
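The end-to-end pipeline described above (standardize → DBN dimensionality reduction → SVDD sphere → decision) can be sketched with stand-ins for the learned components; here a fixed random projection replaces the trained DBN and a centroid-plus-radius sphere replaces the true minimal hypersphere, both labeled as such in the comments, so that only the pipeline shape is illustrated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stage 1 (step S2): z-score standardization of raw traffic features
def zscore(X, mean, std):
    return (X - mean) / std

# Stage 2 (step S4): stand-in for DBN dimensionality reduction -- a fixed
# random linear projection is used purely so the pipeline runs end to end
W = rng.normal(0, 1, (10, 3))
def reduce_dim(X):
    return X @ W

# Stage 3 (step S5): stand-in for the SVDD boundary -- a centroid plus a
# radius covering the training data, not the true minimal hypersphere
def fit_sphere(Z):
    c = Z.mean(axis=0)
    r2 = ((Z - c) ** 2).sum(axis=1).max()
    return c, r2

# Stage 4 (step S6): decision -- inside the sphere means "normal"
def detect(x_raw, mean, std, c, r2):
    z = reduce_dim(zscore(x_raw, mean, std))
    return ((z - c) ** 2).sum() <= r2

train = rng.normal(0, 1, (200, 10))        # toy "normal traffic" features
mean, std = train.mean(axis=0), train.std(axis=0)
c, r2 = fit_sphere(reduce_dim(zscore(train, mean, std)))

is_normal = detect(train[0], mean, std, c, r2)
is_attack = not detect(np.full(10, 100.0), mean, std, c, r2)
```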
Although the contents of the present invention have been described in detail through the preferred embodiments above, it should be understood that the above description is not to be regarded as limiting the present invention. After those skilled in the art have read the above, various modifications and substitutions of the present invention will be apparent. Therefore, the protection scope of the present invention should be defined by the appended claims.

Claims (6)

1. An APT attack detection method based on deep belief network and support vector data description, characterized by comprising the following steps:
S1, data collection: network traffic data information is obtained with packet-capture software as the data for detecting APTs;
S2, feature extraction of the data: the data are converted through a vector space model into a similarity problem between vectors; feature extraction is carried out by computing the information entropy and the information gain of each word; to keep the feature dimensions and value ranges consistent, the data are further standardized;
S3, DBN training of the neural network: the designed DBN comprises low-layer RBMs, a high-layer RBM, and a BP neural network; an RBM contains visible units and hidden units, the visible units v representing features and the hidden units h learning to represent features; connections are established between the RBM layers, while there are no connections among units within the same layer, i.e. no visible–visible or hidden–hidden connections; the RBM is trained on the input data using the log-likelihood method to find the value of the parameters θ that minimizes the energy;
S4, dimensionality reduction of the data with the DBN model: after the low-layer RBM performs preliminary dimensionality reduction, the high-layer RBM learns more abstract and complex representations from the simple representations passed up by the low-layer RBM; the structural parameters are repeatedly fine-tuned with the BP neural network to extract data with salient features, until a training model of higher accuracy is produced;
S5, SVDD model recognition-detection stage: a minimal sphere enclosing as many of the training samples as possible is sought in the high-dimensional feature space; the DBN-processed data are trained with SVDD, and the data are classified and described by the decision boundary of this minimal hypersphere;
S6, result verification: a judgment is made through the decision function f(x) in SVDD; when f(x) ≥ 0 the sample is a normal data sample, otherwise it is an abnormal data sample and a corresponding alert is generated; APT attack detection is thereby realized through the DBN-SVDD deep-learning-based detection method.
2. The APT attack detection method based on deep belief network and support vector data description according to claim 1, characterized in that the step S2 specifically comprises:
S21, count the total number of documents N in the sample data; for each word, count its positive-document occurrence frequency A, negative-document occurrence frequency B, positive-document non-occurrence frequency C, and negative-document non-occurrence frequency D; compute the chi-square value of each word as χ² = N(AD − BC)² / ((A + B)(C + D)(A + C)(B + D)); finally, sort the words by chi-square value in descending order and select the top K words as features, K being the feature dimension;
S22, count the numbers of documents of the positive and negative classes, N1 and N2; then compute the information entropy H = −(N1/N)log(N1/N) − (N2/N)log(N2/N) and the information gain of each word; sort the words by information gain in descending order and select the top K words as features, K being the feature dimension;
S23, the concrete operation steps of z-score standardization are as follows: perform a linear-function transform y = (x − min)/(max − min); then perform a logarithmic transform y = log10(x) and an arc-cotangent transform; finally, standardize each feature vector as y = (x − mean)/√variance, where mean denotes the mean and variance denotes the variance.
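The chi-square ranking of S21 and the z-score of S23 can be sketched as follows; the word counts and K are toy assumptions, while the formulas are the standard chi-square statistic and z-score stated above:

```python
import math

def chi_square(A, B, C, D):
    """Chi-square score of a word from its document counts:
    A/B: positive/negative documents containing the word,
    C/D: positive/negative documents not containing it."""
    N = A + B + C + D
    num = N * (A * D - B * C) ** 2
    den = (A + B) * (C + D) * (A + C) * (B + D)
    return num / den if den else 0.0

# toy counts (A, B, C, D) for three candidate words
counts = {
    "login": (30, 5, 10, 55),
    "the": (20, 20, 20, 40),
    "exploit": (2, 40, 38, 20),
}
K = 2  # feature dimension
ranked = sorted(counts, key=lambda w: chi_square(*counts[w]), reverse=True)
features = ranked[:K]  # top-K words by chi-square

def z_score(xs):
    """z-score standardization of one feature vector."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var) for x in xs]

standardized = z_score([1.0, 2.0, 3.0, 4.0])
```

The discriminative words ("login", "exploit") rank above the uninformative "the", and the standardized vector has zero mean.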
3. The APT attack detection method based on deep belief network and support vector data description according to claim 1, characterized in that the step S3 specifically comprises:
S31, in the RBM the visible units v represent features and the hidden units h learn to represent features, i.e. an input vector v from the input space of dimension n is mapped into a feature space of dimension d = |h|, where p(v, h) is the joint distribution of the hidden and visible vectors;
S32, given a data set D_{m×n} as input, the RBM maps it into X_{m×d}; in an RBM there are no connections among units within the same layer, i.e. no visible–visible or hidden–hidden connections, and the hidden–visible unit pairs of the two layers are connected by symmetric weights W;
S33, in the original RBM framework, the joint distribution p(v, h) of the hidden and visible vectors is defined according to an energy function E(v, h); assuming the input vector is a Gaussian random variable with variance σ, the Gauss–Bernoulli RBM energy function E(v, h) can be expressed as:
E(v, h) = Σ_i (v_i − c_i)²/(2σ²) − Σ_{i,j} w_{i,j} h_j v_i/σ − Σ_j b_j h_j
where v_i and h_j are the ith and jth units of the visible layer v and hidden layer h, connected by the symmetric weight w_{i,j}, with corresponding biases c_i and b_j; the formula for p(v, h) is therefore:
p(v, h) = e^{−E(v,h)} / Z
where Z is the normalization factor known as the partition function, computed as:
Z = Σ_{v,h} e^{−E(v,h)}
S34, the hidden layer h is binary and the hidden units are Bernoulli random variables, while the input units can be binary or real-valued; the value of a joint configuration is determined relative to the network parameters, which are:
θ = (W, b, c)
where b and c are the biases of the hidden layer and the visible layer respectively; for binary hidden units h_j, since the hidden units are not connected to each other, the conditional distribution P(h|v) can be computed directly:
P(h_j = 1 | v) = sigm(b_j + Σ_i w_{i,j} v_i/σ)
and similarly, since the visible units are not connected to each other:
P(v_i | h) = N(c_i + σ Σ_j w_{i,j} h_j, σ²)
where sigm(·) is the logistic sigmoid function and N(μ, σ) denotes the Gaussian distribution with mean μ and variance σ; training the RBM means finding the value of the parameters θ that minimizes the energy; one possible method is to maximize the log-likelihood of v via its gradient with respect to the model parameters:
∂log p(v)/∂w_{i,j} = ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_model
Since exact computation of the second term of the log-likelihood is intractable, the gradient can be estimated with a method known as contrastive divergence, which updates the network weights approximately through the expectation of K iterations of Gibbs sampling (usually k = 1):
Δw_{i,j} = ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_I
where ⟨·⟩_I denotes the average over I iterations of contrastive divergence;
4. The APT attack detection method based on deep belief network and support vector data description according to claim 1, characterized in that the step S4 specifically comprises:
S41, stacking multiple RBMs produces nonlinear feature detectors at different layers that represent progressively more complex statistical structure in the data; in the RBM stack, the bottom-up recognition weights of the resulting DBN are used to initialize the weights of the multi-layer BP neural network;
S42, the feature parameters are fine-tuned through the backpropagation of the BP neural network in the DBN model; the BP algorithm is implemented as follows:
Input: training samples v_i (i = 1, 2, …, m);
Output: fine-tuned model parameters θ = {W, b, c};
For each training sample v_i, compute the actual output v_i′ of the output node; compute the error gradient between the actual output and the ideal output v_i as δ_k = v_i′(1 − v_i′)(v_i − v_i′); compute the error gradient of hidden-layer unit h as δ_h = O_h(1 − O_h)Σ_k θ_hk δ_k, where θ_hk is the connection weight from node h to downstream node k and δ_k is the gradient of node k computed through the activation function; update the weights as θ_ij = θ_ij + Δθ_ij with Δθ_ij = η O_i δ_j, where η is the learning rate (determined experimentally), O_i is the output of node i, O_j is the output of node j, and δ_j is the back-propagated error gradient of node j; repeat the above steps until the output error (expressed as a variance) is sufficiently small, i.e. E = ½ Σ_s Σ_z (d_sz − O_sz)², where s indexes training samples, z indexes output nodes, d_sz is the ideal output of sample s at node z, and O_sz is the actual output of sample s at node z.
5. The APT attack detection method based on deep belief network and support vector data description according to claim 1, characterized in that the step S5 specifically comprises:
S51, description of the SVDD algorithm: define a training sample set X = {x1, …, xn}, where the x_i ∈ R^d (1 ≤ i ≤ n) are column vectors; the idea of SVDD is to find a hypersphere in the sample feature space, denoted B(c, r), where c and r are the center and radius of the hypersphere, that envelops the target samples as tightly as possible, i.e. the volume of the hypersphere is as small as possible; the optimization problem is therefore described as:
min_{c, r, ξ} r² + C Σ_{i=1}^{n} ξ_i   s.t. ||x_i − c||² ≤ r² + ξ_i, ξ_i ≥ 0, 1 ≤ i ≤ n (1)
where ξ_i is an introduced slack term: when x_i lies inside or on the hypersphere B(c, r), ξ_i = 0, otherwise ξ_i > 0; C > 0 is a control parameter that trades off the number of misclassified training samples (samples outside the sphere) against the size of r, and is adjusted according to the practical application;
a kernel function, the so-called kernel trick, is usually introduced to improve the adaptability of the algorithm; essentially, a suitable mapping φ is sought that maps the sample feature space R^d into a higher-dimensional feature space F, i.e. φ: R^d → F; given a positive-definite kernel k: R^d × R^d → R and the inner product it induces, ⟨φ(x_i), φ(x_j)⟩ = k(x_i, x_j), the SVDD model in the feature space F is
the hypersphere corresponding to the above model (in the feature space) is likewise denoted B(c, r), and the Lagrangian of the model is constructed as:
where α = (α1, …, αn)^T ≥ 0 and β = (β1, …, βn)^T ≥ 0 are Lagrange multiplier vectors; taking the partial derivatives of formula (3) with respect to the primal variables r, c, and ξ_i and setting them to 0 gives:
from formula (6), α_i = C − β_i; when 0 ≤ α_i ≤ C, the condition β_i ≥ 0 necessarily holds, so this constraint can be omitted; substituting formulas (4)–(6) into formula (3) then yields the dual form of the primal problem:
S52, the DBN-processed data are trained and classified with SVDD; by the KKT theorem, when a Lagrange multiplier is > 0, the corresponding inequality constraint in formula (2) becomes an equality, i.e.: when β_i > 0 (and hence α_i < C), ξ_i = 0; when α_i > 0, the sample lies exactly on the slack-adjusted boundary; the following conclusions can then be drawn: (1) when α_i = 0, φ(x_i) lies inside the sphere; (2) when 0 < α_i < C, φ(x_i) lies on the sphere surface; (3) when α_i = C, φ(x_i) lies outside the sphere; from conclusion (2) and formula (5) one obtains:
where x_k is any support vector whose Lagrange multiplier satisfies 0 < α_k < C; given an unknown sample x ∈ R^d,
6. The APT attack detection method based on deep belief network and support vector data description according to claim 1, characterized in that the step S6 specifically comprises:
decision is carried out by the following function:
given a kernel function, for example the Gaussian kernel k(x_i, x_j) = exp(−||x_i − x_j||²/h) (h is the bandwidth parameter of the Gaussian kernel, ||·|| is the Euclidean 2-norm), formula (9) can be simplified to
where
is a computable constant; when f(x) ≥ 0 the sample is a target sample, otherwise it is an anomalous sample; the data set can therefore be recognized and detected on the basis of this SVDD algorithm.
CN201810526022.XA 2018-05-29 2018-05-29 Based on deepness belief network-Support Vector data description APT attack detection method Pending CN108848068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810526022.XA CN108848068A (en) 2018-05-29 2018-05-29 Based on deepness belief network-Support Vector data description APT attack detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810526022.XA CN108848068A (en) 2018-05-29 2018-05-29 Based on deepness belief network-Support Vector data description APT attack detection method

Publications (1)

Publication Number Publication Date
CN108848068A true CN108848068A (en) 2018-11-20

Family

ID=64209925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810526022.XA Pending CN108848068A (en) 2018-05-29 2018-05-29 Based on deepness belief network-Support Vector data description APT attack detection method

Country Status (1)

Country Link
CN (1) CN108848068A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106060008A (en) * 2016-05-10 2016-10-26 中国人民解放军61599部队计算所 Network invasion abnormity detection method
US20170099306A1 (en) * 2015-10-02 2017-04-06 Trend Micro Incorporated Detection of advanced persistent threat attack on a private computer network
CN107992746A (en) * 2017-12-14 2018-05-04 华中师范大学 Malicious act method for digging and device


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
EZREALMORE: "Feature selection methods: chi-square test and information gain", https://blog.csdn.net/lk7688535/article/details/51322423 *
LIU, FEIFAN et al.: "An APT attack detection method based on DBN-SVDD", Computer Science and Application *
YANG, KUNPENG: "Intrusion detection based on deep learning", China Master's Theses Full-text Database, Information Science and Technology *
HU, WENJUN: "Research on several key issues of large-sample classification techniques in pattern recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109347863A (en) * 2018-11-21 2019-02-15 成都城电电力工程设计有限公司 A kind of improved immune Network anomalous behaviors detection method
CN109347863B (en) * 2018-11-21 2021-04-06 成都城电电力工程设计有限公司 Improved immune network abnormal behavior detection method
CN109948424A (en) * 2019-01-22 2019-06-28 四川大学 A kind of group abnormality behavioral value method based on acceleration movement Feature Descriptor
CN110061961A (en) * 2019-03-05 2019-07-26 中国科学院信息工程研究所 A kind of anti-tracking network topological smart construction method and system based on limited Boltzmann machine
CN110061961B (en) * 2019-03-05 2020-08-25 中国科学院信息工程研究所 Anti-tracking network topology intelligent construction method and system based on limited Boltzmann machine
CN109934004A (en) * 2019-03-14 2019-06-25 中国科学技术大学 The method of privacy is protected in a kind of machine learning service system
CN110290101B (en) * 2019-04-15 2021-12-07 南京邮电大学 Deep trust network-based associated attack behavior identification method in smart grid environment
CN110290101A (en) * 2019-04-15 2019-09-27 南京邮电大学 Association attack recognition methods in smart grid environment based on depth trust network
CN110309886A (en) * 2019-07-08 2019-10-08 安徽农业大学 The real-time method for detecting abnormality of wireless sensor high dimensional data based on deep learning
CN110309886B (en) * 2019-07-08 2022-09-20 安徽农业大学 Wireless sensor high-dimensional data real-time anomaly detection method based on deep learning
CN110555463A (en) * 2019-08-05 2019-12-10 西北工业大学 gait feature-based identity recognition method
CN110555463B (en) * 2019-08-05 2022-05-03 西北工业大学 Gait feature-based identity recognition method
CN110889111A (en) * 2019-10-23 2020-03-17 广东工业大学 Power grid virtual data injection attack detection method based on deep belief network
CN111277433A (en) * 2020-01-15 2020-06-12 同济大学 Network service abnormity detection method and device based on attribute network characterization learning
CN111369339A (en) * 2020-03-02 2020-07-03 深圳索信达数据技术有限公司 Over-sampling improved svdd-based bank client transaction behavior abnormity identification method
CN111404911B (en) * 2020-03-11 2022-10-14 国网新疆电力有限公司电力科学研究院 Network attack detection method and device and electronic equipment
CN111404911A (en) * 2020-03-11 2020-07-10 国网新疆电力有限公司电力科学研究院 Network attack detection method and device and electronic equipment
CN111523588A (en) * 2020-04-20 2020-08-11 电子科技大学 Method for classifying APT attack malicious software traffic based on improved LSTM
CN111523588B (en) * 2020-04-20 2022-04-29 电子科技大学 Method for classifying APT attack malicious software traffic based on improved LSTM
CN111340144A (en) * 2020-05-15 2020-06-26 支付宝(杭州)信息技术有限公司 Risk sample detection method and device, electronic equipment and storage medium
CN112073362A (en) * 2020-06-19 2020-12-11 北京邮电大学 APT (advanced persistent threat) organization flow identification method based on flow characteristics
CN112134873A (en) * 2020-09-18 2020-12-25 国网山东省电力公司青岛供电公司 IoT network abnormal flow real-time detection method and system
CN112399413A (en) * 2020-11-09 2021-02-23 东南大学 Physical layer identity authentication method based on deep support vector description method
CN112399413B (en) * 2020-11-09 2022-08-30 东南大学 Physical layer identity authentication method based on deep support vector description method
CN113158183A (en) * 2021-01-13 2021-07-23 青岛大学 Method, system, medium, equipment and application for detecting malicious behavior of mobile terminal
CN112866273A (en) * 2021-02-01 2021-05-28 广东浩云长盛网络股份有限公司 Network abnormal behavior detection method based on big data technology
WO2022237865A1 (en) * 2021-05-14 2022-11-17 华为技术有限公司 Data processing method and apparatus
CN113359666A (en) * 2021-05-31 2021-09-07 西北工业大学 Deep SVDD (singular value decomposition) based vehicle external intrusion detection method and system
CN113364703A (en) * 2021-06-03 2021-09-07 中国电信股份有限公司 Network application traffic processing method and device, electronic equipment and readable medium
CN113364703B (en) * 2021-06-03 2023-08-08 天翼云科技有限公司 Processing method and device of network application traffic, electronic equipment and readable medium
CN113591915A (en) * 2021-06-29 2021-11-02 中国电子科技集团公司第三十研究所 Abnormal flow identification method based on semi-supervised learning and single-classification support vector machine
CN113591915B (en) * 2021-06-29 2023-05-19 中国电子科技集团公司第三十研究所 Abnormal flow identification method based on semi-supervised learning and single-classification support vector machine
CN114358064A (en) * 2021-12-23 2022-04-15 中国人民解放军海军工程大学 Interference detection device and method based on deep support vector data description
CN114358064B (en) * 2021-12-23 2022-06-21 中国人民解放军海军工程大学 Interference detection device and method based on deep support vector data description

Similar Documents

Publication Publication Date Title
CN108848068A (en) APT attack detection method based on deep belief network and support vector data description
Tian et al. An intrusion detection approach based on improved deep belief network
Tang et al. A pruning neural network model in credit classification analysis
CN109302410B (en) Method and system for detecting abnormal behavior of internal user and computer storage medium
CN110213244A (en) A network intrusion detection method based on spatio-temporal feature fusion
Lopez-Rojas et al. Money laundering detection using synthetic data
CN109034194B (en) Transaction fraud behavior deep detection method based on feature differentiation
CN109858509A (en) Based on multilayer stochastic neural net single classifier method for detecting abnormality
CN109299741B (en) Network attack type identification method based on multi-layer detection
CN110381079B (en) Method for detecting network log abnormity by combining GRU and SVDD
CN112491796B (en) Intrusion detection and semantic decision tree quantitative interpretation method based on convolutional neural network
CN106570513A (en) Fault diagnosis method and apparatus for big data network system
CN104636449A (en) Distributed type big data system risk recognition method based on LSA-GCC
CN108932527A (en) Using cross-training model inspection to the method for resisting sample
CN111143838B (en) Database user abnormal behavior detection method
CN105574489A (en) Layered stack based violent group behavior detection method
CN110084609B (en) Transaction fraud behavior deep detection method based on characterization learning
CN102158486A (en) Method for rapidly detecting network invasion
CN109388944A (en) A kind of intrusion detection method based on KPCA and ELM
Dickey et al. Beyond correlation: A path‐invariant measure for seismogram similarity
KR20230107558A (en) Model training, data augmentation methods, devices, electronic devices and storage media
Xiao et al. A multitarget backdooring attack on deep neural networks with random location trigger
Yang et al. A method of intrusion detection based on Attention-LSTM neural network
Zewoudie et al. Federated Learning for Privacy Preserving On-Device Speaker Recognition
Alhazmi et al. A survey of credit card fraud detection use machine learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181120

RJ01 Rejection of invention patent application after publication