CN102324007B - Abnormal detection method based on data mining - Google Patents

Abnormal detection method based on data mining

Info

Publication number
CN102324007B
CN102324007B CN201110283015XA
Authority
CN
China
Prior art keywords
training
observational variable
matrix
weak classifier
separation matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110283015XA
Other languages
Chinese (zh)
Other versions
CN102324007A (en)
Inventor
唐朝伟
时豪
严鸣
张雪臻
李超群
杨磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201110283015XA priority Critical patent/CN102324007B/en
Publication of CN102324007A publication Critical patent/CN102324007A/en
Application granted granted Critical
Publication of CN102324007B publication Critical patent/CN102324007B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Complex Calculations (AREA)

Abstract

The invention discloses an anomaly detection method based on data mining, belonging to the technical field of network security. The method combines independent component analysis (ICA) with the AdaBoost algorithm: the Fast-ICA algorithm first performs feature extraction to eliminate redundant attributes and reduce the data dimension, and the AdaBoost method then sequentially trains a group of weak classifiers and combines them into a strong classifier. The invention effectively eliminates redundant attribute information in network data, reduces the computation required to train and apply the classifiers, increases detection precision, and lowers the rates of false positives and missed detections.

Description

Abnormal detection method based on data mining
Technical field
The present invention relates to computer anomaly detection methods, and in particular to an anomaly detection method based on data mining.
Background technology
Intrusion detection is the detection of attacks on a computer system; it provides real-time protection against internal attacks, external attacks, and misoperation. To identify attack types accurately, an intrusion detection system collects relevant data from several key points in the network system, such as the log files of the local machine, and analyzes these data to determine whether behavior violating the security policy has occurred in the local system or the network, or whether there are signs of an attack. Intrusion detection can monitor and analyze current user and system activity, check for security vulnerabilities in the system configuration, assess the integrity of key system resources and data files, recognize known attacks, detect abuse by users, collect statistics on and analyze abnormal behavior, and record, manage, and maintain system logs. In other words, it monitors and protects the computer network in real time without noticeably affecting system performance.
In existing intrusion detection techniques, the large volume of collected data serves as the data source of the intrusion detection system and is analyzed to decide whether an intrusion event has occurred. While this mass of data supplies abundant information, it also makes that information harder to exploit effectively: useful information may be submerged in a large amount of redundant data, increasing the difficulty of feature extraction.
Summary of the invention
The purpose of the invention is to provide an anomaly detection method based on data mining that extracts useful features from network data, eliminates the redundant attributes in the network data, improves detection precision, and reduces the rates of false positives and missed detections.
To achieve these goals, the invention provides an anomaly detection method based on data mining, characterized in that it consists of the following steps:
S1, take network data as observational variables and use the Fast-ICA method to extract observational variable features from them, forming the observational variable feature set Z, that is, the network data features with redundant attributes eliminated and the data dimension reduced;
S2, train the observational variable features with the AdaBoost method: take the observational variable feature set as the training set and each observational variable feature as a training sample, and assign each training sample a weight representing the probability that it is selected into a weak classifier's training set; after a weak classifier finishes training, adjust the weight of each training sample according to the classification results on the training set: if a training sample was classified correctly by the weak classifier, its weight is reduced, so that the probability of its being selected into the next weak classifier's training set decreases; if it was not classified correctly, that probability is raised; a strong classifier is finally obtained;
S3, use the strong classifier to detect abnormal network data.
In described step S1, the method consists of the following sub-steps:
S10, set N observational variables x_i, which form the observational variable set, and express each observational variable as a linear combination of M independent components s_j, which form the independent component set, where i = 1, ..., N, j = 1, ..., M, and N, M are integers greater than 1; form the vector X = (x_1, x_2, ..., x_N)^T of the observational variable set and the vector S = (s_1, s_2, ..., s_M)^T of the independent component set, and set X = A*S, where A = (a_ij)_{N×M} is the unknown mixing matrix;
S11, whiten the observational variables;
S12, take the separation matrix W to be the generalized inverse of the mixing matrix A and, following the formula y = W*X, adjust W by the stochastic gradient method to obtain the optimal estimate y of S, thereby obtaining the network data features with redundant attributes eliminated and the data dimension reduced.
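The mixing model of step S10 and the role of the separation matrix of step S12 can be sketched numerically. The dimensions, random toy data, and the use of NumPy's pseudo-inverse are illustrative assumptions, not taken from the patent; ICA's actual task is to estimate W from X alone, without knowing A:

```python
import numpy as np

rng = np.random.default_rng(0)
# ICA data model of step S10: N = 4 observed variables formed as linear
# combinations of M = 2 independent components, i.e. X = A*S.
S = rng.uniform(-1, 1, size=(2, 1000))  # independent components s_j
A = rng.normal(size=(4, 2))             # unknown mixing matrix A (N x M)
X = A @ S                               # observational variables x_i

# If A were known, its generalized (Moore-Penrose) inverse W = A^+ would
# recover the sources exactly: y = W*X = S. ICA has to find such a W
# from the statistics of X alone.
W = np.linalg.pinv(A)
y = W @ X
print(np.allclose(y, S))
```

Running the sketch confirms that the generalized inverse of the mixing matrix recovers the independent components exactly in the noiseless linear model.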
In described step S12, adjusting the separation matrix W by the stochastic gradient method consists of the following steps:
(1) iterate on the separation matrix W row by row according to the formula
W_i(k+1) = E{x_i G(W_i^T(k) x_i)} - E{G(W_i^T(k) x_i)} W_i(k),
where W_i(k) denotes the row of W corresponding to the i-th observational variable x_i of the observational variable set after k iterations, W_i(k+1) denotes the same row after k+1 iterations, W_i^T(k) is the transpose of that row, E is the expectation operator, G is the Gaussian distribution operator, and i, k are integers greater than 1;
(2) test whether |W_i(k+1) - W_i(k)| ≤ ξ holds; if it holds, stop the iteration, obtain the final separation matrix W(n), and go to step (3); if not, repeat step (1); here ξ is any number between 0 and 1;
(3) normalize the final separation matrix W(n) row by row, i.e. W_i(n) = W_i(n)/||W_i(n)||, where ||·|| denotes the norm;
(4) substitute the final separation matrix W(n) into the formula S* = W(n)*X to obtain the optimal estimate y of S, thereby obtaining the network data features with redundant attributes eliminated and the data dimension reduced.
In described step S2, the method consists of the following sub-steps:
S20, set the training set G = {(x_1, h_1), ..., (x_{m+n}, h_{m+n})}, with x_i ∈ y and h_i ∈ H = {-1, +1}, where y is the optimal estimate of S, i = 1, ..., m+n, and m+n is an integer greater than 1; h_i is the class label, with h_i = +1 marking the minority class and h_i = -1 the majority class; the minority class contains m samples and the majority class n samples, with m << n;
S21, initialize the training set: set the weight of each (x_i, h_i) in G to 1/n;
S22, take BP networks as weak classifiers and call Weaklearn for T training iterations, each iteration producing one group of weak classifier functions;
S23, before each training iteration, test whether the iteration count ≥ T; if so, combine the T groups of weak classifier functions into the strong classifier; if not, adjust the weights and repeat step S22.
In summary, owing to the above technical scheme, the beneficial effects of the invention are as follows:
The invention effectively eliminates redundant attribute information in network data and reduces the computation required to train the classifiers and run detection; at the same time it improves detection precision and reduces the rates of false positives and missed detections.
Description of the drawings
Examples of the present invention are described below with reference to the accompanying drawings, in which:
Fig. 1 is the flow chart of the invention;
Fig. 2 is the flow chart of feature extraction by the Fast-ICA method;
Fig. 3 is the flow chart of the AdaBoost method;
Fig. 4 shows the experimental test results.
Embodiment
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except where features and/or steps are mutually exclusive.
Any feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may, unless specifically stated otherwise, be replaced by an equivalent or alternative feature serving a similar purpose. That is, unless specifically stated otherwise, each feature is only one example of a series of equivalent or similar features.
As shown in Fig. 1, the anomaly detection method based on data mining consists of three steps.
Step 1, take network data as observational variables and use the Fast-ICA method to extract observational variable features from them, forming the observational variable feature set, that is, the network data features with redundant attributes eliminated and the data dimension reduced.
The Fast-ICA algorithm, also called the fixed-point method, adjusts the separation matrix W by the stochastic gradient method so as to maximize the independence between the recovered source signals.
As shown in Fig. 2, extracting the observational variable features with the Fast-ICA method consists of the following steps:
S10, set N observational variables x_i, which form the observational variable set, and express each observational variable as a linear combination of M independent components s_j, which form the independent component set, where i = 1, ..., N, j = 1, ..., M, and N, M are integers greater than 1; form the vector X = (x_1, x_2, ..., x_N)^T of the observational variable set and the vector S = (s_1, s_2, ..., s_M)^T of the independent component set, and set X = A*S, where A = (a_ij)_{N×M} is the unknown mixing matrix;
S11, whiten the observational variables;
S12, take the separation matrix W to be the generalized inverse of the mixing matrix A and, following the formula y = W*X, adjust W by the stochastic gradient method to obtain the optimal estimate y of S, thereby obtaining the network data features (the observational variable feature set) with redundant attributes eliminated and the data dimension reduced.
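Whitening (step S11) can be sketched with an eigenvalue decomposition of the sample covariance. The toy mixing setup and the rank-truncation tolerance below are illustrative assumptions; dropping near-zero eigenvalue directions is also where the dimension reduction happens:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy observations: N = 4 observed variables, 1000 samples, generated as
# linear mixtures of M = 2 independent non-Gaussian sources.
S = rng.uniform(-1, 1, size=(2, 1000))
A = rng.normal(size=(4, 2))
X = A @ S

# Whitening: center, eigendecompose the covariance, rescale each retained
# eigendirection to unit variance.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(cov)
keep = d > 1e-10                      # drop near-zero directions (dimension reduction)
V = E[:, keep] / np.sqrt(d[keep])     # whitening matrix V = E_k D_k^(-1/2)
Z = V.T @ Xc                          # whitened data: covariance ~ identity

print(Z.shape[0], np.allclose(Z @ Z.T / Z.shape[1], np.eye(Z.shape[0]), atol=1e-6))
```

The four observed variables collapse to two whitened components with identity covariance, matching the rank of the mixing.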
The Fast-ICA method is based on the maximum-negentropy criterion. The principle of the criterion is as follows: by the central limit theorem, a random variable x in the observational variable set that is composed of many mutually independent random variables s_j must approach a Gaussian distribution, provided each independent random variable s_j has finite mean and variance, no matter how the s_j themselves are distributed. Therefore, during separation, the non-Gaussianity of the optimal estimate y is measured; when this measure reaches its maximum, the separation of the independent components is complete. Negentropy is defined as follows:
Ng(y) = H(y_gauss) - H(y),
where y_gauss denotes a Gaussian random variable with the same variance as the optimal estimate y, and H(·) denotes the information entropy of a random variable. From this formula it can be seen that Ng(y) = 0 when the optimal estimate y has a Gaussian distribution, and that the stronger the non-Gaussianity of y, the larger the value of Ng(y).
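The negentropy-based non-Gaussianity measure can be illustrated numerically. The sketch below is an illustration under assumptions, not the patent's code: it uses the common log-cosh contrast for G and compares a strongly non-Gaussian (uniform) sample against a Gaussian one, whose negentropy should be near zero:

```python
import numpy as np

rng = np.random.default_rng(1)

def negentropy_proxy(y, n_gauss=200000):
    # Ng(y) proportional to [E{G(y)} - E{G(y_gauss)}]^2, G(u) = log cosh(u).
    y = (y - y.mean()) / y.std()            # unit variance, as the comparison requires
    y_gauss = rng.standard_normal(n_gauss)  # Gaussian with the same (unit) variance
    G = lambda u: np.log(np.cosh(u))
    return (G(y).mean() - G(y_gauss).mean()) ** 2

uniform = rng.uniform(-1.0, 1.0, 100000)    # strongly non-Gaussian sample
gauss = rng.standard_normal(100000)         # Gaussian sample: negentropy ~ 0

print(negentropy_proxy(uniform) > negentropy_proxy(gauss))
```

Since log cosh is nonnegative, the absolute values in the patent's written criterion coincide with plain expectations here.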
Accordingly, the stochastic gradient method adopts the maximum-negentropy criterion
Ng(y) ∝ [E|G(y)| - E|G(y_gauss)|]^2
(that is, Ng(y) is proportional to [E|G(y)| - E|G(y_gauss)|]^2, where E is the expectation operator and G is the Gaussian distribution operator) to iterate on the separation matrix W, which consists of the following steps:
(1) iterate on the separation matrix W row by row according to the formula
W_i(k+1) = E{x_i G(W_i^T(k) x_i)} - E{G(W_i^T(k) x_i)} W_i(k),
where W_i(k) denotes the row of W corresponding to the i-th observational variable x_i of the observational variable set after k iterations, W_i(k+1) denotes the same row after k+1 iterations, W_i^T(k) is the transpose of that row, E is the expectation operator, G is the Gaussian distribution operator, and i, k are integers greater than 1;
(2) test whether |W_i(k+1) - W_i(k)| ≤ ξ holds; if it holds, stop the iteration, obtain the final separation matrix W(n), and go to step (3); if not, repeat step (1); here ξ is any number between 0 and 1;
(3) normalize the final separation matrix W(n) row by row, i.e. W_i(n) = W_i(n)/||W_i(n)||, where ||·|| denotes the norm;
(4) substitute the final separation matrix W(n) into the formula S* = W(n)*X to obtain the optimal estimate y of S, thereby obtaining the network data features with redundant attributes eliminated and the data dimension reduced.
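Steps (1) through (4) can be sketched as a one-unit fixed-point iteration with deflation. One hedge on the formula itself: the printed update appears to omit the derivative of G, so the sketch uses the standard Fast-ICA form w+ = E{z g(w^T z)} - E{g'(w^T z)} w with g = tanh, which is presumably what is intended; the toy sources, dimensions, and tolerances are likewise illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
# Mix two non-Gaussian sources (steps S10-S11), then whiten.
S = rng.uniform(-1, 1, size=(2, 5000))
X = rng.normal(size=(2, 2)) @ S
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
Z = (E / np.sqrt(d)).T @ Xc                       # whitened observations

W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for k in range(200):
        wx = w @ Z
        # Fixed-point update with g = tanh and its derivative 1 - tanh^2.
        w_new = (Z * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)        # deflation: orthogonal to found rows
        w_new /= np.linalg.norm(w_new)            # row-wise normalization, step (3)
        converged = np.abs(np.abs(w_new @ w) - 1) < 1e-9   # |W_i(k+1)-W_i(k)| test
        w = w_new
        if converged:
            break
    W[i] = w

Y = W @ Z                                         # optimal estimate y = W*X, step (4)
# Each recovered component should match one source up to sign and permutation.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print((corr.max(axis=1) > 0.95).all())
```

On this clean toy mixture the recovered components correlate almost perfectly with the original sources.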
Step 2, train the observational variable features with the AdaBoost method: take the observational variable feature set as the training set and each observational variable feature as a training sample, and assign each training sample a weight representing the probability that it is selected into a weak classifier's training set; after a weak classifier finishes training, adjust the weight of each training sample according to the classification results on the training set: if a training sample was classified correctly by the weak classifier, its weight is reduced, so that the probability of its being selected into the next weak classifier's training set decreases; if it was not classified correctly, that probability is raised; a strong classifier is finally obtained.
Step 3, use the strong classifier to detect abnormal network data.
As shown in Fig. 3, the AdaBoost training process, which uses BP networks as weak classifiers, consists of the following steps:
S20, set the training set G = {(x_1, h_1), ..., (x_{m+n}, h_{m+n})}, with x_i ∈ y and h_i ∈ H = {-1, +1}, where y is the optimal estimate of S, i = 1, ..., m+n, and m+n is an integer greater than 1; h_i is the class label, with h_i = +1 marking the minority class and h_i = -1 the majority class; the minority class contains m samples and the majority class n samples, with m << n;
S21, initialize the training set: set the weight of each (x_i, h_i) in G to 1/n;
S22, take BP networks as weak classifiers and call Weaklearn for T training iterations, each iteration producing one group of weak classifier functions;
S23, before each training iteration, test whether the iteration count ≥ T; if so, combine the T groups of weak classifier functions into the strong classifier; if not, adjust the weights and repeat step S22. Because the iterative training and weight adjustment of the AdaBoost method are mature techniques, they are not described further here.
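Steps S20 to S23 can be sketched in a self-contained way. Two hedges: decision stumps stand in for the patent's BP-network weak classifiers, and the initial weights are set uniformly to 1/(m+n) rather than the patent's 1/n; both are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy two-class data standing in for the extracted features.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] ** 2 > 0.5, 1, -1)

def train_stump(X, y, w):
    # Weighted decision stump: best (feature, threshold, polarity).
    best_err, best_cfg = np.inf, None
    for f in range(X.shape[1]):
        for thr in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best_cfg = err, (f, thr, pol)
    return best_err, best_cfg

w = np.full(len(y), 1 / len(y))           # S21: uniform initial weights
alphas, stumps = [], []
for t in range(20):                       # S22: T rounds of the weak learner
    err, (f, thr, pol) = train_stump(X, y, w)
    err = min(max(err, 1e-12), 1 - 1e-12)
    alpha = 0.5 * np.log((1 - err) / err)
    pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
    w *= np.exp(-alpha * y * pred)        # correctly classified samples lose weight
    w /= w.sum()
    alphas.append(alpha)
    stumps.append((f, thr, pol))

# S23: the strong classifier is the sign of the weighted vote of the T weak ones.
F = sum(a * np.where(p * (X[:, f] - thr) > 0, 1, -1)
        for a, (f, thr, p) in zip(alphas, stumps))
strong_err = np.mean(np.sign(F) != y)
weak_err, _ = train_stump(X, y, np.full(len(y), 1 / len(y)))
print(strong_err <= weak_err)
```

On this toy training set the boosted vote classifies at least as well as the single best stump, mirroring the patent's observation that the strong classifier's error rate is below the weak classifier's.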
The KDD99 data set is used for testing; it was built on the test data collected by the MIT Lincoln Laboratory in 1998. Each record contains 41 attribute values, which fall into four groups: basic attributes of a connection, content attributes of a connection, time-based traffic attributes, and host-based traffic attributes. The experimental data consist of a training set and a test set.
In the feature extraction step, the FastICA algorithm is applied to the data before classification, eliminating the redundant attributes in the data and greatly reducing the computation required to train the classifiers and run detection; independent component analysis yields a new feature space in which the attributes of each sample are mutually independent. In the experiment, the training data set contains 4000 records and the test data set 800 records.
Simulation platform: programmed and simulated under Matlab 7.6; the test results are shown in Fig. 4:
Strong classifier classification error rate
ans = 0.0063
Weak classifier classification error rate
ans = 0.0142
Experimental analysis:
In the experiment, the performance of the intrusion detection system is measured by the detection rate (DR) and the false positive rate (FPR), defined as follows:
Detection rate (DR) = number of intrusion samples detected / total number of intrusion samples
False positive rate (FPR) = number of normal samples mistaken for intrusions / total number of normal samples
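The two metrics can be computed directly from label and prediction vectors; the vectors below are hypothetical, chosen only to exercise the definitions:

```python
# 1 = intrusion, 0 = normal; y_pred is a hypothetical classifier output.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]

detected = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
intrusions = sum(y_true)
false_alarms = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
normals = len(y_true) - intrusions

DR = detected / intrusions     # 3 of 4 intrusions detected
FPR = false_alarms / normals   # 1 of 6 normal samples flagged
print(DR, round(FPR, 3))
```

For these vectors the sketch yields DR = 0.75 and FPR ≈ 0.167.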
During the experiment, the system is first trained with the training data set to establish an intrusion detection rule base; after training, the system is tested with the test data set.
The experimental data show that the intrusion detection method proposed in this patent, based on FastICA feature dimension reduction, achieves a high detection rate and a low false positive rate.
Table 1: classification error statistics
Table 2: detection statistics
(Both tables appear only as images in the original document.)
In the invention, the FastICA algorithm preprocesses the data by feature extraction, eliminating the redundant attributes and greatly reducing the computation of classifier training and detection; on the classifier side, BP networks serve as weak classifiers and are combined into an AdaBoost strong classifier, which is trained with the 4000 training samples in the test. As the tables above show, the AdaBoost strong classifier with Fast-ICA preprocessing achieves a higher detection rate; its classification error rate is lower than the weak classifier's classification error rate, and its detection rate is higher than the weak classifier's detection rate.
The present invention is not limited to the foregoing embodiment. It extends to any new feature, or any new combination of features, disclosed in this specification, and to any new method or process step, or any new combination thereof, so disclosed.

Claims (3)

1. An anomaly detection method based on data mining, characterized in that it consists of the following steps:
S1, take network data as observational variables and use the Fast-ICA method to extract observational variable features from them, forming the observational variable feature set Z, that is, the network data features with redundant attributes eliminated and the data dimension reduced;
S2, train the observational variable features with the AdaBoost method: take the observational variable feature set as the training set and each observational variable feature as a training sample, and assign each training sample a weight representing the probability that it is selected into a weak classifier's training set; after a weak classifier finishes training, adjust the weight of each training sample according to the classification results on the training set: if a training sample was classified correctly by the weak classifier, its weight is reduced, so that the probability of its being selected into the next weak classifier's training set decreases; if it was not classified correctly, that probability is raised; a strong classifier is finally obtained;
S3, use the strong classifier to detect abnormal network data;
wherein step S1 consists of the following sub-steps:
S10, set N observational variables x_i, which form the observational variable set, and express each observational variable as a linear combination of M independent components s_j, which form the independent component set, where i = 1, ..., N, j = 1, ..., M, and N, M are integers greater than 1; form the vector X = (x_1, x_2, ..., x_N)^T of the observational variable set and the vector S = (s_1, s_2, ..., s_M)^T of the independent component set, and set X = A*S, where A = (a_ij)_{N×M} is the unknown mixing matrix;
S11, whiten the observational variables;
S12, take the separation matrix W to be the generalized inverse of the mixing matrix A and, following the formula y = W*X, adjust W by the stochastic gradient method to obtain the optimal estimate y of S, thereby obtaining the network data features with redundant attributes eliminated and the data dimension reduced;
S13, the stochastic gradient method adopts the maximum-negentropy criterion Ng(y) ∝ [E|G(y)| - E|G(y_gauss)|]^2 to iterate on the separation matrix W, that is, Ng(y) is proportional to [E|G(y)| - E|G(y_gauss)|]^2, where E is the expectation operator, G is the Gaussian distribution operator, Ng(y) is the negentropy, and y_gauss is a Gaussian random variable with the same variance as the optimal estimate y.
2. The anomaly detection method based on data mining according to claim 1, characterized in that, in step S12, adjusting the separation matrix W by the stochastic gradient method consists of the following steps:
(1) iterate on the separation matrix W row by row according to the formula
W_i(k+1) = E{x_i G(W_i^T(k) x_i)} - E{G(W_i^T(k) x_i)} W_i(k),
where W_i(k) denotes the row of W corresponding to the i-th observational variable x_i of the observational variable set after k iterations, W_i(k+1) denotes the same row after k+1 iterations, W_i^T(k) is the transpose of that row, E is the expectation operator, G is the Gaussian distribution operator, and i, k are integers greater than 1;
(2) test whether |W_i(k+1) - W_i(k)| ≤ ξ holds; if it holds, stop the iteration, obtain the final separation matrix W(n), and go to step (3); if not, repeat step (1); here ξ is any number between 0 and 1;
(3) normalize the final separation matrix W(n) row by row, i.e. W_i(n) = W_i(n)/||W_i(n)||, where ||·|| denotes the norm;
(4) substitute the final separation matrix W(n) into the formula S* = W(n)*X to obtain the optimal estimate y of S, thereby obtaining the network data features with redundant attributes eliminated and the data dimension reduced.
3. The anomaly detection method based on data mining according to claim 1, characterized in that step S2 consists of the following steps:
S20, set the training set G = {(x_1, h_1), ..., (x_{m+n}, h_{m+n})}, with x_i ∈ y and h_i ∈ H = {-1, +1}, where y is the optimal estimate of S, i = 1, ..., m+n, and m+n is an integer greater than 1; h_i is the class label, with h_i = +1 marking the minority class and h_i = -1 the majority class; the minority class contains m samples and the majority class n samples, with m << n;
S21, initialize the training set: set the weight of each (x_i, h_i) in G to 1/n;
S22, take BP networks as weak classifiers and call Weaklearn for T training iterations, each iteration producing one group of weak classifier functions;
S23, before each training iteration, test whether the iteration count ≥ T; if so, combine the T groups of weak classifier functions into the strong classifier; if not, adjust the weights and repeat step S22.
CN201110283015XA 2011-09-22 2011-09-22 Abnormal detection method based on data mining Expired - Fee Related CN102324007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110283015XA CN102324007B (en) 2011-09-22 2011-09-22 Abnormal detection method based on data mining


Publications (2)

Publication Number Publication Date
CN102324007A CN102324007A (en) 2012-01-18
CN102324007B true CN102324007B (en) 2013-11-27

Family

ID=45451748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110283015XA Expired - Fee Related CN102324007B (en) 2011-09-22 2011-09-22 Abnormal detection method based on data mining

Country Status (1)

Country Link
CN (1) CN102324007B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102879823B (en) * 2012-09-28 2015-07-22 电子科技大学 Method for fusing seismic attributes on basis of fast independent component analysis
CN103536282B (en) * 2013-11-06 2015-02-04 中国人民解放军第三军医大学 Magnetic induction cardiopulmonary activity signal separation method based on Fast-ICA method
US10417226B2 (en) * 2015-05-29 2019-09-17 International Business Machines Corporation Estimating the cost of data-mining services
CN108319883B (en) * 2017-01-16 2020-11-06 广东精点数据科技股份有限公司 Fingerprint identification method based on rapid independent component analysis
CN106950945B (en) * 2017-04-28 2019-04-09 宁波大学 A kind of fault detection method based on dimension changeable type independent component analysis model
CN107231348B (en) * 2017-05-17 2020-07-28 桂林电子科技大学 Network flow abnormity detection method based on relative entropy theory
CN112153000B (en) * 2020-08-21 2023-04-18 杭州安恒信息技术股份有限公司 Method and device for detecting network flow abnormity, electronic device and storage medium
CN112055007B (en) * 2020-08-28 2022-11-15 东南大学 Programmable node-based software and hardware combined threat situation awareness method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Guo Honggang et al., "Application of the Adaboost method in intrusion detection," Computer Applications, vol. 25, no. 1, pp. 144-146, 30 January 2005 *
Zhang Lei, "Research on an intrusion detection system based on independent component analysis," Master's thesis, Xidian University, 2004, pp. 21, 28, 30, 31 *

Also Published As

Publication number Publication date
CN102324007A (en) 2012-01-18

Similar Documents

Publication Publication Date Title
CN102324007B (en) Abnormal detection method based on data mining
CN108737406B (en) Method and system for detecting abnormal flow data
CN102291392B (en) Hybrid intrusion detection method based on Bagging algorithm
CN113688042B (en) Determination method and device of test scene, electronic equipment and readable storage medium
CN104598813B (en) Computer intrusion detection method based on integrated study and semi-supervised SVM
CN104169909B (en) Context resolution device and context resolution method
CN106709349B (en) A kind of malicious code classification method based on various dimensions behavioural characteristic
Sharma et al. A novel multi-classifier layered approach to improve minority attack detection in IDS
CN104636449A (en) Distributed type big data system risk recognition method based on LSA-GCC
CN111652290A (en) Detection method and device for confrontation sample
CN107315956A (en) A kind of Graph-theoretical Approach for being used to quick and precisely detect Malware on the zero
CN113868006A (en) Time sequence detection method and device, electronic equipment and computer storage medium
CN106792883A (en) Sensor network abnormal deviation data examination method and system
CN111726351B (en) Bagging-improved GRU parallel network flow abnormity detection method
Kye et al. Hierarchical detection of network anomalies: A self-supervised learning approach
Han et al. Partially Supervised Graph Embedding for Positive Unlabelled Feature Selection.
CN105590026A (en) PCA (Principal Component Analysis) based satellite telemetering regression method
Kanhere et al. A survey on outlier detection in financial transactions
CN109902731B (en) Performance fault detection method and device based on support vector machine
CN111275101A (en) Fault identification method and device for aircraft hydraulic system and readable storage medium
CN116074092B (en) Attack scene reconstruction system based on heterogram attention network
KR102192196B1 (en) An apparatus and method for detecting malicious codes using ai based machine running cross validation techniques
CN104980442B (en) A kind of network inbreak detection method based on first sample rarefaction representation
Sudha et al. Analysis and evaluation of integrated cyber crime offences
CN103150501A (en) Negative choice improvement-based intrusion detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20131127

Termination date: 20190922

CF01 Termination of patent right due to non-payment of annual fee