CN110674940A - Multi-index anomaly detection method based on neural network - Google Patents

Multi-index anomaly detection method based on neural network

Info

Publication number
CN110674940A
Authority
CN
China
Prior art keywords
neuron
neurons
activated
radius
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910880142.4A
Other languages
Chinese (zh)
Other versions
CN110674940B (en
Inventor
葛晓波
杨辰
殷传旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qing Chuang Information Technology Co Ltd
Original Assignee
Shanghai Qing Chuang Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qing Chuang Information Technology Co Ltd filed Critical Shanghai Qing Chuang Information Technology Co Ltd
Priority to CN201910880142.4A priority Critical patent/CN110674940B/en
Publication of CN110674940A publication Critical patent/CN110674940A/en
Application granted granted Critical
Publication of CN110674940B publication Critical patent/CN110674940B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention discloses a multi-index anomaly detection method based on a neural network, comprising the following specific steps. Step 1: define a data format. Step 2: train a model of the system using a self-organizing map (SOM); this training process is defined as the learning process. Step 3: perform anomaly detection on input data; this is defined as the mapping process. Step 4: when the model maps a sample to an anomaly, perform root cause localization. The method can predict unknown performance anomalies from the induced behavior model and suggest the cause of an anomaly, and the model achieves high prediction accuracy in benchmark tests. The SOM maps the high-dimensional input space into a low-dimensional map space while preserving the topological properties of the original input space, enabling scalability and effective learning of system behavior.

Description

Multi-index anomaly detection method based on neural network
Technical Field
The invention relates to computer technology, and in particular to a multi-index anomaly detection method based on a neural network.
Background
An outlier is a data point sufficiently far from the other points that it is suspected of being generated by a different mechanism. Anomaly detection methods have been applied in many fields, such as intrusion detection, financial fraud, medical diagnostics, law enforcement, and the natural sciences. The most common outlier detection methods are distance-based; although long established, they remain the most popular and deliver strong results.
One particularly difficult case is high-dimensional anomaly detection, where irrelevant attributes can mask outliers. Many different methods, such as feature bagging, high-contrast subspaces, statistical subspace selection, and spectral methods, are used to score points as outliers in high dimensions. However, the system metrics of real distributed applications often behave erratically because of noise from fluctuating workloads or measurements, and conventional approaches that use statistical learning to detect abnormal data typically require training data satisfying specific assumptions, demand significant human effort, and can only handle previously known anomalies.
Achieving efficient multi-dimensional online system anomaly detection is a challenging task. A learning scheme first needs to achieve scalability, since learning incurs substantial overhead. Furthermore, the system metrics of real distributed applications often exhibit noisy fluctuations due to dynamic workloads or measurements, which demands a robust learning scheme. SOM learning is chosen here to achieve scalable and effective multi-index anomaly detection: the SOM maps a high-dimensional input space into a low-dimensional map space, usually two-dimensional, while preserving the topological properties of the original input space, meaning that two similar samples are projected to nearby positions in the map. The SOM can therefore learn multivariate system behavior well without missing any representative behavior.
Determining the root cause of an anomaly is a very important task. The SOM consists of a set of neurons arranged in a lattice that preserves the properties of the topological measurement space, and the model can use this information to identify the faulty metrics that caused an anomaly. The basic idea is to compare an abnormal neuron with nearby normal neurons and output the metrics that differ most as the likely faulty ones.
Disclosure of Invention
The invention aims to provide a multi-index anomaly detection method based on a neural network, specifically a self-organizing map (SOM), which is an unsupervised neural network model. By automatically discovering the inherent rules and attributes in the data, the network adjusts its parameters and structure in a self-organizing, adaptive way, so that the data are grouped into discrete regions according to their similarity. The method comprises the following specific steps:
step 1: defining a data format;
the data set D has n data points and D dimensions, wherein the D dimensions comprise index 1, index 2 and index 3 … … index D; the ith row of data is represented as a d-dimensional vector: x (t) ═ x (xi1, xi2, …, xid); where xid represents a system metric and uses the vector of measurements as input to the training SOM; an SOM is composed of a group of neurons arranged in a lattice, each neuron is assigned with different weight vectors and map coordinates, the weight vectors and the measurement vectors are the same in length, and the vectors in the training data are dynamically updated according to the measured values;
step 2: performing model training on the system by using the SOM, and defining the training process as a learning process;
step1. Initialize the weight of each neuron: n_i = [w_i1, w_i2, w_i3, …, w_ik], i = 1…N. The neurons form an equally spaced two-dimensional node matrix that constitutes the output layer; each node has a corresponding weight vector whose dimension equals the dimension of the input data;
step2. Select an input sample x = [v_1, v_2, v_3, …, v_k] of arbitrary dimension k and compute its distance to each neuron. All neurons of the output layer compete with one another, and only one winning neuron can be activated at a time, called the activated neuron or BMU. The activated neuron is determined by competitive learning: c = arg min_i { dist(x, n_i) };
step3. Set a radius centered on the activated neuron; the region within this radius is called the winning region. The neurons in the winning region are selected from the coordinates of the activated neuron and the neighborhood radius. In the initial stage of the method the radius is set large, the default initial radius being the radius of the map itself, and it shrinks continuously as the number of iterations grows, following the shrinking function:

r_t = r_0 · e^(−t/λ)

where r_t is the radius at the t-th iteration, r_0 is the initial radius, t is the current iteration count, and λ is a constant;
step4. When a neuron is activated, that neuron and the neurons in its winning region receive weight updates that make them more similar to the input sample. The update function is:

W_(t+1) = W_t + Θ(t) · L(t) · [V_t − W_t]

where W_(t+1) is the updated weight, W_t is the weight before the update, and V_t is the input sample. Θ(t) is the neighbor function, typically a Gaussian, which controls the update magnitude: the activated neuron receives the largest update, and neurons in the winning region receive larger updates the closer they are to the activated neuron. L(t) is the learning rate function; the learning rate decays as the number of iterations grows, gradually stabilizing the neuron weights so that the model converges;
step5. Repeat Steps 2 to 4 until the model converges, at which point training is finished;
step 3: perform anomaly detection on input data; this is defined as the mapping process;
step1. Compute the neighbor area of each neuron and sort them: sort(Area_1, Area_2, Area_3, …, Area_n). Set a threshold; neurons whose area exceeds the threshold are identified as the anomalous cluster. The neighbor area is defined from the distances between the selected neuron and its immediate neighbors, namely the neurons above, below, left, and right of its map coordinates (N_T, N_B, N_L, N_R); it is calculated as the mean Manhattan distance between the selected neuron and these immediate neighbors;
step2, selecting any input sample, calculating the distance from the input sample to each neuron and determining an activated neuron;
step3. Compare the neighbor area Area_BMU of the activated neuron with the threshold to judge whether the sample is anomalous;
step 4: when the model maps a sample to an anomaly, perform root cause localization;
step1. When a measured sample is mapped to an abnormal neuron, compute the Euclidean distance from that abnormal neuron to a set of nearby normal neurons. The aim is to avoid comparisons with neighboring abnormal neurons, since they represent unknown states and would give false indications of the cause of the anomaly. Compute the per-dimension difference between the activated neuron and the Q normal neurons around it, yielding Q difference arrays, each of length K;
D_1 = [|W_BMU,1 − W_normal1,1|, |W_BMU,2 − W_normal1,2|, …, |W_BMU,k − W_normal1,k|]
…
D_Q = [|W_BMU,1 − W_normalQ,1|, |W_BMU,2 − W_normalQ,2|, …, |W_BMU,k − W_normalQ,k|]
step2. Normal neurons are preferentially selected from within the neighborhood radius; when there are not enough normal neurons in that range, the search radius is expanded until Q neurons are found;
step3. Once the set of normal neurons is found, compute the differences, taking absolute values since only the magnitude of the change matters, not its sign. Sort each of the Q difference arrays in descending order, scoring the dimension with the largest difference K, the next K−1, then K−2, down to 1; when this is complete, Q metric ranking tables are obtained;
step4. Compute the total score of each dimension by majority voting and select the dimensions with the highest total scores as the main factors.
Preferably, the system metrics in step1 include CPU, memory, disk I/O, or network traffic.
Preferably, in step3 the threshold is chosen as a percentile of the sorted neighbor areas; in this method it is set to 98% or 99%.
Compared with the prior art, the invention has the following advantages:
1) The method uses a self-organizing neural network to learn high-dimensional system behavior and capture anomalies, and uses the induced behavior model to predict unknown performance anomalies and suggest their causes; in benchmark tests the model achieves high prediction accuracy.
2) The method uses the SOM to map the high-dimensional input space into a low-dimensional, usually two-dimensional, map space while preserving the topological properties of the original input space, enabling scalability and effective learning of system behavior.
Drawings
FIG. 1 is a flow chart of the multi-index anomaly detection method based on a neural network;
FIG. 2 shows the SOM input layer and competition layer;
FIG. 3 shows SOM neighbor neurons.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. As shown in fig. 1, the method proceeds as follows:
step 1: defining a data format;
the data set D has n data points and D dimensions, including time, index 1, index 2 and index 3 … … index D; the ith row of data can then be represented as a d-dimensional vector: x (t) ═ x (xi1, xi2, …, xid); where xid represents a system metric such as CPU, memory, disk I/O or network traffic, and uses the measured value vector as input to train the SOM; an SOM is composed of a group of neurons arranged in a lattice, each neuron is assigned with different weight vectors and map coordinates, the weight vectors and the measurement vectors are the same in length, and the vectors in the training data are dynamically updated according to the measured values;
step 2: perform model training on the system using the SOM; this training is defined as the learning process;
step1. Initialize the weight of each neuron: n_i = [w_i1, w_i2, w_i3, …, w_ik], i = 1…N. The neurons form an equally spaced two-dimensional node matrix that constitutes the output layer; each node has a corresponding weight vector whose dimension equals the dimension of the input data;
step2. As shown in fig. 2, select an input sample x = [v_1, v_2, v_3, …, v_k] of arbitrary dimension k and compute its distance to each neuron. All neurons of the output layer compete with one another, and only one winning neuron can be activated at a time, called the activated neuron (Best Matching Unit, BMU). The activated neuron is determined by competitive learning: c = arg min_i { dist(x, n_i) };
step3. As shown in fig. 3, set a radius centered on the activated neuron; the region within this radius is called the winning region. The neurons in the winning region are selected from the coordinates of the activated neuron and the neighborhood radius. In the initial stage of the method the radius value is set large, the default initial radius being the radius of the map itself, and it shrinks continuously as the number of iterations grows, following the shrinking function:

r_t = r_0 · e^(−t/λ)

where r_t is the radius at the t-th iteration, r_0 is the initial radius, t is the current iteration count, and λ is a constant;
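The shrinking function can be sketched as follows; the exponential form r_t = r_0·e^(−t/λ) is the standard SOM decay schedule and is assumed here, since the patent's equation image is not reproduced in the text:

```python
import math

def neighborhood_radius(t, r0, lam):
    """Neighborhood radius at iteration t: r_t = r0 * exp(-t / lam).
    r0 is the initial radius and lam the decay constant (values illustrative)."""
    return r0 * math.exp(-t / lam)

# The radius starts at r0 and shrinks monotonically as iterations proceed.
radii = [neighborhood_radius(t, r0=5.0, lam=100.0) for t in (0, 100, 200)]
```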
step4. When a neuron is activated, that neuron and the neurons in its winning region receive weight updates that make them more similar to the input sample. The update function is:

W_(t+1) = W_t + Θ(t) · L(t) · [V_t − W_t]

where W_(t+1) is the updated weight, W_t is the weight before the update, and V_t is the input sample. Θ(t) is the neighbor function, typically a Gaussian, which controls the update magnitude: the activated neuron receives the largest update, and neurons in the winning region receive larger updates the closer they are to the activated neuron. L(t) is the learning rate function; the learning rate decays as the number of iterations grows, gradually stabilizing the neuron weights so that the model converges;
step5. Repeat Steps 2 to 4 until the model converges, at which point training is finished;
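Steps 1 to 5 of the learning process can be sketched as follows; the random initialization, hyperparameter defaults, and decay constants are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def train_som(data, rows, cols, n_iter=500, lr0=0.5, lam=100.0, seed=0):
    """Minimal sketch of the SOM learning process (Steps 1-5 above).
    data: (n_samples, k) array of metric vectors.
    Returns the trained weight lattice of shape (rows, cols, k)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    weights = rng.random((rows, cols, k))                    # Step 1: initialize weights
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)   # map coordinates of each neuron
    r0 = max(rows, cols) / 2.0                               # initial neighborhood radius

    for t in range(n_iter):
        x = data[rng.integers(n)]                            # Step 2: pick a sample
        dist = np.linalg.norm(weights - x, axis=-1)          # distance to every neuron
        bmu = np.unravel_index(np.argmin(dist), dist.shape)  # winning neuron (BMU)
        r_t = r0 * np.exp(-t / lam)                          # Step 3: shrinking radius
        grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        theta = np.exp(-grid_d2 / (2.0 * r_t ** 2))          # Gaussian neighbor function
        l_t = lr0 * np.exp(-t / n_iter)                      # decaying learning rate
        weights += (theta * l_t)[..., None] * (x - weights)  # Step 4: weight update
    return weights                                           # Step 5: loop until done
```

Training pulls each winning neuron and its shrinking Gaussian neighborhood toward the sampled metric vectors until the map stabilizes.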
step 3: perform anomaly detection on input data; this is defined as the mapping process;
step1. Compute the neighbor area of each neuron and sort them: sort(Area_1, Area_2, Area_3, …, Area_n). Set a threshold; neurons whose area exceeds the threshold are identified as the anomalous cluster. The threshold is chosen as a percentile of the sorted areas; in this method it is set to 98% or 99%. The neighbor area is defined from the distances between the selected neuron and its immediate neighbors, namely the neurons above, below, left, and right of its map coordinates (N_T, N_B, N_L, N_R); it is calculated as the mean Manhattan distance between the selected neuron and these immediate neighbors;
step2, selecting any input sample, calculating the distance from the input sample to each neuron and determining an activated neuron;
step3. Compare the neighbor area Area_BMU of the activated neuron with the threshold to judge whether the sample is anomalous;
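The mapping process (Steps 1 to 3) can be sketched as follows, assuming a trained weight lattice is available; function names are illustrative:

```python
import numpy as np

def neighbor_areas(weights):
    """Neighbor area of every neuron: the mean Manhattan distance from its
    weight vector to those of its immediate neighbors (up, down, left,
    right), as in Step 1 of the mapping process."""
    rows, cols, _ = weights.shape
    areas = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = [np.abs(weights[i, j] - weights[ni, nj]).sum()
                     for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= ni < rows and 0 <= nj < cols]
            areas[i, j] = np.mean(dists)
    return areas

def is_anomalous(x, weights, areas, percentile=98.0):
    """Map sample x to its BMU (Step 2) and flag it when the BMU's neighbor
    area exceeds the percentile threshold (98% or 99% in the method, Step 3)."""
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dist), dist.shape)
    return areas[bmu] > np.percentile(areas, percentile)
```

A sample is flagged when its best matching unit lies in a sparse region of the map, i.e. its neighbor area exceeds the chosen percentile threshold.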
step 4: when the model maps a sample to an anomaly, perform root cause localization;
step1. When a measured sample is mapped to an abnormal neuron, compute the Euclidean distance from that abnormal neuron to a set of nearby normal neurons. The aim is to avoid comparisons with neighboring abnormal neurons, since they represent unknown states and would give false indications of the cause of the anomaly. Compute the per-dimension difference between the activated neuron and the Q normal neurons around it, yielding Q difference arrays, each of length K;
D_1 = [|W_BMU,1 − W_normal1,1|, |W_BMU,2 − W_normal1,2|, …, |W_BMU,k − W_normal1,k|]
…
D_Q = [|W_BMU,1 − W_normalQ,1|, |W_BMU,2 − W_normalQ,2|, …, |W_BMU,k − W_normalQ,k|]
step2. Normal neurons are preferentially selected from within the neighborhood radius; when there are not enough normal neurons in that range, the search radius is expanded until Q neurons are found;
step3. Once the set of normal neurons is found, compute the differences, taking absolute values since only the magnitude of the change matters, not its sign. Sort each of the Q difference arrays in descending order, scoring the dimension with the largest difference K, the next K−1, then K−2, down to 1; when this is complete, Q metric ranking tables are obtained;
step4. Compute the total score of each dimension by majority voting and select the dimensions with the highest total scores as the main factors.
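The root cause localization of Steps 1 to 4 can be sketched as follows; the function assumes the Q nearby normal neurons have already been collected per Step 2, and all names are illustrative:

```python
import numpy as np

def root_cause_dims(bmu_w, normal_ws, top=2):
    """Rank metric dimensions by majority voting over the per-dimension
    absolute differences between the anomalous BMU weight vector and Q
    nearby normal neurons (Steps 1-4 above).
    Returns dimension indices, highest total score first."""
    bmu_w = np.asarray(bmu_w)
    k = len(bmu_w)
    scores = np.zeros(k)
    for w in normal_ws:                    # one difference array D_q per normal neuron
        diffs = np.abs(bmu_w - np.asarray(w))
        order = np.argsort(-diffs)         # dimensions, largest difference first
        for rank, dim in enumerate(order):
            scores[dim] += k - rank        # largest difference scores K, smallest 1
    return [int(i) for i in np.argsort(-scores)[:top]]
```

The dimensions returned first are the metrics that differ most consistently from normal behavior, i.e. the suggested main factors of the anomaly.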
While the present invention has been described with reference to a limited number of embodiments and drawings, as described above, various modifications and changes will become apparent to those skilled in the art to which the present invention pertains. Accordingly, other embodiments are within the scope and spirit of the following claims and equivalents thereto.

Claims (3)

1. A multi-index anomaly detection method based on a neural network, characterized by comprising the following specific steps:
step 1: defining a data format;
the data set D has n data points and D dimensions, wherein the D dimensions comprise index 1, index 2 and index 3 … … index D; the ith row of data is represented as a d-dimensional vector: x (t) ═ x (xi1, xi2, …, xid); where xid represents a system metric and uses the vector of measurements as input to the training SOM; an SOM is composed of a group of neurons arranged in a lattice, each neuron is assigned with different weight vectors and map coordinates, the weight vectors and the measurement vectors are the same in length, and the vectors in the training data are dynamically updated according to the measured values;
step 2: performing model training on the system by using the SOM, and defining the training process as a learning process;
step1. Initialize the weight of each neuron: n_i = [w_i1, w_i2, w_i3, …, w_ik], i = 1…N. The neurons form an equally spaced two-dimensional node matrix that constitutes the output layer; each node has a corresponding weight vector whose dimension equals the dimension of the input data;
step2. Select an input sample x = [v_1, v_2, v_3, …, v_k] of arbitrary dimension k and compute its distance to each neuron. All neurons of the output layer compete with one another, and only one winning neuron is activated each time, called the activated neuron or BMU. The activated neuron is determined by competitive learning: c = arg min_i { dist(x, n_i) };
step3. Set a radius centered on the activated neuron; the region within this radius is called the winning region. The neurons in the winning region are selected from the coordinates of the activated neuron and the neighborhood radius. In the initial stage of the method the radius is set large, the default initial radius being the radius of the map itself, and it shrinks continuously as the number of iterations grows, following the shrinking function:

r_t = r_0 · e^(−t/λ)

where r_t is the radius at the t-th iteration, r_0 is the initial radius, t is the current iteration count, and λ is a constant;
step4. When a neuron is activated, that neuron and the neurons in its winning region receive weight updates that make them more similar to the input sample. The update function is:

W_(t+1) = W_t + Θ(t) · L(t) · [V_t − W_t]

where W_(t+1) is the updated weight, W_t is the weight before the update, and V_t is the input sample. Θ(t) is the neighbor function, typically a Gaussian, which controls the update magnitude: the activated neuron receives the largest update, and neurons in the winning region receive larger updates the closer they are to the activated neuron. L(t) is the learning rate function; the learning rate decays as the number of iterations grows, gradually stabilizing the neuron weights so that the model converges;
step5. Repeat Steps 2 to 4 until the model converges, at which point training is finished;
step 3: perform anomaly detection on input data; this is defined as the mapping process;
step1. Compute the neighbor area of each neuron and sort them: sort(Area_1, Area_2, Area_3, …, Area_n). Set a threshold; neurons whose area exceeds the threshold are identified as the anomalous cluster. The neighbor area is defined from the distances between the selected neuron and its immediate neighbors, namely the neurons above, below, left, and right of its map coordinates (N_T, N_B, N_L, N_R); it is calculated as the mean Manhattan distance between the selected neuron and these immediate neighbors;
step2, selecting any input sample, calculating the distance from the input sample to each neuron and determining an activated neuron;
step3. Compare the neighbor area Area_BMU of the activated neuron with the threshold to judge whether the sample is anomalous;
step 4: when the model maps a sample to an anomaly, perform root cause localization;
step1. When a measured sample is mapped to an abnormal neuron, compute the Euclidean distance from that abnormal neuron to a set of nearby normal neurons. The aim is to avoid comparisons with neighboring abnormal neurons, since they represent unknown states and would give false indications of the cause of the anomaly. Compute the per-dimension difference between the activated neuron and the Q normal neurons around it, yielding Q difference arrays, each of length K;
D_1 = [|W_BMU,1 − W_normal1,1|, |W_BMU,2 − W_normal1,2|, …, |W_BMU,k − W_normal1,k|]
…
D_Q = [|W_BMU,1 − W_normalQ,1|, |W_BMU,2 − W_normalQ,2|, …, |W_BMU,k − W_normalQ,k|]
step2. Select normal neurons from within the neighborhood radius; when there are not enough normal neurons in that range, expand the search radius until Q neurons are found;
step3. Once the set of normal neurons is found, compute the differences, taking absolute values since only the magnitude of the change matters, not its sign. Sort each of the Q difference arrays in descending order, scoring the dimension with the largest difference K, the next K−1, then K−2, down to 1; when this is complete, Q metric ranking tables are obtained;
step4. Compute the total score of each dimension by majority voting and select the dimensions with the highest total scores as the main factors.
2. The multi-index anomaly detection method according to claim 1, characterized in that the system metrics in step1 include CPU, memory, disk I/O, or network traffic.
3. The multi-index anomaly detection method according to claim 1, characterized in that in step3 the threshold is chosen as a percentile of the sorted neighbor areas, and in this method the threshold is set to 98% or 99%.
CN201910880142.4A 2019-09-18 2019-09-18 Multi-index anomaly detection method based on neural network Active CN110674940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910880142.4A CN110674940B (en) 2019-09-18 2019-09-18 Multi-index anomaly detection method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910880142.4A CN110674940B (en) 2019-09-18 2019-09-18 Multi-index anomaly detection method based on neural network

Publications (2)

Publication Number Publication Date
CN110674940A true CN110674940A (en) 2020-01-10
CN110674940B CN110674940B (en) 2023-04-18

Family

ID=69078143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910880142.4A Active CN110674940B (en) 2019-09-18 2019-09-18 Multi-index anomaly detection method based on neural network

Country Status (1)

Country Link
CN (1) CN110674940B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767273A (en) * 2020-06-22 2020-10-13 清华大学 Data intelligent detection method and device based on improved SOM algorithm
CN111885059A (en) * 2020-07-23 2020-11-03 清华大学 Method for detecting and positioning abnormal industrial network flow
CN113378870A (en) * 2020-03-10 2021-09-10 南京邮电大学 Method and device for predicting radiation source distribution of printed circuit board based on neural network

Citations (3)

Publication number Priority date Publication date Assignee Title
CN104200076A (en) * 2014-08-19 2014-12-10 钟亚平 Athlete athletic injury risk early warning method
US20150074023A1 (en) * 2013-09-09 2015-03-12 North Carolina State University Unsupervised behavior learning system and method for predicting performance anomalies in distributed computing infrastructures
CN108200005A (en) * 2017-09-14 2018-06-22 国网浙江省电力公司宁波供电公司 Electric power secondary system network flow abnormal detecting method based on unsupervised learning

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20150074023A1 (en) * 2013-09-09 2015-03-12 North Carolina State University Unsupervised behavior learning system and method for predicting performance anomalies in distributed computing infrastructures
CN104200076A (en) * 2014-08-19 2014-12-10 钟亚平 Athlete athletic injury risk early warning method
CN108200005A (en) * 2017-09-14 2018-06-22 国网浙江省电力公司宁波供电公司 Electric power secondary system network flow abnormal detecting method based on unsupervised learning

Non-Patent Citations (1)

Title
陆云飞; 陈临强: "Pedestrian abnormal trajectory detection based on SOM" *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN113378870A (en) * 2020-03-10 2021-09-10 南京邮电大学 Method and device for predicting radiation source distribution of printed circuit board based on neural network
CN113378870B (en) * 2020-03-10 2022-08-12 南京邮电大学 Method and device for predicting radiation source distribution of printed circuit board based on neural network
CN111767273A (en) * 2020-06-22 2020-10-13 清华大学 Data intelligent detection method and device based on improved SOM algorithm
CN111767273B (en) * 2020-06-22 2023-05-23 清华大学 Data intelligent detection method and device based on improved SOM algorithm
CN111885059A (en) * 2020-07-23 2020-11-03 清华大学 Method for detecting and positioning abnormal industrial network flow

Also Published As

Publication number Publication date
CN110674940B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
Chen et al. Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective
Menze et al. On oblique random forests
CN110674940B (en) Multi-index anomaly detection method based on neural network
Khan et al. Multi-objective feature subset selection using non-dominated sorting genetic algorithm
Doan et al. Selecting machine learning algorithms using regression models
Yang et al. A feature-metric-based affinity propagation technique for feature selection in hyperspectral image classification
Ding et al. Intelligent optimization methods for high-dimensional data classification for support vector machines
Mohammed et al. Evaluation of partitioning around medoids algorithm with various distances on microarray data
Nayini et al. A novel threshold-based clustering method to solve K-means weaknesses
Yan et al. A novel clustering algorithm based on fitness proportionate sharing
CN114781520A (en) Natural gas behavior abnormity detection method and system based on improved LOF model
Goel et al. Learning procedural abstractions and evaluating discrete latent temporal structure
Reif et al. Meta2-features: Providing meta-learners more information
Lin et al. A new density-based scheme for clustering based on genetic algorithm
Souza et al. Local overlap reduction procedure for dynamic ensemble selection
Wang et al. AMD-DBSCAN: An Adaptive Multi-density DBSCAN for datasets of extremely variable density
CN115310675A (en) Load estimation optimization method based on power grid user data set and neural network
Canales et al. Modification of the growing neural gas algorithm for cluster analysis
Mousavi A New Clustering Method Using Evolutionary Algorithms for Determining Initial States, and Diverse Pairwise Distances for Clustering
de Melo et al. Cost-sensitive measures of algorithm similarity for meta-learning
Zhang et al. Color clustering using self-organizing maps
CN111104950A (en) K value prediction method and device in k-NN algorithm based on neural network
Drotar et al. Comparison of stability measures for feature selection
Zeraliu Comparison of ensemble-based feature selection methods for binary classification of imbalanced data sets
Sorjamaa et al. Sparse linear combination of SOMs for data imputation: Application to financial database

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant