CN111046931A - Turnout fault diagnosis method based on random forest

Turnout fault diagnosis method based on random forest

Info

Publication number
CN111046931A
CN111046931A (application CN201911216430.6A)
Authority
CN
China
Prior art keywords
random forest
classification
turnout
fault
decision tree
Prior art date
Legal status
Pending
Application number
CN201911216430.6A
Other languages
Chinese (zh)
Inventor
王志鹏
张慧月
韩安平
秦勇
贾利民
王宁
阚佳玉
徐登科
Current Assignee
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN201911216430.6A
Publication of CN111046931A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Abstract

The invention provides a turnout fault diagnosis method based on a random forest. A grid search method is used for parameter optimization, and a random forest model built with the optimized parameter settings performs turnout fault diagnosis, which improves the accuracy of fault classification prediction. The invention realizes fault diagnosis and classification of turnouts; by comparing the construction speed and test accuracy of the classification models, it also shows that the random forest algorithm gives accurate and fast diagnosis results for turnout states with high fault classification prediction accuracy, and therefore has clear practical application value.

Description

Turnout fault diagnosis method based on random forest
Technical Field
The invention relates to a turnout fault diagnosis method based on a random forest, and belongs to the technical field of fault diagnosis of mechanical parts.
Background
Turnouts enable trains to change tracks and are critical to train operation. With the continuous expansion of China's railway network, turnouts are used ever more widely, the requirements on their condition are higher, and safety is particularly important. However, a turnout has a complex structure and low mechanical strength, is strongly affected by the external environment, and is prone to aging and failure. Turnout faults occur frequently and take a long time to handle, which greatly affects train operation. Accurate and timely fault detection of turnouts is therefore very important for locating faults precisely, shortening maintenance time, and ultimately improving operational safety and reliability while reducing losses.
The random forest algorithm constructs multiple decision tree classifiers: the Bootstrap method generates multiple different sample subsets, each subset trains a different classifier, and the individual results are then combined to obtain the final fault classification result. The random forest algorithm needs no data preprocessing, is simple, trains quickly and classifies accurately, so it can guarantee both the timeliness and the accuracy of fault diagnosis. Optimizing the parameters of the random forest algorithm avoids empirical and blind parameter choices and yields better fault prediction results.
Disclosure of Invention
The invention provides a turnout fault diagnosis method based on a random forest. First, several CART decision tree classifiers are constructed and the classification result of each tree is obtained through training; during training, the diversity of the decision tree classifiers is enhanced by two kinds of perturbation, training-sample perturbation and attribute perturbation, and the trained results are then combined by a majority voting method. For the parameter selection of the random forest, the optimal parameters are determined by a grid search method.
The invention provides a turnout fault diagnosis method based on a random forest, which specifically comprises the following steps:
step one, acquiring the electric power data of the periodic actions of the turnout, determining the fault type corresponding to each record, establishing labeled learning samples, and selecting a training sample set and a test sample set;
step two, establishing a random forest classification model based on the training sample set and optimizing the parameter selection of the random forest with a grid search method: first, a training sample subset for each decision tree is generated from the original training sample set with the Bootstrap method; the real fault type corresponding to each sample is taken as the expected output, and each decision tree is trained on its sample subset; finally, the classification label output by the largest number of decision trees is selected as the final classification result by a majority voting method; the combination of decision trees forms the random forest model, the parameter selection of the random forest is optimized with the grid search method, and the result obtained by the majority voting method is taken as the fault prediction result;
and step three, classifying the test sample set of turnout action data with the random forest classification model trained in step two, voting on the outputs of all decision trees with the majority voting method, and taking the classification label with the largest number of votes as the final turnout fault prediction result.
Preferably, in step two the grid search method selects the optimized random forest parameters as follows:
1) the number of decision trees in the forest is set to x, and the number of features selected for the feature subset when a decision tree node is split is set to y; for x and y, step sizes a and b are set respectively, an iterative search is performed, the classification accuracy is computed with the random forest algorithm, and the optimal parameters of the first iterative search, (x1, y1), are obtained;
2) for x1 and y1, their neighborhoods (x1 - a, x1 + a) and (y1 - b, y1 + b) are searched again with smaller step sizes c and d, giving the optimal parameters of the second iterative search, (x2, y2);
3) for x2 and y2, the search range and step size are reduced further and the search is iterated until the step size is 1, giving the final optimal parameters (x0, y0).
At this point, the number of decision trees in the forest is set to x0, and the number of features selected for the feature subset when a decision tree node is split is set to y0.
The invention has the advantages and positive effects that:
(1) The periodic electric power curve of the turnout, from the normal to the reverse position and from the reverse back to the normal position, is taken as the research object; considering the complete switching action allows the turnout state to be identified more comprehensively and supports the fault diagnosis of the turnout.
(2) A turnout fault diagnosis method based on a random forest is provided. Because a random forest does not require separate feature extraction, the complexity of extracting features from turnout data is avoided; the random forest uses decision trees as individual learners, so its computational complexity is low and its speed is high. Experiments show that, compared with a single classifier, the random forest turnout fault diagnosis method achieves higher classification accuracy and runs faster.
(3) The random forest parameters are optimized with a grid search method, which removes the empiricism and blindness of parameter selection and improves the classification prediction accuracy.
Drawings
FIG. 1 is a flow chart of a random forest algorithm;
FIG. 2 is a fault classification accuracy graph of a random forest algorithm.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples.
The invention provides a turnout fault diagnosis method based on a random forest algorithm, which comprises the following specific steps:
Step one, establishing a training sample set and a test sample set.
From the acquired electric power data of the periodic actions of the turnout switch machine, the fault type corresponding to each record is determined, labeled learning samples are established, and a training sample set and a test sample set are selected.
Step two, establishing a random forest classification model based on the training sample set and optimizing the parameter selection of the random forest with a grid search method.
The random forest algorithm comprises the following steps:
1. A training sample subset is randomly generated from the original sample set using the Bootstrap method.
The random forest generates several different sample subsets by the Bootstrap method to train different individual classifiers. Each sample subset contains the same number of samples as the original sample set. Generating a sample subset with the Bootstrap method works as follows:
Bootstrap is random sampling with replacement. For a data set containing n samples, one sample is drawn at random each time and then put back into the data set, so that it can be drawn again in later draws; after n random draws a sample subset containing n samples is obtained.
The probability of each sample being drawn in a single draw is 1/n, so after n random draws the probability that a given sample is never drawn is

\left(1 - \frac{1}{n}\right)^{n}.

When the data set is large enough, taking the limit gives

\lim_{n \to \infty}\left(1 - \frac{1}{n}\right)^{n} = \frac{1}{e} \approx 0.368.
That is, the probability that a sample is never drawn tends to 36.8%, so each sample subset contains about 63.2% of the original data. The sample subsets therefore overlap but differ from one another; provided each individual learner still learns well, the differences between the learners become larger and the overall generalization performance is improved.
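As a minimal illustration of this Bootstrap step (a hypothetical Python sketch, not part of the original disclosure), one can draw n samples with replacement and check that roughly 63.2% of the original samples appear in each subset:

```python
import numpy as np

def bootstrap_subset(X, y, rng):
    """Draw one Bootstrap subset: n draws with replacement from n samples."""
    n = len(X)
    idx = rng.integers(0, n, size=n)      # sampling with replacement
    return X[idx], y[idx], idx

rng = np.random.default_rng(0)
X = np.arange(1000).reshape(-1, 1)        # placeholder samples
y = np.zeros(1000)                        # placeholder labels

X_sub, y_sub, idx = bootstrap_subset(X, y, rng)
unique_fraction = len(np.unique(idx)) / len(X)
print(f"fraction of original samples in the subset: {unique_fraction:.3f}")  # about 0.632
```

Each subset has the same size as the original set but, on average, contains only about 63.2% of the distinct original samples, matching the 1 - 1/e limit derived above.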
2. Training an individual decision tree classifier.
The random forest uses CART decision trees as individual classifiers, and each decision tree is trained on one sample subset with the CART algorithm. In addition, random attribute selection is introduced when a decision tree node is split: several attributes are randomly chosen from all attributes available at the node to form an attribute subset, the best attribute within this subset is found with the Gini index as the splitting criterion, and the tree grows freely without pruning, yielding an individual decision tree classifier.
The split-attribute selection of a random forest differs from that of an ordinary decision tree algorithm. During training, an ordinary decision tree traverses all attributes of the current node to find the optimal attribute for the division, whereas a random forest, at every downward split, randomly selects only a few attributes to form an attribute subset and finds the optimal attribute within that subset; this is attribute perturbation. It further increases the differences between the trained individual decision trees, reduces the overfitting that a single decision tree may exhibit, and effectively improves generalization.
The single decision tree classifier training process is as follows:
inputting: training subset D1
Wherein D is1The random attribute number is M, and the total attribute number of each sample is M.
1) Randomly selecting m attributes from all attributes of the current node of the decision tree to form an attribute subset;
2) calculate the Gini index of each attribute in the attribute subset over its possible values.
The Gini index is defined as

\operatorname{Gini\_index}(D, a) = \sum_{v=1}^{V} \frac{|D^{v}|}{|D|}\,\operatorname{Gini}(D^{v}),

where D is the sample set, a is the attribute used to divide D, a has V possible values {a^1, a^2, ..., a^V} so that the division produces V child nodes, D^v is the set of samples falling into the v-th child node, and Gini(D^v) is the Gini value of D^v.
The Gini value is defined as

\operatorname{Gini}(D) = 1 - \sum_{k=1}^{|\mathcal{Y}|} p_{k}^{2},

where D is the sample set and p_k (k = 1, 2, ..., |Y|) is the proportion of samples of the k-th class in D.
3) select the attribute with the smallest Gini index for the split;
4) repeat the above steps for each resulting child node, letting the tree grow freely until no further division is possible.
Output: a trained CART decision tree.
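A compact sketch of the Gini computations used at a node split, with the random attribute subset described above, might look as follows (an illustrative Python sketch assuming discrete-valued attributes; it is not the patented implementation):

```python
import numpy as np

def gini(labels):
    """Gini value of a label vector: 1 - sum_k p_k^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_index(attribute_column, labels):
    """Gini index of splitting on one discrete attribute:
    the weighted sum of the Gini values of the resulting child nodes."""
    n = len(labels)
    total = 0.0
    for v in np.unique(attribute_column):
        mask = attribute_column == v
        total += mask.sum() / n * gini(labels[mask])
    return total

def best_split_attribute(X, labels, m, rng):
    """Randomly pick m of the available attributes and return the one
    with the smallest Gini index (attribute perturbation)."""
    candidates = rng.choice(X.shape[1], size=m, replace=False)
    return min(candidates, key=lambda j: gini_index(X[:, j], labels))
```

Repeating this choice at every node and growing each tree without pruning yields the individual CART classifiers described above.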
3. The final classification result is obtained by a majority voting method.
Majority voting is one of the combination strategies of ensemble learning, and the final output is obtained by combining the individual results with this method: the class label that receives the most votes is the final classification result. If several class labels receive the same highest number of votes, one of them is chosen at random as the final classification result.
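A majority-vote combination of the individual tree outputs, with random tie-breaking as described above, can be sketched as follows (hypothetical helper function):

```python
import numpy as np

def majority_vote(tree_predictions, rng):
    """tree_predictions: array of shape (n_trees, n_samples) holding class labels.
    Returns one label per sample; ties are broken at random."""
    n_trees, n_samples = tree_predictions.shape
    result = np.empty(n_samples, dtype=tree_predictions.dtype)
    for i in range(n_samples):
        labels, counts = np.unique(tree_predictions[:, i], return_counts=True)
        winners = labels[counts == counts.max()]
        result[i] = rng.choice(winners)   # random choice among tied class labels
    return result
```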
Two parameters of the random forest have to be selected: 1) the number of decision trees in the forest, denoted x; and 2) the number of features selected for the feature subset when a decision tree node is split, denoted y. Their selection is optimized with a grid search method, which proceeds as follows:
1. For x and y, step sizes a and b are set respectively, an iterative search is performed, the classification accuracy is computed with the random forest algorithm, and the parameters with the best value in this first search, (x1, y1), are obtained.
2. For x1 and y1, their neighborhoods (x1 - a, x1 + a) and (y1 - b, y1 + b) are searched again with smaller step sizes c and d, giving the optimal parameters of the second search, (x2, y2).
3. For x2 and y2, the search range and step size are reduced further and the search is iterated until the step size is 1, giving the final optimal parameters (x0, y0).
At this point, the number of decision trees in the forest is set to x0 and the number of features selected for the feature subset at each node split is set to y0; the parameter selection of the random forest classification model is thus optimized.
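A coarse-to-fine search of this kind could be sketched with scikit-learn, where n_estimators plays the role of x and max_features the role of y; this mapping and the use of scikit-learn are assumptions made for illustration, not part of the original disclosure:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def search_grid(X, y, x_range, y_range):
    """Return the (x, y) pair on the grid with the best cross-validated accuracy."""
    best, best_acc = None, -1.0
    for x in x_range:
        for yy in y_range:
            clf = RandomForestClassifier(n_estimators=x, max_features=yy, random_state=0)
            acc = cross_val_score(clf, X, y, cv=3).mean()
            if acc > best_acc:
                best, best_acc = (x, yy), acc
    return best, best_acc

def coarse_to_fine(X, y, x_max, y_max, a=20, b=5):
    # first pass: coarse grid with step sizes a and b
    (x1, y1), _ = search_grid(X, y, range(1, x_max + 1, a), range(1, y_max + 1, b))
    # final pass: step size 1 in the neighbourhood of (x1, y1); intermediate passes
    # with step sizes c and d can be inserted in the same way
    x_range = range(max(x1 - a, 1), x1 + a + 1)
    y_range = range(max(y1 - b, 1), min(y1 + b, y_max) + 1)
    return search_grid(X, y, x_range, y_range)
```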
Step three, carrying out fault classification prediction on the test sample set with the random forest model.
The test sample set of turnout action data is classified with the random forest classification model trained in step two; the outputs of all decision trees are combined by majority voting, and the classification label with the most votes is the final turnout fault classification result, completing the fault diagnosis of the turnout.
This example uses the electric power data of the periodic actions of an S700K switch machine, collected on the Guangzhou metro turnout test bench, for verification. Data for 14 states are collected, covering normal operation, single faults (abnormal friction, close contact, foreign matter) and combined faults. The experimental data are described in Table 1.
Table 1  Description of the experimental data

No.  State type  Number of samples  State description
1    Normal      9                  Normal (0 mm)
2    Nfault5     50                 Close contact 0.5 mm
3    Nfault15    43                 Close contact 1.5 mm
4    Ffault0     223                Abnormal friction
5    Ffault3     127                Abnormal friction + close contact 0.3 mm
6    Ffault10    127                Abnormal friction + close contact 1.0 mm
7    Ffault13    104                Abnormal friction + close contact 1.3 mm
8    Ffault20    109                Abnormal friction + close contact 2.0 mm
9    Ffault25    61                 Abnormal friction + close contact 2.5 mm
10   Ffault30    61                 Abnormal friction + close contact 3.0 mm
11   Fforeign2   12                 Abnormal friction + foreign matter 2.0 mm
12   Fforeign4   10                 Abnormal friction + foreign matter 4.0 mm
13   locked25    22                 Abnormal friction + 2.5 mm (unlocking fault)
14   locked30    4                  Abnormal friction + 3.0 mm (unlocking fault)
These data are used to verify the random-forest-based turnout fault diagnosis method, and the effectiveness and efficiency of the method are further verified by comparing the results of different classification methods. The experiments are as follows:
there were 962 sets of data, each set having a characteristic number of 296. Firstly, a data set is randomly divided into a training set and a testing set.
1) Turnout fault diagnosis and analysis with the random forest algorithm.
The random forest algorithm is applied with its parameters set empirically as follows:
number of decision trees in the random forest: 1, 11, 21, ..., 901;
number of features selected for the feature subset at each split: the largest integer less than log2(M) + 1, where M is the number of sample features (here M = 296, giving 9).
Random forest models are trained on the training sample data and then used for fault diagnosis and classification of the test samples; the results are shown in FIG. 2 and Table 2. In FIG. 2 the horizontal axis is the number of decision tree classifiers and the vertical axis is the fault classification accuracy of the random forest. The figure shows that a good fault recognition rate (87.99%) is already reached with 21 decision trees and that the classification accuracy becomes nearly stable once the number of trees exceeds 141. Even with 901 decision trees the classification accuracy remains stable and no overfitting occurs.
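The sweep over the number of decision trees described above can be reproduced with a sketch of the following form; scikit-learn and the synthetic placeholder data are assumptions, since the Guangzhou test-bench data are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the 962 labelled power curves with 296 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(962, 296))
y = rng.integers(0, 14, size=962)                 # 14 state classes
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

M = X.shape[1]
max_feats = int(np.log2(M)) + 1                   # largest integer below log2(M) + 1, here 9

accuracies = []
for n_trees in range(1, 902, 10):                 # 1, 11, 21, ..., 901
    clf = RandomForestClassifier(n_estimators=n_trees, max_features=max_feats,
                                 random_state=0)
    clf.fit(X_train, y_train)
    accuracies.append(accuracy_score(y_test, clf.predict(X_test)))
```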
To illustrate the fault diagnosis results of the proposed algorithm further, the mean and maximum classification accuracy for runs with more than 21 decision trees, together with the model build time at 901 decision trees, are recorded in Table 2. They show that the random forest diagnoses turnout faults with high recognition accuracy, good stability and high speed.
Table 2  Fault diagnosis and classification results of the random forest (mean accuracy, maximum accuracy and model build time; the numerical values appear only as an image in the original publication)
2) Selection and optimization of the random forest parameters.
First iteration:
The iteration step size of the number of decision trees is set to 20, i.e. tree_num = 1:20:301, and the iteration step size of the number of features in the feature subset selected at each split is set to 5, i.e. feature_num = 1:5:296. Fault diagnosis is performed with the random forest algorithm, and the four parameter combinations with the highest fault classification accuracy are carried into the next iteration. These combinations are (181, 6), (141, 11), (161, 16) and (41, 31), with a classification accuracy of 97.03%.
Second iteration:
the search range is: (max (x-20,1), x +20), (max (y-5,1), min (y1+5,296)), where x, y represent the currently taken parameter value; the iteration step size of tree _ num and feature _ num is 1. And (3) utilizing a random forest algorithm to carry out fault diagnosis, and obtaining the parameter combination with the highest fault classification accuracy in each search range as follows: (161, 8), (160, 17), (174, 19), (51, 36), the classification accuracy rates are respectively: 97.03%, 97.53%, 97.88% and 97.53%. Thus, an optimal set of parameters for the random forest is determined (174, 19), with an optimal classification accuracy of: 97.88 percent. Therefore, the random forest classification accuracy after parameter selection is greatly improved.
3) Comparative fault diagnosis analysis of the random forest algorithm, the CART decision tree and the Bagging algorithm.
The number of decision trees is set to 174 and the number of features in the split feature subset is set to 19. The model build times and classification accuracy for the different classifiers are shown in table 3.
Table 3  Turnout fault diagnosis and classification with different classifiers

Algorithm              Accuracy (%)  Model build time (s)
Random forest          97.88         14.74
Decision tree Bagging  93.58         394.04
CART decision tree     88.69         2.40
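Under the same assumptions (scikit-learn and placeholder data), such a comparison could be scripted as below; the default base learner of BaggingClassifier is a CART-style decision tree, so it stands in for the "decision tree Bagging" row:

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data again; substitute the real labelled turnout power curves.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(962, 296)), rng.integers(0, 14, size=962)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Random forest": RandomForestClassifier(n_estimators=174, max_features=19, random_state=0),
    "Decision tree Bagging": BaggingClassifier(n_estimators=174, random_state=0),
    "CART decision tree": DecisionTreeClassifier(random_state=0),
}

for name, clf in models.items():
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    build_time = time.perf_counter() - start
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy {acc:.4f}, build time {build_time:.2f} s")
```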
The experimental results show that the random-forest-based turnout fault diagnosis method outperforms both the single decision tree and the Bagging method. The method achieves turnout fault detection and classification with high accuracy and has clear practical application value.

Claims (2)

1. A turnout fault diagnosis method based on a random forest is characterized by comprising the following steps:
step one, acquiring the electric power data of the periodic actions of the turnout, determining the fault type corresponding to each record, establishing labeled learning samples, and selecting a training sample set and a test sample set;
step two, establishing a random forest classification model based on the training sample set and optimizing the parameter selection of the random forest with a grid search method: first, a training sample subset for each decision tree is generated from the original training sample set with the Bootstrap method; the real fault type corresponding to each sample is taken as the expected output, and each decision tree is trained on its sample subset; finally, the classification label output by the largest number of decision trees is selected as the final classification result by a majority voting method; the combination of decision trees forms the random forest model, the parameter selection of the random forest is optimized with the grid search method, and the result obtained by the majority voting method is taken as the fault prediction result;
and step three, classifying the test sample set of turnout action data with the random forest classification model trained in step two, voting on the outputs of all decision trees with the majority voting method, and taking the classification label with the largest number of votes as the final turnout fault prediction result.
2. The turnout fault diagnosis method based on a random forest as claimed in claim 1, wherein the grid search method in step two selects the optimized random forest parameters as follows:
1) the number of decision trees in the forest is set to x, and the number of features selected for the feature subset when a decision tree node is split is set to y; for x and y, step sizes a and b are set respectively, an iterative search is performed, the classification accuracy is computed with the random forest algorithm, and the optimal parameters of the first iterative search, (x1, y1), are obtained;
2) for x1 and y1, their neighborhoods (x1 - a, x1 + a) and (y1 - b, y1 + b) are searched again with smaller step sizes c and d, giving the optimal parameters of the second iterative search, (x2, y2);
3) for x2 and y2, the search range and step size are reduced further and the search is iterated until the step size is 1, giving the final optimal parameters (x0, y0);
at this point, the number of decision trees in the forest is set to x0 and the number of features selected for the feature subset when a decision tree node is split is set to y0.
CN201911216430.6A 2019-12-02 2019-12-02 Turnout fault diagnosis method based on random forest Pending CN111046931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911216430.6A CN111046931A (en) 2019-12-02 2019-12-02 Turnout fault diagnosis method based on random forest


Publications (1)

Publication Number Publication Date
CN111046931A true CN111046931A (en) 2020-04-21

Family

ID=70234422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911216430.6A Pending CN111046931A (en) 2019-12-02 2019-12-02 Turnout fault diagnosis method based on random forest

Country Status (1)

Country Link
CN (1) CN111046931A (en)


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
温博文; 董文瀚; 解武杰; 马骏: "Random forest parameter optimization based on an improved grid search algorithm", Computer Engineering and Applications *
蔡金锭; 鄢仁武: "Fault diagnosis of power electronic circuits based on wavelet analysis and the random forest algorithm", Journal of Electric Power Science and Technology *
鄢仁武 et al.: "Fault diagnosis technology for power electronic circuits based on random forests", Engineering Journal of Wuhan University *
闫东阳: "Optimization of object-based random forest remote sensing classification methods", China Master's Theses Full-text Database (Basic Sciences) *
黄蕾: "Turnout fault analysis based on correlation of switch machine action power curves", Ordnance Industry Automation *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709747A (en) * 2020-05-09 2021-11-26 中国移动通信集团有限公司 Harassment number identification method and device, computer equipment and storage medium
CN113709747B (en) * 2020-05-09 2023-10-13 中国移动通信集团有限公司 Harassment number identification method and device, computer equipment and storage medium
CN111723520A (en) * 2020-05-29 2020-09-29 国网四川省电力公司电力科学研究院 Transformer fault diagnosis device and method based on decision tree and random forest
CN111737918A (en) * 2020-06-24 2020-10-02 大连理工大学 Integrated learning method for fault diagnosis of high-pressure rotor of aircraft engine
CN111881159A (en) * 2020-08-05 2020-11-03 长沙理工大学 Fault detection method and device based on cost-sensitive extreme random forest
CN111881159B (en) * 2020-08-05 2022-05-31 长沙理工大学 Fault detection method and device based on cost-sensitive extreme random forest
CN111985571A (en) * 2020-08-26 2020-11-24 国网湖南省电力有限公司 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on improved random forest algorithm
CN111985571B (en) * 2020-08-26 2022-09-09 国网湖南省电力有限公司 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment
CN112269878A (en) * 2020-11-02 2021-01-26 成都纬创立科技有限公司 Interpretable law decision prediction method, interpretable law decision prediction device, electronic equipment and storage medium
CN112269878B (en) * 2020-11-02 2024-03-26 成都纬创立科技有限公司 Interpretable legal decision prediction method, interpretable legal decision prediction device, electronic equipment and storage medium
CN112329341A (en) * 2020-11-02 2021-02-05 浙江智昌机器人科技有限公司 Fault diagnosis system and method based on AR and random forest model
CN112307287B (en) * 2020-11-11 2022-08-02 国网山东省电力公司威海供电公司 Cloud edge cooperative architecture based power internet of things data classification processing method and device
CN112307287A (en) * 2020-11-11 2021-02-02 国网山东省电力公司威海供电公司 Cloud edge cooperative architecture based power internet of things data classification processing method and device
CN112364929A (en) * 2020-11-19 2021-02-12 浙江工业大学 Random forest classification method in power plant fault data diagnosis project
CN112561176A (en) * 2020-12-21 2021-03-26 深圳供电局有限公司 Early warning method for online running state of electric power metering device
CN112633733A (en) * 2020-12-30 2021-04-09 武汉轻工大学 Random forest soil heavy metal risk evaluation method and system based on credibility
CN113076982B (en) * 2021-03-25 2024-04-26 南京晨光集团有限责任公司 Fault diagnosis and test method based on proportional valve shaft controller
CN113076982A (en) * 2021-03-25 2021-07-06 南京晨光集团有限责任公司 Fault diagnosis and test method based on proportional valve shaft controller
CN113095511A (en) * 2021-04-16 2021-07-09 广东电网有限责任公司 Method and device for judging in-place operation of automatic master station
CN113190460A (en) * 2021-05-25 2021-07-30 中国工商银行股份有限公司 Method and device for automatically generating test cases
CN113190460B (en) * 2021-05-25 2024-03-15 中国工商银行股份有限公司 Automatic test case generation method and device
CN113408068A (en) * 2021-06-18 2021-09-17 浙江大学 Random forest classification machine pump fault diagnosis method and device
CN113780351A (en) * 2021-08-10 2021-12-10 北京自动化控制设备研究所 Satellite receiver fault diagnosis method based on random forest
CN114326660A (en) * 2021-12-13 2022-04-12 中国航发北京航科发动机控制系统科技有限公司 RSS-ETR-based intelligent debugging method for fuel pump regulator
CN114187533A (en) * 2022-02-15 2022-03-15 西南交通大学 GB-InSAR (GB-InSAR) atmospheric correction method based on random forest time sequence classification
CN114187533B (en) * 2022-02-15 2022-05-03 西南交通大学 GB-InSAR (GB-InSAR) atmospheric correction method based on random forest time sequence classification
CN114781762B (en) * 2022-06-21 2022-09-23 四川观想科技股份有限公司 Equipment fault prediction method based on life consumption
CN114781762A (en) * 2022-06-21 2022-07-22 四川观想科技股份有限公司 Equipment fault prediction method based on life consumption

Similar Documents

Publication Publication Date Title
CN111046931A (en) Turnout fault diagnosis method based on random forest
CN110596492B (en) Transformer fault diagnosis method based on particle swarm optimization random forest model
CN108921285B (en) Bidirectional gate control cyclic neural network-based classification method for power quality disturbance
CN110519128B (en) Random forest based operating system identification method
WO2023044978A1 (en) Adversarial-flow-model-based unsupervised fault diagnosis method for mechanical device
CN107590506A (en) A kind of complex device method for diagnosing faults of feature based processing
CN110460605A (en) A kind of Abnormal network traffic detection method based on autocoding
CN109948726B (en) Power quality disturbance classification method based on deep forest
CN104866558A (en) Training method of social networking account mapping model, mapping method and system
CN105930792A (en) Human action classification method based on video local feature dictionary
CN105304078A (en) Target sound data training device and target sound data training method
CN111078876A (en) Short text classification method and system based on multi-model integration
Thaler et al. Towards a neural language model for signature extraction from forensic logs
CN112756759A (en) Spot welding robot workstation fault judgment method
JP2023515731A (en) Simulation Method Using Master Equations for Quantum Conditions in Simulating Quantum Transfer Processes Using Recurrent Neural Networks
CN113159139B (en) Damage state diagnosis method based on improved acoustic emission density clustering
CN110942098A (en) Power supply service quality analysis method based on Bayesian pruning decision tree
CN112882899B (en) Log abnormality detection method and device
CN113255591A (en) Bearing fault diagnosis method based on random forest and fusion characteristics
CN113205125A (en) XGboost-based extra-high voltage converter valve operation state evaluation method
CN104468276B (en) Network flow identification method based on random sampling multi-categorizer
CN110879802A (en) Log pattern extraction and matching method
CN110543675A (en) Power transmission line fault identification method
CN115470839A (en) Power transformer fault diagnosis method
CN115278752A (en) AI (Artificial intelligence) detection method for abnormal logs of 5G (third generation) communication system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination