CN111985571B - Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment - Google Patents


Info

Publication number
CN111985571B
CN111985571B
Authority
CN
China
Prior art keywords: sample, characteristic, value, samples, prediction
Legal status: Active
Application number: CN202010872318.4A
Other languages: Chinese (zh)
Other versions: CN111985571A (en)
Inventor
邓威
唐海国
朱吉然
张帝
游金梁
彭涛
康童
叶丹
Current Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd, and State Grid Hunan Electric Power Co Ltd
Priority to CN202010872318.4A
Publication of CN111985571A
Application granted
Publication of CN111985571B


Classifications

    • G06F18/24323 Pattern recognition; classification techniques; tree-organised classifiers
    • G06F18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q50/06 Energy or water supply
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on an improved random forest algorithm. The optimal voting weight a is adaptively adjusted according to the proportion of samples predicted correctly, targeting 100% prediction accuracy, and the voting result is then weighted with this optimal value a so that the prediction accuracy is optimal.

Description

Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment
Technical Field
The invention belongs to the field of intelligent low-voltage fault prediction, and particularly relates to a low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on an improved random forest algorithm.
Background
The random forest algorithm is a supervised learning algorithm in artificial intelligence. A distribution network emergency-repair fault amount prediction method based on the random forest algorithm has already been proposed, in which the predicted fault amount serves as a basis for the reasonable allocation of repair resources and teams. The prior art has also proposed using grey projection to improve the random forest algorithm for short-term system load prediction. In the field of distribution network fault amount prediction, a fault prediction model based on a grey projection random forest classification algorithm predicts the number of faults for different areas and voltage classes from historical faults, loads, weather, area classification and similar information, providing a reference for allocating the repair resources and teams of each unit; however, its prediction accuracy still needs to be improved.
Disclosure of Invention
The invention provides a low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on an improved random forest algorithm. The improved random forest algorithm adaptively adjusts the weights of the incidence matrix and the voting weights, keeping the accuracy at the optimal target of 100%, which effectively improves prediction accuracy.
The technical scheme provided by the invention is as follows:
In one aspect, a low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm comprises the following steps:
Step 1: obtain the historical characteristic values of the A-, B- and C-phase voltages and currents, and construct a historical sample feature set;
The feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n, where m is the number of characteristic values in each sample, n is the number of historical samples, and s_im is the m-th characteristic value of the i-th historical sample;
Step 2: calculate an n × m correlation judgment matrix Z from the correlation between the historical sample feature vectors and the feature vectors of the samples to be predicted;
[Equation image: Z = [z_ij], an n × m matrix]
where z_ij denotes the correlation between the j-th characteristic value of the i-th to n-th historical samples and the j-th column of characteristic values of the sample set to be predicted, i = 2, ..., n, j = 1, ..., m;
[Equation image: closed form of z_ij]
where n is the number of historical samples, m is the number of sample characteristic values, and l is the number of samples to be predicted; x_ej is the j-th characteristic value of the e-th historical sample, and y_kj is the j-th characteristic value of the k-th sample to be predicted;
[Equation image: definitions of the column means x̄_j and ȳ_j]
x̄_j and ȳ_j are the means of the j-th characteristic value over all samples of the historical sample feature set and the sample feature set to be predicted, respectively;
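The closed form of z_ij survives only as an equation image, but the surrounding text (column means x̄_j and ȳ_j, and the grey projection random forest cited in the background) suggests a grey-relational style similarity. A minimal sketch under that assumption; the function name, the resolution coefficient ρ, and the comparison against prediction-set column means are illustrative choices, not the patent's exact formula:

```python
import numpy as np

def correlation_matrix(X_hist, Y_pred, rho=0.5):
    """Correlation judgment matrix Z (n x m) for step 2, sketched as a
    grey relational coefficient between each historical sample entry and
    the column mean of the prediction set (a hedged assumption, since the
    patent's exact z_ij formula is an unreproduced equation image)."""
    yb = Y_pred.mean(axis=0)          # ybar_j, column means of the prediction set
    delta = np.abs(X_hist - yb)       # deviation of every historical entry from ybar_j
    dmin, dmax = delta.min(), delta.max()
    # Grey relational coefficient: values in (0, 1], larger = more similar.
    return (dmin + rho * dmax) / (delta + rho * dmax)
```

With ρ = 0.5 every z_ij lies in (0, 1], with larger values meaning the historical entry is closer to the corresponding prediction-set column mean.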
Step 3: construct a feature weight matrix W = [W_1, W_2, ..., W_i, ..., W_n]^T, where W_i = [w_1, w_2, ..., w_j, ..., w_m] and w_j is the weight of the j-th characteristic value; each weight is initialised to a random value with 0 ≤ w_j ≤ 1;
Step 4: compute the association decision matrix U as the elementwise (dot) product of the feature weight matrix W and the correlation judgment matrix Z;
[Equation image: U = W ∘ Z]
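The "dot product" of step 4 is an elementwise (Hadamard) product of the two equally sized matrices, which in NumPy is simply the `*` operator (the matrix values below are illustrative):

```python
import numpy as np

# Association decision matrix U as the elementwise product of the
# feature weight matrix W and the correlation judgment matrix Z.
W = np.array([[0.5, 0.2],
              [0.1, 0.9]])   # illustrative weights, 0 <= w_j <= 1
Z = np.array([[0.4, 0.6],
              [0.8, 0.3]])   # illustrative correlations
U = W * Z                    # u_ij = w_ij * z_ij, elementwise
```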
Step 5: randomly select d features from the historical sample feature set as a training sample set, and obtain the weighted voting value a of a decision tree;
[Equation image: closed form of the weighted voting value a]
where λ is a parameter tuning factor initialised to a random value with 0 ≤ λ ≤ 1; r_j is the correlation between the j-th feature column of the training sample set and the j-th feature column of the sample set to be predicted; x_fj is the j-th characteristic value of the f-th training sample, and y_kj is the j-th characteristic value of the k-th sample to be predicted, f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
[Equation image: closed form of r_j]
[Equation image: definitions of the column means x̄_j and ȳ_j over the training set and the prediction set]
x̄_j and ȳ_j are the means of the j-th characteristic value over all samples of the training sample set and the sample set to be predicted, respectively; t is the number of training samples;
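The closed forms of a and r_j exist only as equation images. As a hedged stand-in, the sketch below scales a simple column-mean similarity (an assumed surrogate for r_j, not the patent's formula) by the tuning factor λ:

```python
import numpy as np

def weighted_vote_value(X_train, Y_pred, lam=0.5):
    """Weighted voting value a for one decision tree (step 5 sketch).

    The patent's closed form is an unreproduced equation image; here r_j
    is replaced by an assumed surrogate, a similarity between the column
    means of the training set and the prediction set, and a is lambda
    times the average of r_j over the features."""
    # r_j -> 1 when the j-th column means coincide, -> 0 as they diverge.
    r = 1.0 / (1.0 + np.abs(X_train.mean(axis=0) - Y_pred.mean(axis=0)))
    return lam * r.mean()
```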
Step 6: use U to select a similar historical sample feature set from the set of historical sample feature vectors;
The thresholds are adaptively adjusted according to the required number of similar historical samples: a threshold η_q, q = 1, 2, ..., m, is set for each column of characteristic values. If an element of the matrix U{S_i} is greater than or equal to its threshold, i.e. z_ij w_j s_ij ≥ η_q, i = 2, ..., n, j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of U{S_i}, t × m elements satisfying z_ij w_j s_ij ≥ η_q are accumulated to form the similar historical sample feature set S_u.
Fewer than m elements of a given row may satisfy the condition, so additional rows are selected until every chosen row contributes m elements, finally yielding t × m qualifying elements arranged as a t × m matrix.
[Equation image: the t × m similar historical sample matrix S_u]
The number of similar samples equals the number of elements of the t × m matrix, and the threshold η_q is adaptively adjusted according to t; the adjustment criterion is that, taking v elements per row over g rows of U{S_i} that exceed the threshold η_q, the number of similar samples stays equal to the number of elements of the t × m matrix;
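A simplified sketch of the step 6 selection rule, assuming the per-element score is z_ij·w_j·s_ij as stated in the text; for brevity it keeps only rows in which every column clears its threshold, rather than accumulating partial rows as the patent describes:

```python
import numpy as np

def select_similar(U, S, eta, t):
    """Pick rows of the historical feature set S whose weighted scores
    meet the per-column thresholds eta, stopping after t rows qualify.
    Simplified: a row counts as 'similar' only if every element
    z_ij * w_j * s_ij >= eta_j."""
    scores = U * S                        # elementwise z_ij * w_j * s_ij
    mask = (scores >= eta).all(axis=1)    # rows meeting every column threshold
    idx = np.flatnonzero(mask)[:t]        # first t qualifying rows
    return S[idx]
```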
Step 7: train the decision trees of the random forest with the selected similar historical sample feature set and the corresponding fault categories, obtaining a trained random forest;
Step 8: weight the fault prediction result of each decision tree in the random forest with the decision tree's weighted voting value a and the association decision matrix U, and adjust a towards the target of 100% prediction accuracy to obtain the final prediction model;
Step 9: input the feature vector of the sample to be predicted into the final prediction model to obtain the final fault prediction result;
Using the initial values of a and I(·) and the 100% prediction-accuracy target, substitute into f_RF(X), compute U, and adaptively update W based on the value of Z obtained in step 2;
[Equation image: closed form of f_RF(X), a weighted vote over the decision trees]
where f_RF(X) denotes the final prediction model, I(·) counts the number of times the expression in parentheses holds, f_l^tree(X) = i means that the fault prediction result of the l-th decision tree of the trained random forest is class i, and c is the number of fault prediction result categories of the whole random forest;
[Equation image: definition of the prediction accuracy rate]
The number of times f_l^tree(X) = i is predicted correctly is taken as the number of samples whose final fault prediction is accurate, and the number of predicted sample feature vectors is taken as the number of predicted samples.
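Steps 8 and 9 amount to a weighted majority vote: each tree's predicted class is counted with a weight, and the class with the largest weighted total wins. A minimal sketch (the per-tree weight array is an illustrative generalisation; the patent uses a single adaptively tuned value a combined with U):

```python
import numpy as np

def weighted_forest_vote(tree_preds, a):
    """Final forest decision: tree l votes for its predicted class with
    weight a[l]; the class with the largest weighted vote total wins.
    tree_preds: array of per-tree class labels; a: matching weights."""
    classes = np.unique(tree_preds)
    totals = {c: a[tree_preds == c].sum() for c in classes}
    return max(totals, key=totals.get)
```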
Further, the decision trees of the random forest are trained as follows:
(1) Set the parameters;
The number of historical sample features is taken as the feature dimension of each decision tree; the number of times the voltage and current characteristic values must be judged, as the number of decision trees; the configured number of decision time intervals, as the depth of each decision tree; and the minimum number of samples on a node is set to the number of samples collected in one day;
The actual number of samples is the number of characteristic values multiplied by the number of sampling instants; the minimum information gain on a node is 1; and the root node of each decision tree corresponds to the fault amount of the similar historical sample feature set;
(2) Select samples;
Select a training subset X_i from the historical sample set X as the samples of the root node;
(3) Split on features;
If the current node has reached the termination condition, i.e. it has been trained or is marked as a leaf node and no further node characteristic values are available for decisions, it is set as a leaf node; the prediction output of the leaf node is the class c_i with the largest count in the node's sample set, and the probability p_i denotes the proportion of class c_i in the current sample set;
If the current node has not reached the termination condition, z features are randomly selected without replacement from the Z-dimensional features; among these z features, the one-dimensional feature k with the best classification effect and its threshold t_h are found;
When the one-dimensional feature k with the best classification effect is computed, the optimal thresholds of the various discrimination types are determined; samples on the current node whose k-th characteristic value is smaller than the feature threshold of the corresponding discrimination type are assigned to the left child node, and the rest to the right child node. The remaining nodes are then trained in the same way, yielding a weak classifier;
For example, voltage loss, undervoltage, overvoltage, overcurrent, undercurrent, overload, reversal, phase failure, residual-current fault and normal power failure serve as the discrimination types; the A-, B- and C-phase voltage and current values serve as characteristic values; and the thresholds are set to the feature thresholds of those discrimination types;
(4) Continue splitting;
Repeat steps (2) and (3) until all nodes have been trained or marked as leaf nodes;
(5) Output the prediction;
Each of the t trees outputs a predicted value at its leaf nodes, namely the class c_i with the largest prediction probability accumulated over all trees; once the weak classifiers reach a sufficient number, a strong classifier is obtained through the voting strategy, giving the decision trees of the random forest.
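The feature split of step (3), finding the one-dimensional feature k and threshold t_h with the best classification effect and sending samples with x_k < t_h left and the rest right, can be sketched with Gini impurity as the split criterion (the patent text speaks of information gain; Gini is used here purely to keep the sketch short, and the names are illustrative):

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum_i p_i^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y, feat_idx):
    """Among the randomly chosen features feat_idx, find the feature k and
    threshold t_h with the lowest weighted Gini impurity: samples with
    X[:, k] < t_h go to the left child, the rest to the right child."""
    best = (None, None, float("inf"))     # (k, t_h, impurity)
    for k in feat_idx:
        for t_h in np.unique(X[:, k]):
            left, right = y[X[:, k] < t_h], y[X[:, k] >= t_h]
            if len(left) == 0 or len(right) == 0:
                continue                  # degenerate split, skip
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (k, t_h, score)
    return best
```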
Further, "the weak classifiers reach a sufficient number" means that they satisfy the margin (boundary) function.
A random forest is a collection of tree classifiers {h(x, θ_i), i = 1, 2, 3, ...}, where h(x, θ_i), the meta-classifier of the model, is an unpruned classification and regression tree built with the CART algorithm; x is the training data set of the random forest, a set of multi-dimensional vectors; and θ_i is an independent, identically distributed data vector set drawn at random from x by the bagging algorithm. θ_i determines the classification ability of the corresponding decision tree.
The random forest model can be described as a set of weak classifiers {h_1(x), h_2(x), ..., h_k(x)}, a classifier ensemble of k (k > 1) sub-classifiers; inputting a prediction vector x yields a prediction output y. For the sample data set (x, y), the margin function is defined as:
margin(x, y) = av_k I(h_k(x) = y) − max_{j≠y} av_k I(h_k(x) = j)
where I(·) is the indicator function, taking the value 1 when its condition holds and 0 otherwise, and av_k(·) averages over the ensemble. For a given vector, the margin function computes the average number of correct votes and the maximum number of votes for any wrong class, and takes the difference of these two quantities. Clearly, the larger the value of the margin function, the stronger the predictive ability of the classifier ensemble and the higher its confidence.
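The margin function above can be checked numerically; per sample it is the vote share of the true class minus the largest vote share of any wrong class:

```python
import numpy as np

def margin(votes, y):
    """Breiman-style margin for one sample: average vote share for the
    true class y minus the largest vote share among the wrong classes.
    votes: the class label predicted by each of the k classifiers."""
    votes = np.asarray(votes)
    share = {c: float(np.mean(votes == c)) for c in np.unique(votes)}
    true_share = share.get(y, 0.0)
    wrong = [s for c, s in share.items() if c != y]
    return true_share - (max(wrong) if wrong else 0.0)
```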
In another aspect, a low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm comprises:
A historical sample feature set construction unit, which obtains the historical characteristic values of the A-, B- and C-phase voltages and currents and constructs a historical sample feature set;
The feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n, where m is the number of characteristic values in each sample, n is the number of historical samples, and s_im is the m-th characteristic value of the i-th historical sample;
An association judgment matrix calculation unit, which calculates an n × m correlation judgment matrix Z from the correlation between the historical sample feature vectors and the feature vectors of the samples to be predicted;
[Equation image: Z = [z_ij], an n × m matrix]
where z_ij denotes the correlation between the j-th characteristic value of the i-th to n-th historical samples and the j-th column of characteristic values of the sample set to be predicted, i = 2, ..., n, j = 1, ..., m;
[Equation image: closed form of z_ij]
where n is the number of historical samples, m is the number of sample characteristic values, and l is the number of samples to be predicted; x_ej is the j-th characteristic value of the e-th historical sample, and y_kj is the j-th characteristic value of the k-th sample to be predicted; x̄_j and ȳ_j are the means of the j-th characteristic value over all samples of the historical sample feature set and the sample feature set to be predicted, respectively;
A feature weight matrix construction unit, which constructs the weight matrix from the weight of each characteristic value;
W = [W_1, W_2, ..., W_i, ..., W_n]^T, where W_i = [w_1, w_2, ..., w_j, ..., w_m] and w_j is the weight of the j-th characteristic value; each weight is initialised to a random value with 0 ≤ w_j ≤ 1;
An association decision matrix calculation unit, which computes the association decision matrix U as the elementwise (dot) product of the feature weight matrix W and the correlation judgment matrix Z;
A decision tree weighted voting value calculation unit, which randomly selects d features from the historical sample feature set as a training sample set and obtains the weighted voting value a of a decision tree;
[Equation image: closed form of the weighted voting value a]
where λ is a parameter tuning factor initialised to a random value with 0 ≤ λ ≤ 1; r_j is the correlation between the j-th feature column of the training sample set and the j-th feature column of the sample set to be predicted; x_fj is the j-th characteristic value of the f-th training sample, and y_kj is the j-th characteristic value of the k-th sample to be predicted, f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
[Equation image: closed form of r_j]
[Equation image: definitions of the column means x̄_j and ȳ_j over the training set and the prediction set]
x̄_j and ȳ_j are the means of the j-th characteristic value over all samples of the training sample set and the sample set to be predicted, respectively; t is the number of training samples;
A similar historical sample feature set selection unit, which uses U to select a similar historical sample feature set from the set of historical sample feature vectors;
The thresholds are adaptively adjusted according to the required number of similar historical samples: a threshold η_q, q = 1, 2, ..., m, is set for each column of characteristic values. If an element of the matrix U{S_i} is greater than or equal to its threshold, i.e. z_ij w_j s_ij ≥ η_q, i = 2, ..., n, j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of U{S_i}, t × m elements satisfying z_ij w_j s_ij ≥ η_q are accumulated to form the similar historical sample feature set S_u;
A random forest training unit, which trains the decision trees of the random forest with the selected similar historical sample feature set and the corresponding fault categories, obtaining a trained random forest;
A prediction model acquisition unit, which weights the fault prediction result of each decision tree in the random forest with the decision tree's weighted voting value a and the association decision matrix U, and adjusts a towards the target of 100% prediction accuracy to obtain the final prediction model;
A result prediction unit, which inputs the feature vector of the sample to be predicted into the final prediction model to obtain the final fault prediction result;
Using the initial values of a and I(·) and the 100% prediction-accuracy target, substitute into f_RF(X), compute U, and adaptively update W based on the value of Z obtained by the association judgment matrix calculation unit;
[Equation image: closed form of f_RF(X), a weighted vote over the decision trees]
where f_RF(X) denotes the final prediction model, I(·) counts the number of times the expression in parentheses holds, f_l^tree(X) = i means that the fault prediction result of the l-th decision tree of the trained random forest is class i, and c is the number of fault prediction result categories of the whole random forest;
[Equation image: definition of the prediction accuracy rate]
The number of times f_l^tree(X) = i is predicted correctly is taken as the number of samples whose final fault prediction is accurate, and the number of predicted sample feature vectors is taken as the number of predicted samples.
In another aspect, a computer storage medium comprises a computer program which, when executed by a processor, implements the low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm.
In another aspect, a low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm comprises a processor and a memory;
The memory stores a computer program, and the processor executes the computer program stored in the memory, so that the low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm performs the low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm.
Advantageous effects
The invention provides a low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment based on an improved random forest algorithm. The optimal voting weight a is adaptively adjusted according to the proportion of samples predicted correctly, targeting 100% prediction accuracy, and the voting result is then weighted with this optimal value a to achieve optimal prediction accuracy.
Drawings
FIG. 1 is a schematic flow diagram of a process according to an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the following figures and examples.
As shown in FIG. 1, a method for predicting a fault of a low-voltage intelligent monitoring terminal based on an improved random forest algorithm comprises:
Step 1: obtain the historical characteristic values of the A-, B- and C-phase voltages and currents, and construct a historical sample feature set;
The feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n, where m is the number of characteristic values in each sample, n is the number of historical samples, and s_im is the m-th characteristic value of the i-th historical sample;
Step 2: calculate an n × m correlation judgment matrix Z from the correlation between the historical sample feature vectors and the feature vectors of the samples to be predicted;
[Equation image: Z = [z_ij], an n × m matrix]
where z_ij denotes the correlation between the j-th characteristic value of the i-th to n-th historical samples and the j-th column of characteristic values of the sample set to be predicted, i = 2, ..., n, j = 1, ..., m;
[Equation image: closed form of z_ij]
where n is the number of historical samples, m is the number of sample characteristic values, and l is the number of samples to be predicted; x_ej is the j-th characteristic value of the e-th historical sample, and y_kj is the j-th characteristic value of the k-th sample to be predicted;
[Equation image: definitions of the column means x̄_j and ȳ_j]
x̄_j and ȳ_j are the means of the j-th characteristic value over all samples of the historical sample feature set and the sample feature set to be predicted, respectively;
Step 3: construct a feature weight matrix W = [W_1, W_2, ..., W_i, ..., W_n]^T, where W_i = [w_1, w_2, ..., w_j, ..., w_m] and w_j is the weight of the j-th characteristic value; each weight is initialised to a random value with 0 ≤ w_j ≤ 1;
Step 4: compute the association decision matrix U as the elementwise (dot) product of the feature weight matrix W and the correlation judgment matrix Z;
[Equation image: U = W ∘ Z]
Step 5: randomly select d features from the historical sample feature set as a training sample set, and obtain the weighted voting value a of a decision tree;
[Equation image: closed form of the weighted voting value a]
where λ is a parameter tuning factor initialised to a random value with 0 ≤ λ ≤ 1; r_j is the correlation between the j-th feature column of the training sample set and the j-th feature column of the sample set to be predicted; x_fj is the j-th characteristic value of the f-th training sample, and y_kj is the j-th characteristic value of the k-th sample to be predicted, f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
[Equation image: closed form of r_j]
[Equation image: definitions of the column means x̄_j and ȳ_j over the training set and the prediction set]
x̄_j and ȳ_j are the means of the j-th characteristic value over all samples of the training sample set and the sample set to be predicted, respectively; t is the number of training samples;
Step 6: use U to select a similar historical sample feature set from the set of historical sample feature vectors;
The thresholds are adaptively adjusted according to the required number of similar historical samples: a threshold η_q, q = 1, 2, ..., m, is set for each column of characteristic values. If an element of the matrix U{S_i} is greater than or equal to its threshold, i.e. z_ij w_j s_ij ≥ η_q, i = 2, ..., n, j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of U{S_i}, t × m elements satisfying z_ij w_j s_ij ≥ η_q are accumulated to form the similar historical sample feature set S_u.
Fewer than m elements of a given row may satisfy the condition, so additional rows are selected until every chosen row contributes m elements, finally yielding t × m qualifying elements arranged as a t × m matrix.
[Equation image: the t × m similar historical sample matrix S_u]
The number of similar samples equals the number of elements of the t × m matrix, and the threshold η_q is adaptively adjusted according to t; the adjustment criterion is that, taking v elements per row over g rows of U{S_i} that exceed the threshold η_q, the number of similar samples stays equal to the number of elements of the t × m matrix;
and 7: training a decision tree in the random forest by using the selected similar historical sample feature set and the corresponding fault category to obtain a trained random forest;
and 8: weighting the fault prediction result of each decision tree in the random forest by using the weighted voting value a of the decision tree and the associated decision matrix U, and adjusting a to obtain a final prediction model with the prediction accuracy as a target of 100%;
and step 9: inputting the characteristic vector of the sample to be predicted into a final prediction model to obtain a final fault prediction result;
substituting f into the initial values a and I (DEG) with the prediction accuracy of 100 percent RF (X), calculating U, and updating W in an adaptive mode based on the Z value obtained in the step 2;
[equation image: f_RF(X), the weighted majority vote of the decision trees, of the form argmax over i of the a- and U-weighted sum of I(f_l^tree(X) = i)]
wherein f_RF(X) represents the final prediction model, I(·) counts the expressions in parentheses that are satisfied, f_l^tree(X) = i indicates that the fault prediction result of the l-th decision tree in the trained random forest is i, and c represents the number of fault prediction result categories of the whole random forest,
[equation image: prediction accuracy = number of accurately predicted samples / number of prediction samples × 100%]
The number of times f_l^tree(X) = i holds, i.e. the number of accurate predictions, is taken as the number of samples whose final fault prediction is accurate, and the number of prediction sample feature vectors is taken as the number of prediction samples.
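The weighted voting of step 8 and the accuracy check of step 9 can be illustrated with a short sketch. It is an assumption-laden simplification: the patent's combined a- and U-weighting is collapsed into a single weight per tree, and all names and values are made-up stand-ins.

```python
import numpy as np

# Illustrative weighted majority vote: each tree casts its predicted class,
# weighted by its voting value a_l; the class with the largest weighted vote
# wins. This simplifies the patent's a- and U-weighted combination.
def weighted_forest_predict(tree_preds, a):
    """tree_preds: (L,) class labels from L trees; a: (L,) voting weights."""
    classes = np.unique(tree_preds)
    votes = [a[tree_preds == c].sum() for c in classes]  # weighted votes per class
    return int(classes[int(np.argmax(votes))])

def prediction_accuracy(y_pred, y_true):
    # accurate predictions / number of prediction samples (step 9 targets 100%)
    return float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))
```

For example, five trees voting [0, 1, 1, 0, 2] with weights [0.5, 0.2, 0.2, 0.4, 0.1] give class 0 a weighted vote of 0.9, which wins even though class 1 received as many raw votes.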
The process of training the decision tree in the random forest is as follows:
(1) setting parameters;
the number of historical sample characteristics is taken as the feature dimension of each decision tree, the number of times the voltage and current characteristic values are judged as the number of decision trees, the set number of decision time intervals as the level of each decision tree node, and the number of samples collected in one day as the minimum sample number on a node;
the actual number of samples is the number of characteristic values multiplied by the number of sampling times; the minimum information gain on a node is 1, and the root node of each decision tree corresponds to the fault quantity of the similar historical sample feature set;
(2) selecting a sample;
selecting a training subset X_i from the historical sample set X as root-node samples;
(3) dividing the characteristics;
if the current node reaches the termination condition, i.e. it has been trained or marked as a leaf node and no more node characteristic values are available for decision, the current node is set as a leaf node whose prediction output is the class c_i with the largest count in the current node's sample set, with probability p_i denoting the proportion of class c_i in the current sample set;
if the current node does not reach the termination condition, z features are randomly selected from the Z-dimensional features without replacement; using these z-dimensional features, the one-dimensional feature k with the best classification effect and its threshold t_h are found;
when computing the one-dimensional feature k with the best classification effect, the optimal thresholds of the various discrimination types are determined; samples on the current node whose k-th dimensional characteristic value is smaller than the characteristic threshold of the corresponding discrimination type are divided into the left node, and the rest into the right node. The other nodes are then trained in the same way to obtain a weak classifier;
for example, voltage loss, undervoltage, overvoltage, overcurrent, undercurrent, overload, reversal, phase failure, residual current fault and normal power failure are used as the discrimination types; the A-, B- and C-phase voltage and current values are used as characteristic values; and the threshold is set to the characteristic threshold of each discrimination type (voltage loss, undervoltage, overvoltage, overcurrent, undercurrent, overload, reversal, phase failure, residual current fault and normal power failure);
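The left/right division in step (3) amounts to thresholding one feature. A minimal sketch, with a hypothetical feature index and threshold value:

```python
# Minimal sketch of the node split in step (3): samples whose k-th feature
# value is below the threshold t_h of the corresponding discrimination type
# go to the left child node, the rest to the right child node.
def split_node(samples, k, t_h):
    """samples: list of feature vectors; k: feature index; t_h: threshold."""
    left = [s for s in samples if s[k] < t_h]     # below threshold -> left node
    right = [s for s in samples if s[k] >= t_h]   # otherwise -> right node
    return left, right
```

With a phase-voltage feature at index 1 and an illustrative undervoltage threshold of 200, a sample reading 180 would fall into the left node and readings of 230 or 250 into the right node.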
(4) continuously dividing;
repeating steps (2) and (3) until all nodes are trained or labeled as leaf nodes;
(5) outputting the prediction;
a predicted value is output at each left and right leaf node of the t trees, the predicted value being the class c_i with the maximum accumulated prediction probability over all trees; when the weak classifiers reach a certain number, a strong classifier is obtained through a voting strategy, yielding the decision tree in the random forest.
Here, the weak classifiers reaching a certain number means that the weak classifiers reach the boundary function.
A random forest is a collection of multiple tree classifiers, defined as {h(x, θ_i), i = 1, 2, 3, ...}
wherein h(x, θ_i) is the meta-classifier of the model, a classification and regression tree constructed by the CART algorithm without pruning; x represents the training data set of the random forest, a multi-dimensional vector set; θ_i is an independently and identically distributed data vector set randomly extracted from x by the bagging algorithm. θ_i determines the classification capability of the corresponding decision tree.
The random forest model can be described as a set of such weak classifiers:
{h_1(x), h_2(x), ..., h_k(x)}
a classifier set composed of k (k > 1) sub-classifiers; inputting a prediction vector x yields a prediction output y. For the sample data set (x, y), a boundary function is defined as:
margin(x, y) = av_k I(h_k(x) = y) − max_{j≠y} av_k I(h_k(x) = j)
I(func) is an indicator function that takes the value 1 when the condition func is satisfied and 0 otherwise; av_k(·) denotes averaging over the set. For a given vector, the boundary function computes the average number of correct votes by the weak classifiers and the maximum number of votes in the case of a wrong prediction, and takes the difference of the two. Clearly, the larger the value of the boundary function, the stronger the predictive power of the classifier set and the higher its confidence.
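The boundary (margin) function above can be computed directly from the sub-classifiers' votes. A small sketch with hypothetical votes:

```python
import numpy as np

# margin(x, y) = av_k I(h_k(x) = y) - max_{j != y} av_k I(h_k(x) = j):
# the average share of correct votes minus the largest share of votes for
# any wrong class. A positive margin means the ensemble favours the true class.
def margin(votes, y):
    """votes: class labels predicted by the k sub-classifiers; y: true label."""
    votes = np.asarray(votes)
    correct = float(np.mean(votes == y))            # average correct votes
    wrong_classes = set(votes.tolist()) - {y}
    wrong = max((float(np.mean(votes == j)) for j in wrong_classes), default=0.0)
    return correct - wrong
```

For votes [1, 1, 2, 1, 3] with true label 1, the correct share is 0.6 and the largest wrong share is 0.2, giving a margin of 0.4.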
A low-voltage intelligent monitoring terminal fault prediction device based on an improved random forest algorithm includes:
a historical sample feature set construction unit: obtaining A, B, C historical characteristic values of three-phase voltage and current, and constructing a historical sample characteristic set;
the feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n; m represents the number of characteristic values contained in each sample, n represents the size of the historical sample set, and s_im represents the m-th characteristic value of the i-th historical sample;
an association judgment matrix calculation unit: calculating a correlation judgment matrix Z with the size of n x m by calculating the correlation between the historical sample feature vector and the sample feature vector to be predicted;
Z = [z_ij]_{n×m}
wherein Z is ij The correlation between the jth eigenvalue of the ith to nth history samples and the jth column eigenvalue of the sample set to be predicted is represented, i is 2, …, n, j is 1, …, m,
[equation image: definition of the correlation z_ij in terms of x_ej, y_kj, x̄_j and ȳ_j]
wherein n represents the number of historical samples, m the number of sample characteristic values, and l the number of samples to be predicted; x_ej represents the j-th characteristic value of the e-th historical sample, and y_kj the j-th characteristic value of the k-th sample to be predicted;
x̄_j = (1/n) Σ_{e=1}^{n} x_ej,  ȳ_j = (1/l) Σ_{k=1}^{l} y_kj
respectively averaging jth eigenvalue of all samples in the historical sample characteristic set and the sample characteristic set to be predicted;
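The z_ij entries are given only as an equation image in the source, so the following is a hedged stand-in rather than the patent's formula: it scores each historical value x_ij by its closeness to the column mean ȳ_j of the to-be-predicted set, and the 1/(1 + |·|) similarity form is an assumption.

```python
import numpy as np

# Hedged stand-in for the association judgment matrix Z (n x m): the exact
# z_ij formula appears only as an image in the source, so here z_ij is taken
# as a similarity between x_ij and the column mean ybar_j of the samples to
# be predicted, scaled by the column spread. Purely illustrative.
def association_matrix(X_hist, Y_pred):
    """X_hist: (n, m) historical features; Y_pred: (l, m) features to predict."""
    y_mean = Y_pred.mean(axis=0)              # ybar_j for each feature column
    spread = Y_pred.std(axis=0) + 1e-12       # guard against zero spread
    return 1.0 / (1.0 + np.abs(X_hist - y_mean) / spread)
```

Whatever the true formula, the shape contract is the same as in the text: an n × m matrix whose (i, j) entry grows as the i-th historical sample's j-th feature better matches the prediction set's j-th feature column.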
a characteristic weight matrix construction unit: constructing the weight matrix W from the weight of each characteristic value;
W = [W_1, W_2, ..., W_i, ..., W_n]^T, wherein W_i = [w_1, w_2, ..., w_j, ..., w_m]; w_j is the weight of the j-th characteristic value, initialized to a random value with 0 ≤ w_j ≤ 1;
an association decision matrix calculation unit: performing dot product calculation on the characteristic weight matrix W and the association judgment matrix Z to obtain an association decision matrix U;
a weighted vote value calculation unit of the decision tree: randomly selecting d characteristics from a historical sample characteristic set as a training sample set to obtain a weighted voting value a of a decision tree,
[equation image: definition of the weighted voting value a in terms of λ and r_j]
wherein λ is a parameter adjustment factor, initialized to a random value with 0 ≤ λ ≤ 1; r_j represents the correlation between the j-th feature column of the training sample set and the j-th feature column of the sample set to be predicted; x_fj represents the j-th characteristic value of the f-th training sample, and y_kj the j-th characteristic value of the k-th sample to be predicted, with f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
x̄_j = (1/t) Σ_{f=1}^{t} x_fj,  ȳ_j = (1/l) Σ_{k=1}^{l} y_kj
respectively averaging jth characteristic values of all samples in the training sample set and the sample set to be predicted; t represents the number of training samples;
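Since the formula for a is likewise an equation image, the sketch below uses one plausible reading, a scaling of the average column correlation r_j by the parameter factor λ, and should be read as an assumption, not the patent's definition.

```python
import numpy as np

# Illustrative stand-in for the weighted voting value a of a decision tree:
# a = lambda_ * mean_j(r_j), with r_j the correlation between the j-th
# training feature column and the j-th to-be-predicted feature column.
# The exact functional form in the patent is an equation image.
def weighted_vote_value(r, lambda_):
    """r: (m,) per-column correlations; lambda_: adjustment factor in [0, 1]."""
    return lambda_ * float(np.mean(r))
```

Under this reading, a tree trained on features that correlate strongly with the prediction set receives a proportionally larger vote in the forest, which matches the role a plays in step 8.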
a similar historical sample feature set selection unit: selecting a similar historical sample feature set from the historical sample feature vector set by using U;
adaptively adjusting the threshold according to the set number of similar historical samples: a threshold η_q is set for each column of characteristic values, q = 1, 2, ..., m; if an element of the U·{S_i} matrix is greater than or equal to the set threshold η_q, i.e. z_ij·w_j·s_ij ≥ η_q, i = 2, ..., n; j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of the U·{S_i} matrix, t × m elements satisfying z_ij·w_j·s_ij ≥ η_q are cumulatively selected to form the similar historical sample feature set S_u
A random forest training unit: training a decision tree in the random forest by using the selected similar historical sample feature set and the corresponding fault category to obtain a trained random forest;
a prediction model acquisition unit: weighting the fault prediction result of each decision tree in the random forest by the weighted voting value a of the decision tree and the association decision matrix U, and adjusting a with a target prediction accuracy of 100% to obtain the final prediction model;
a result prediction unit: inputting the characteristic vector of the sample to be predicted into a final prediction model to obtain a final fault prediction result;
substituting the initial values of a and I(·), with a target prediction accuracy of 100%, into f_RF(X), calculating U, and adaptively updating W based on the Z value obtained by the association judgment matrix calculation unit;
[equation image: f_RF(X), the weighted majority vote of the decision trees, of the form argmax over i of the a- and U-weighted sum of I(f_l^tree(X) = i)]
wherein f_RF(X) represents the final prediction model, I(·) counts the expressions in parentheses that are satisfied, f_l^tree(X) = i indicates that the fault prediction result of the l-th decision tree in the trained random forest is i, and c represents the number of fault prediction result categories of the whole random forest,
[equation image: prediction accuracy = number of accurately predicted samples / number of prediction samples × 100%]
The number of times f_l^tree(X) = i holds, i.e. the number of accurate predictions, is taken as the number of samples whose final fault prediction is accurate, and the number of prediction sample feature vectors is taken as the number of prediction samples.
It should be understood that the functional unit modules in the embodiments of the present invention may be integrated into one processing unit, or each unit module may exist alone physically, or two or more unit modules are integrated into one unit module, and may be implemented in the form of hardware or software.
The embodiment of the invention also provides a computer storage medium which comprises a computer program, and the computer program is executed by a processor to realize the low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm. The beneficial effects are referred to in the method part, and are not described in detail herein.
The embodiment of the invention also provides low-voltage intelligent monitoring terminal fault prediction equipment based on the improved random forest algorithm, which comprises a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm executes the low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Although the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications may be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (6)

1. A low-voltage intelligent monitoring terminal fault prediction method based on an improved random forest algorithm is characterized by comprising the following steps:
step 1: obtaining the historical characteristic values of the A-, B- and C-phase voltages and currents, and constructing a historical sample feature set;
the feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n; m represents the number of characteristic values contained in each sample, n represents the size of the historical sample set, and s_im represents the m-th characteristic value of the i-th historical sample;
step 2: calculating a correlation judgment matrix Z with the size of n x m by calculating the correlation between the historical sample feature vector and the sample feature vector to be predicted;
Z = [z_ij]_{n×m}
wherein Z is ij The correlation between the jth eigenvalue of the ith to nth history samples and the jth column eigenvalue of the sample set to be predicted is shown, i is 2, …, n, j is 1, …, m,
[equation image: definition of the correlation z_ij in terms of x_ej, y_kj, x̄_j and ȳ_j]
wherein n represents the number of historical samples, m the number of sample characteristic values, and l the number of samples to be predicted; x_ej represents the j-th characteristic value of the e-th historical sample, and y_kj the j-th characteristic value of the k-th sample to be predicted;
x̄_j = (1/n) Σ_{e=1}^{n} x_ej,  ȳ_j = (1/l) Σ_{k=1}^{l} y_kj
respectively averaging jth eigenvalue of all samples in the historical sample characteristic set and the sample characteristic set to be predicted;
step 3: constructing a feature weight matrix W = [W_1, W_2, ..., W_i, ..., W_n]^T, wherein W_i = [w_1, w_2, ..., w_j, ..., w_m]; w_j is the weight of the j-th characteristic value, initialized to a random value with 0 ≤ w_j ≤ 1;
step 4: performing dot product calculation on the characteristic weight matrix W and the association judgment matrix Z to obtain the association decision matrix U;
[equation image: the association decision matrix U = W · Z]
step 5: randomly selecting d characteristics from the historical sample feature set as a training sample set to obtain the weighted voting value a of a decision tree,
[equation image: definition of the weighted voting value a in terms of λ and r_j]
wherein λ is a parameter adjustment factor, initialized to a random value with 0 ≤ λ ≤ 1; r_j represents the correlation between the j-th feature column of the training sample set and the j-th feature column of the sample set to be predicted; x_fj represents the j-th characteristic value of the f-th training sample, and y_kj the j-th characteristic value of the k-th sample to be predicted, with f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
x̄_j = (1/t) Σ_{f=1}^{t} x_fj,  ȳ_j = (1/l) Σ_{k=1}^{l} y_kj
respectively averaging jth characteristic values of all samples in the training sample set and the sample set to be predicted; t represents the number of training samples;
step 6: selecting a similar historical sample feature set from the historical sample feature vector set by using U;
adaptively adjusting the threshold according to the set number of similar historical samples: a threshold η_q is set for each column of characteristic values, q = 1, 2, ..., m; if an element of the U·{S_i} matrix is greater than or equal to the set threshold η_q, i.e. z_ij·w_j·s_ij ≥ η_q, i = 2, ..., n; j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of the U·{S_i} matrix, t × m elements satisfying z_ij·w_j·s_ij ≥ η_q are cumulatively selected to form the similar historical sample feature set S_u
step 7: training the decision trees in the random forest with the selected similar historical sample feature set and the corresponding fault categories to obtain a trained random forest;
step 8: weighting the fault prediction result of each decision tree in the random forest by the weighted voting value a of the decision tree and the association decision matrix U, and adjusting a with a target prediction accuracy of 100% to obtain the final prediction model;
step 9: inputting the feature vector of the sample to be predicted into the final prediction model to obtain the final fault prediction result;
substituting the initial values of a and I(·), with a target prediction accuracy of 100%, into f_RF(X), calculating U, and adaptively updating W based on the Z value obtained in step 2;
[equation image: f_RF(X), the weighted majority vote of the decision trees, of the form argmax over i of the a- and U-weighted sum of I(f_l^tree(X) = i)]
wherein f_RF(X) represents the final prediction model, I(·) counts the expressions in parentheses that are satisfied, f_l^tree(X) = i indicates that the fault prediction result of the l-th decision tree in the trained random forest is i, and c represents the number of fault prediction result categories of the whole random forest,
[equation image: prediction accuracy = number of accurately predicted samples / number of prediction samples × 100%]
The number of times f_l^tree(X) = i holds, i.e. the number of accurate predictions, is taken as the number of samples whose final fault prediction is accurate, and the number of prediction sample feature vectors is taken as the number of prediction samples.
2. The method for predicting the fault of the low-voltage intelligent monitoring terminal based on the improved random forest algorithm according to claim 1, wherein the process of training the decision tree in the random forest is as follows:
(1) setting parameters;
the number of historical sample characteristics is taken as the feature dimension of each decision tree, the number of times the voltage and current characteristic values are judged as the number of decision trees, the set number of decision time intervals as the level of each decision tree node, and the number of samples collected in one day as the minimum sample number on a node;
the actual number of samples is the number of characteristic values multiplied by the number of sampling times; the minimum information gain on a node is 1, and the root node of each decision tree corresponds to the fault quantity of the similar historical sample feature set;
(2) selecting a sample;
selecting a training subset X_i from the historical sample set X as root-node samples;
(3) dividing the characteristics;
if the current node reaches the termination condition, i.e. it has been trained or marked as a leaf node and no more node characteristic values are available for decision, the current node is set as a leaf node whose prediction output is the class c_i with the largest count in the current node's sample set, with probability p_i denoting the proportion of class c_i in the current sample set;
if the current node does not reach the termination condition, z features are randomly selected from the Z-dimensional features without replacement; using these z-dimensional features, the one-dimensional feature k with the best classification effect and its threshold t_h are found;
when computing the one-dimensional feature k with the best classification effect, the optimal thresholds of the various discrimination types are determined; samples on the current node whose k-th dimensional characteristic value is smaller than the characteristic threshold of the corresponding discrimination type are divided into the left node, and the rest into the right node; the other nodes are then trained in the same way to obtain a weak classifier;
(4) continuously dividing;
repeating steps (2) and (3) until all nodes are trained or labeled as leaf nodes;
(5) outputting the prediction;
a predicted value is output at each left and right leaf node of the t trees, the predicted value being the class c_i with the maximum accumulated prediction probability over all trees; when the weak classifiers reach a certain number, a strong classifier is obtained through a voting strategy, yielding the decision tree in the random forest.
3. The method for predicting the fault of the low-voltage intelligent monitoring terminal based on the improved random forest algorithm as claimed in claim 2, wherein the weak classifiers reaching a certain number refer to the weak classifiers reaching a boundary function.
4. A low-voltage intelligent monitoring terminal fault prediction device based on an improved random forest algorithm, characterized by comprising:
a historical sample feature set construction unit: obtaining A, B, C historical characteristic values of three-phase voltage and current, and constructing a historical sample characteristic set;
the feature vector of the i-th historical sample is S_i = [s_i1, s_i2, ..., s_im], i = 1, 2, ..., n; m represents the number of characteristic values contained in each sample, n represents the size of the historical sample set, and s_im represents the m-th characteristic value of the i-th historical sample;
an association judgment matrix calculation unit: calculating a correlation judgment matrix Z with the size of n x m by calculating the correlation between the historical sample feature vector and the sample feature vector to be predicted;
Z = [z_ij]_{n×m}
wherein Z is ij The correlation between the jth eigenvalue of the ith to nth history samples and the jth column eigenvalue of the sample set to be predicted is shown, i is 2, …, n, j is 1, …, m,
[equation image: definition of the correlation z_ij in terms of x_ej, y_kj, x̄_j and ȳ_j]
wherein n represents the number of historical samples, m the number of sample characteristic values, and l the number of samples to be predicted; x_ej represents the j-th characteristic value of the e-th historical sample, and y_kj the j-th characteristic value of the k-th sample to be predicted;
x̄_j = (1/n) Σ_{e=1}^{n} x_ej,  ȳ_j = (1/l) Σ_{k=1}^{l} y_kj
respectively averaging jth eigenvalue of all samples in the historical sample characteristic set and the sample characteristic set to be predicted;
a characteristic weight matrix construction unit: constructing the weight matrix W from the weight of each characteristic value;
W = [W_1, W_2, ..., W_i, ..., W_n]^T, wherein W_i = [w_1, w_2, ..., w_j, ..., w_m]; w_j is the weight of the j-th characteristic value, initialized to a random value with 0 ≤ w_j ≤ 1;
an association decision matrix calculation unit: performing dot product calculation on the characteristic weight matrix W and the association judgment matrix Z to obtain an association decision matrix U;
a weighted vote value calculation unit of the decision tree: randomly selecting d characteristics from a historical sample characteristic set as a training sample set to obtain a weighted voting value a of a decision tree,
[equation image: definition of the weighted voting value a in terms of λ and r_j]
wherein λ is a parameter adjustment factor, initialized to a random value with 0 ≤ λ ≤ 1; r_j represents the correlation between the j-th feature column of the training sample set and the j-th feature column of the sample set to be predicted; x_fj represents the j-th characteristic value of the f-th training sample, and y_kj the j-th characteristic value of the k-th sample to be predicted, with f = 1, 2, ..., t; k = 1, 2, ..., l; j = 1, 2, ..., m;
x̄_j = (1/t) Σ_{f=1}^{t} x_fj,  ȳ_j = (1/l) Σ_{k=1}^{l} y_kj
respectively averaging jth characteristic values of all samples in the training sample set and the sample set to be predicted; t represents the number of training samples;
a similar historical sample feature set selection unit: selecting a similar historical sample feature set from the historical sample feature vector set by using U;
adaptively adjusting the threshold according to the set number of similar historical samples: a threshold η_q is set for each column of characteristic values, q = 1, 2, ..., m; if an element of the U·{S_i} matrix is greater than or equal to the set threshold η_q, i.e. z_ij·w_j·s_ij ≥ η_q, i = 2, ..., n; j = 1, 2, ..., m, then within the first g rows (g ≥ t) and the first v columns (v ≤ m) of the U·{S_i} matrix, t × m elements satisfying z_ij·w_j·s_ij ≥ η_q are cumulatively selected to form the similar historical sample feature set S_u
A random forest training unit: training a decision tree in the random forest by using the selected similar historical sample feature set and the corresponding fault category to obtain a trained random forest;
a prediction model acquisition unit: weighting the fault prediction result of each decision tree in the random forest by the weighted voting value a of the decision tree and the association decision matrix U, and adjusting a with a target prediction accuracy of 100% to obtain the final prediction model;
a result prediction unit: inputting the characteristic vector of the sample to be predicted into a final prediction model to obtain a final fault prediction result;
substituting the initial values of a and I(·), with a target prediction accuracy of 100%, into f_RF(X), calculating U, and adaptively updating W based on the Z value obtained by the association judgment matrix calculation unit;
[equation image: f_RF(X), the weighted majority vote of the decision trees, of the form argmax over i of the a- and U-weighted sum of I(f_l^tree(X) = i)]
wherein f_RF(X) represents the final prediction model, I(·) counts the expressions in parentheses that are satisfied, f_l^tree(X) = i indicates that the fault prediction result of the l-th decision tree in the trained random forest is i, and c represents the number of fault prediction result categories of the whole random forest,
[equation image: prediction accuracy = number of accurately predicted samples / number of prediction samples × 100%]
The number of times f_l^tree(X) = i holds, i.e. the number of accurate predictions, is taken as the number of samples whose final fault prediction is accurate, and the number of prediction sample feature vectors is taken as the number of prediction samples.
5. A computer storage medium comprising a computer program, wherein the computer program is executed by a processor to implement the method for predicting the fault of the low voltage intelligent monitoring terminal based on the improved random forest algorithm according to any one of claims 1 to 3.
6. A low-voltage intelligent monitoring terminal fault prediction device based on an improved random forest algorithm is characterized by comprising a processor and a memory;
the memory is used for storing computer programs, and the processor is used for executing the computer programs stored by the memory to enable the low-voltage intelligent monitoring terminal fault prediction device based on the improved random forest algorithm to execute the low-voltage intelligent monitoring terminal fault prediction method based on the improved random forest algorithm according to any one of claims 1 to 3.
CN202010872318.4A 2020-08-26 2020-08-26 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment Active CN111985571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010872318.4A CN111985571B (en) 2020-08-26 2020-08-26 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010872318.4A CN111985571B (en) 2020-08-26 2020-08-26 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment

Publications (2)

Publication Number Publication Date
CN111985571A CN111985571A (en) 2020-11-24
CN111985571B true CN111985571B (en) 2022-09-09

Family

ID=73440950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010872318.4A Active CN111985571B (en) 2020-08-26 2020-08-26 Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN111985571B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985524A (en) * 2020-07-01 2020-11-24 佳源科技有限公司 Improved low-voltage transformer area line loss calculation method
CN112733903B (en) * 2020-12-30 2023-11-17 许昌学院 SVM-RF-DT combination-based air quality monitoring and alarming method, system, device and medium
CN113361607B (en) * 2021-06-08 2023-01-20 云南电网有限责任公司电力科学研究院 Medium-voltage distribution network line problem analysis method and device
CN114912372B (en) * 2022-06-17 2024-01-26 山东黄金矿业科技有限公司充填工程实验室分公司 High-precision filling pipeline fault early warning method based on artificial intelligence algorithm
CN115184674A (en) * 2022-07-01 2022-10-14 苏州清研精准汽车科技有限公司 Insulation test method and device, electronic terminal and storage medium
CN114912721B (en) * 2022-07-18 2022-12-13 国网江西省电力有限公司经济技术研究院 Method and system for predicting energy storage peak shaving demand
CN116910668B (en) * 2023-09-11 2024-04-02 国网浙江省电力有限公司余姚市供电公司 Lightning arrester fault early warning method, device, equipment and storage medium
CN117408574A (en) * 2023-12-13 2024-01-16 南通至正电子有限公司 Chip production monitoring management method, device, equipment and readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110210381A (en) * 2019-05-30 2019-09-06 盐城工学院 A kind of adaptive one-dimensional convolutional neural networks intelligent failure diagnosis method of domain separation
CN111046931A (en) * 2019-12-02 2020-04-21 北京交通大学 Turnout fault diagnosis method based on random forest

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN100499398C (en) * 2005-03-02 2009-06-10 中兴通讯股份有限公司 Method and apparatus for realizing intelligent antenna of broadband CDMA system
US8019015B2 (en) * 2007-02-26 2011-09-13 Harris Corporation Linearization of RF power amplifiers using an adaptive subband predistorter

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN110210381A (en) * 2019-05-30 2019-09-06 盐城工学院 A kind of adaptive one-dimensional convolutional neural networks intelligent failure diagnosis method of domain separation
CN111046931A (en) * 2019-12-02 2020-04-21 北京交通大学 Turnout fault diagnosis method based on random forest

Non-Patent Citations (1)

Title
Identification of sunflower leaf diseases based on random forest algorithm; Jun Liu et al.; 2019 International Conference on Intelligent Computing, Automation and Systems (ICICAS); 2020-04-02; pp. 459-463 *

Also Published As

Publication number Publication date
CN111985571A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111985571B (en) Low-voltage intelligent monitoring terminal fault prediction method, device, medium and equipment
CN107516170B (en) Difference self-healing control method based on equipment failure probability and power grid operation risk
CN106600059B (en) Intelligent power grid short-term load prediction method based on improved RBF neural network
CN111339712B (en) Proton exchange membrane fuel cell residual life prediction method
CN109871860B (en) Daily load curve dimension reduction clustering method based on kernel principal component analysis
CN110009030B (en) Sewage treatment fault diagnosis method based on stacking meta-learning strategy
CN114925612A (en) Transformer fault diagnosis method for optimizing hybrid kernel extreme learning machine based on sparrow search algorithm
CN115907195A (en) Photovoltaic power generation power prediction method, system, electronic device and medium
CN112766603A (en) Traffic flow prediction method, system, computer device and storage medium
CN114118596A (en) Photovoltaic power generation capacity prediction method and device
Suresh et al. A sequential learning algorithm for meta-cognitive neuro-fuzzy inference system for classification problems
CN117310533A (en) Service life acceleration test method and system for proton exchange membrane fuel cell
CN115713144A (en) Short-term wind speed multi-step prediction method based on combined CGRU model
CN111275074A (en) Power CPS information attack identification method based on stack type self-coding network model
Li. The hybrid credit scoring strategies based on KNN classifier
CN117689082A (en) Short-term wind power probability prediction method, system and storage medium
Saidi et al. Power outage prediction by using logistic regression and decision tree
CN115795035A (en) Science and technology service resource classification method and system based on evolutionary neural network and computer readable storage medium thereof
CN112990255B (en) Device failure prediction method, device, electronic device and storage medium
CN111985524A (en) Improved low-voltage transformer area line loss calculation method
CN116108975A (en) Method for establishing short-term load prediction model of power distribution network based on BR-SOM clustering algorithm
CN113807019A (en) MCMC wind power simulation method based on improved scene classification and coarse grain removal
Remeikis et al. Text categorization using neural networks initialized with decision trees
CN115587644B (en) Photovoltaic power station performance parameter prediction method, device, equipment and medium
CN117998364B (en) XGBoost WSN intrusion detection system based on mixed feature selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant