CN111259953A - Equipment defect time prediction method based on capacitive equipment defect data - Google Patents
Equipment defect time prediction method based on capacitive equipment defect data
- Publication number
- CN111259953A (application CN202010039425.9A; granted publication CN111259953B)
- Authority
- CN
- China
- Prior art keywords
- data
- defect
- capacitive
- equipment
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
In the equipment defect time prediction method based on capacitive equipment defect data, capacitive equipment defect data are first acquired, and the abnormal, redundant, and missing data are handled through a series of feature engineering methods. A model of the capacitive equipment defect occurrence time is then established; the model extracts effective features from the large data set and uses them to accurately predict the defect occurrence time of the capacitive equipment. The method is simple to implement, fast to compute, accurate, robust, and systematic in its prediction process, and it solves the prior-art problem that equipment defect time cannot be accurately predicted when research relies only on experimental data and discards the years of capacitive equipment operation and maintenance data collected by power grid companies.
Description
Technical Field
The application relates to the technical field of electrical equipment and information, in particular to an equipment defect time prediction method based on capacitance type equipment defect data.
Background
A capacitive device is a device that employs a capacitively shielded insulation structure. Such devices mainly include current transformers, voltage transformers, capacitive bushings, coupling capacitors, and the like; they account for about 40 to 50 percent of all power transmission and transformation equipment and are the most numerous equipment in a substation. The healthy operation of capacitive devices and the safety of electrical equipment are of vital importance to substations, and any unexpected failure may lead to major accidents and very large economic losses. Therefore, online detection and prediction for capacitive devices is of great research significance.
At present, online monitoring research on capacitive equipment focuses mainly on developing digital measurement transmitters and online monitoring systems. Studying environmental influences on capacitive equipment requires tests in a climate chamber, which is relatively complex, and such research remains limited both at home and abroad.
In some studies, a large artificial climate chamber was used to test the influence of environmental factors on capacitive equipment, yielding relatively comprehensive and accurate experimental data; a correction model of the main influencing factors based on a support vector machine (SVM) was proposed, with model parameters optimized by a genetic algorithm. However, this method requires a dedicated laboratory and is not practical for all equipment types, since the parameter performance of equipment differs across manufacturers. Moreover, the real working environment of capacitive equipment is more complex than the experimental environment, so considering only experimental data while discarding the operation and maintenance data collected by grid companies over the years may lead to one-sided conclusions.
Disclosure of Invention
The application provides a method for predicting equipment defect time based on capacitive equipment defect data, which aims to solve the prior-art problem that a one-sided conclusion may be drawn and equipment defect time cannot be accurately predicted when research considers only experimental data and discards the capacitive equipment operation and maintenance data collected by power grid companies over the years.
The technical scheme adopted by the application for solving the technical problems is as follows:
a method for predicting device defect time based on capacitive device defect data, the method comprising:
performing data cleaning on the capacitive equipment defect data set;
performing feature transformation and encoding on the cleaned data to obtain feature data;
performing dimension reduction and denoising on the feature data by re-encoding it with an autoencoder, to obtain re-encoded feature data;
training a plurality of machine learning models with the re-encoded feature data, evaluating model quality by training time and root mean square error, and selecting an optimal model;
and saving the optimal model and using it to predict the defect time of the capacitive equipment.
Optionally, the data cleaning includes filling missing values, the filling being performed with a K-nearest-neighbor algorithm and a random forest algorithm.
Optionally, the feature transformation and encoding includes feature decomposition and feature crossing.
Optionally, the autoencoder re-encoding method includes a sparse autoencoder, a denoising autoencoder, or a variational autoencoder;
the sparse autoencoder uses an encoder of 4 fully connected layers and a decoder of 4 fully connected layers.
Optionally, training a plurality of machine learning models with the re-encoded feature data includes:
taking the labeled, cleaned feature data as labeled samples, and normalizing the labeled samples as a preprocessing step;
training five machine learning models with the labeled samples: K-nearest-neighbor regression, support vector regression, random forest, gradient boosting tree, and a deep learning model;
computing the mean-square loss of each machine learning model; if the loss satisfies the condition and all grid-searched hyperparameters have been trained, building the model with the best-performing hyperparameters; otherwise, continuing to train the five machine learning models on the labeled samples;
and evaluating the losses of the five machine learning models and selecting the model with the smallest loss.
Optionally, among the five machine learning models, the K value of the K-nearest-neighbor regression is 3, with Manhattan distance as the distance metric; the support vector regression uses a Gaussian kernel; the random forest uses 100 decision trees with a maximum tree depth of 3; the gradient boosting tree uses 100 decision trees with a maximum tree depth of 3 and a learning rate of 0.1; the deep learning model comprises four groups of convolution and batch normalization layers followed by two fully connected layers, with ReLU as the activation function, MSE as the loss function, and Adam as the optimizer.
Optionally, the machine learning models are trained with a five-fold cross-validated grid hyperparameter search, ensuring that the optimal hyperparameters are found and that a preset mean-square-error threshold is reached; if the mean square error exceeds the threshold, training continues.
The technical scheme provided by the application has the following beneficial technical effects:
In the equipment defect time prediction method based on capacitive equipment defect data, capacitive equipment defect data are first acquired, and the abnormal, redundant, and missing data are handled through a series of feature engineering methods. A model of the capacitive equipment defect occurrence time is then established; the model extracts effective features from the large data set and uses them to accurately predict the defect occurrence time of the capacitive equipment. The method is simple to implement, fast to compute, accurate, robust, and systematic in its prediction process, and it solves the prior-art problem that equipment defect time cannot be accurately predicted when research relies only on experimental data and discards the years of capacitive equipment operation and maintenance data collected by power grid companies.
Drawings
In order to more clearly explain the technical solution of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious to those skilled in the art that other drawings can be obtained according to the drawings without any creative effort.
Fig. 1 is a flowchart of a method for predicting device defect time based on capacitive device defect data according to an embodiment of the present application;
fig. 2 is a schematic diagram of the sparse autoencoder structure according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions in the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application; it is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a device defect time prediction method based on capacitive device defect data according to an embodiment of the present application, and as shown in fig. 1, the device defect time prediction method based on capacitive device defect data according to the present application includes the following steps:
S1: perform data cleaning on the capacitive equipment defect data set;
S2: perform feature transformation and encoding on the cleaned data to obtain feature data;
S3: perform dimension reduction and denoising on the feature data by re-encoding it with an autoencoder, to obtain re-encoded feature data;
S4: train a plurality of machine learning models with the re-encoded feature data, evaluate model quality by training time and root mean square error, and select an optimal model;
S5: save the optimal model and use it to predict the defect time of the capacitive equipment.
In the device defect time prediction method based on capacitive device defect data, capacitive device defect data are first acquired, and the abnormal, redundant, and missing data are handled through a series of feature engineering methods. A model of the capacitive device defect occurrence time is then established; the model extracts effective features from the large data set and uses them to accurately predict the defect occurrence time of the capacitive device. The method is simple to implement, fast to compute, accurate, robust, and systematic in its prediction process.
Optionally, the data cleaning includes filling missing values using a K-nearest-neighbor algorithm and a random forest algorithm.
Using the K-nearest-neighbor and random forest algorithms makes the missing-value filling more robust and incorporates the information of other features, so the filled data are closer to the real data, which benefits the subsequent modeling step.
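As a minimal sketch of the imputation idea (not the patent's own code), scikit-learn's `KNNImputer` fills each missing entry from the most similar complete records; the toy defect-record matrix and its columns below are hypothetical.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy defect-record matrix with missing entries (NaN). Columns could stand
# for e.g. dielectric loss, capacitance ratio, service years (hypothetical).
X = np.array([
    [1.0, 2.0, np.nan],
    [1.1, np.nan, 3.0],
    [0.9, 2.1, 2.9],
    [1.2, 1.9, 3.1],
])

# K-nearest-neighbour imputation: each missing value is replaced by the
# mean of that feature over the k most similar rows (by observed features).
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
print(X_filled)
```

A random-forest imputer would follow the same fit/transform pattern, predicting each incomplete feature from the others; combining both, as the text suggests, trades a little compute for more robust fills.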
Optionally, the feature transformation and encoding includes feature decomposition and feature crossing.
Optionally, the autoencoder re-encoding method includes a sparse autoencoder, a denoising autoencoder, or a variational autoencoder; the sparse autoencoder uses an encoder of 4 fully connected layers and a decoder of 4 fully connected layers.
With this scheme, in the dimension reduction of the feature data, the autoencoder re-encoding model has a larger function space than principal component analysis, so it loses less information while reducing dimensionality; in denoising, the autoencoding method preserves the sparsity of the data and removes most of the noise. This yields a concrete construction process for the autoencoder re-encoding model, whose specific input and output parameters are set according to the particular application scenario.
Optionally, training a plurality of machine learning models with the re-encoded feature data includes:
taking the labeled, cleaned feature data as labeled samples, and normalizing the labeled samples as a preprocessing step;
training five machine learning models with the labeled samples: K-nearest-neighbor regression, support vector regression, random forest, gradient boosting tree, and a deep learning model;
computing the mean-square loss of each machine learning model; if the loss satisfies the condition and all grid-searched hyperparameters have been trained, building the model with the best-performing hyperparameters; otherwise, continuing to train the five machine learning models on the labeled samples;
and evaluating the losses of the five machine learning models and selecting the model with the smallest loss, thereby obtaining the best machine learning model.
Optionally, among the five machine learning models, the K value of the K-nearest-neighbor regression is 3, with Manhattan distance as the distance metric; the support vector regression uses a Gaussian kernel; the random forest uses 100 decision trees with a maximum tree depth of 3; the gradient boosting tree uses 100 decision trees with a maximum tree depth of 3 and a learning rate of 0.1; the deep learning model comprises four groups of convolution and batch normalization layers followed by two fully connected layers, with ReLU as the activation function, MSE as the loss function, and Adam as the optimizer.
With this scheme, a concrete construction process for the machine learning models is obtained, and specific input and output parameters are set for the particular application scenario.
Optionally, the machine learning models are trained with a five-fold cross-validated grid hyperparameter search, ensuring that the optimal hyperparameters are found and that a preset mean-square-error threshold is reached; if the mean square error exceeds the threshold, training continues.
By adopting the technical scheme, the determination standard of the machine learning model is obtained.
Specifically, an embodiment of the present application further provides a specific implementation, as follows:
Step 1: perform data cleaning on the capacitive equipment defect data set. Remove features with more than 70% of their values missing, and fill the missing values (for features missing more than 30%) using the K-nearest-neighbor algorithm and the random forest algorithm. Draw a box plot of each feature to identify and remove outliers, and delete all redundant and null records.
Step 2: perform feature transformation and encoding on the cleaned data. All character-type features are decomposed, as shown in Table 1 below. The decomposed character-type features are then label-encoded, i.e., each feature value is mapped to a number. Continuous numerical features, such as longitude and latitude, are binned, with a different code assigned for every 10°. Finally, all encoded features are crossed: features are multiplied pairwise to form new features.
TABLE 1 character-type characteristic decomposition look-up table
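The label-encoding, 10° binning, and feature-crossing operations of step 2 can be sketched as follows (a minimal pandas illustration; the column names and values are hypothetical, not taken from Table 1):

```python
import pandas as pd

df = pd.DataFrame({
    "manufacturer": ["A", "B", "A", "C"],       # hypothetical character-type feature
    "longitude":    [102.7, 103.8, 116.4, 121.5],  # hypothetical continuous feature
})

# Label encoding: map each distinct string value to an integer code.
df["manufacturer_code"] = df["manufacturer"].astype("category").cat.codes

# Binning: one code per 10-degree band, as described in the text.
df["longitude_bin"] = (df["longitude"] // 10).astype(int)

# Feature crossing: multiply encoded features to form a new interaction feature.
df["cross"] = df["manufacturer_code"] * df["longitude_bin"]
print(df)
```

The same pattern extends to every decomposed character-type feature and every binned numerical feature before the pairwise crossing step.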
Step 3: re-encode the feature data obtained in step 2 with a sparse autoencoder. According to the quality of the feature data, sparse autoencoding is selected to reduce its dimensionality and denoise it. The sparse autoencoder uses an encoder of 4 fully connected layers and a decoder of 4 fully connected layers. The encoder input is the features obtained in step 2, and the output of the first layer is the 32 dimension-reduced features. The shapes of the encoder's second, third, and fourth layers are (64, 32), (32, 32), and (32, 16), respectively. The shapes of the decoder's first, second, and third layers are (16, 32), (32, 32), and (32, 64), respectively. The fourth decoder layer takes 64 features as input and outputs the same number of features as obtained in step 2. The activation function is Tanh. The autoencoder adds an L1 regularization term, and the loss function is the mean square error. The sparse autoencoder is trained for 200 rounds with five-fold cross validation. The feature data from step 2 are then fed into the trained autoencoder, and the encoder output is taken as the new re-encoded features. The sparse autoencoder structure is shown in fig. 2.
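A rough sketch of a fully connected autoencoder with a 64-32-32-16 bottleneck can be written with scikit-learn's `MLPRegressor` trained to reconstruct its input. Two stated assumptions: the first encoder layer is taken to widen the input to 64 units (to match the (64, 32) second layer), and L2 regularization (`alpha`) is substituted for the patent's L1 sparsity term, since `MLPRegressor` offers only L2.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))  # stand-in for the step-2 feature matrix

# Autoencoder as an MLP trained to reconstruct its own input. Hidden widths
# mirror the embodiment (64-32-32-16 encoder, then 32-32-64 decoder); the
# 16-unit layer is the bottleneck code.
ae = MLPRegressor(hidden_layer_sizes=(64, 32, 32, 16, 32, 32, 64),
                  activation="tanh", alpha=1e-4, max_iter=200,
                  random_state=0)
ae.fit(X, X)

def encode(model, X):
    """Forward pass through the first four layers to obtain the 16-dim code."""
    h = X
    for W, b in zip(model.coefs_[:4], model.intercepts_[:4]):
        h = np.tanh(h @ W + b)
    return h

Z = encode(ae, X)   # re-encoded features used downstream in step 4
print(Z.shape)
```

A framework with explicit layer objects (e.g. PyTorch) would make the L1 penalty on the code straightforward; this sketch only shows the shape of the encode/decode pipeline.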
Step 4: build models with the feature data obtained in step 3. Predict the defect occurrence time of the capacitive equipment with K-nearest-neighbor regression, support vector regression, random forest, gradient boosting tree, and deep learning methods, respectively. The feature data with defects serve as the labeled training set, the feature data without defects serve as the test set, and the feature data are normalized. The five models are trained on the training set with five-fold cross validation and grid search, and the optimal hyperparameters are selected. The resulting models and their mean square errors are shown in Table 2 below. The K value of the K-nearest-neighbor regression is 3, with Manhattan distance as the distance metric; the support vector regression uses a Gaussian kernel; the random forest uses 100 decision trees with a maximum tree depth of 3; the gradient boosting tree uses 100 decision trees with a maximum tree depth of 3 and a learning rate of 0.1; the deep learning model comprises four groups of convolution and batch normalization layers followed by two fully connected layers, with ReLU as the activation function, MSE as the loss function, and Adam as the optimizer. Evaluating the five models yields the best-performing model: the gradient boosting tree model.
TABLE 2 five models and their mean square error comparison table
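A condensed sketch of the step-4 selection loop: five-fold cross-validated grid search over the four classical models, keeping the one with the lowest mean square error. The CNN variant is omitted for brevity, the data are synthetic, and the small grids merely include the hyperparameters stated in the text.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))                       # stand-in for 16 re-encoded features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=120)  # synthetic defect-time target

# Candidate models with grids containing the values given in the text
# (K=3 with Manhattan distance p=1, RBF kernel, 100 trees of depth 3, lr 0.1).
candidates = {
    "knn": (KNeighborsRegressor(), {"n_neighbors": [3, 5], "p": [1]}),
    "svr": (SVR(kernel="rbf"), {"C": [1.0, 10.0]}),
    "rf":  (RandomForestRegressor(random_state=0),
            {"n_estimators": [100], "max_depth": [3]}),
    "gbt": (GradientBoostingRegressor(random_state=0),
            {"n_estimators": [100], "max_depth": [3], "learning_rate": [0.1]}),
}

best_name, best_model, best_mse = None, None, float("inf")
for name, (est, grid) in candidates.items():
    # Five-fold cross-validated grid search scored by (negated) MSE.
    search = GridSearchCV(est, grid, cv=5, scoring="neg_mean_squared_error")
    search.fit(X, y)
    mse = -search.best_score_
    if mse < best_mse:
        best_name, best_model, best_mse = name, search.best_estimator_, mse
print(best_name, round(best_mse, 4))
```

On the patent's real data the gradient boosting tree wins (Table 2); on this synthetic toy set the winner may differ, which is exactly why the loop compares all candidates by cross-validated loss.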
Step 5: save the model obtained in step 4 and predict on the test set. The parameters of the gradient boosting decision tree model are saved; the feature vectors of the test set from step 4 are extracted and fed into the model for prediction. The predicted defect times of the capacitive equipment are finally obtained and written back to the data table, completing the operation.
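Step 5's save-and-predict stage might look like the following sketch, using joblib for model persistence; the file path, data, and shapes are illustrative assumptions, not from the patent.

```python
import os
import tempfile
import joblib
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(100, 16)), rng.normal(size=100)
X_test = rng.normal(size=(5, 16))   # defect-free records awaiting prediction

# Gradient boosting tree with the hyperparameters stated in the text.
model = GradientBoostingRegressor(
    n_estimators=100, max_depth=3, learning_rate=0.1, random_state=0
).fit(X_train, y_train)

# Persist the trained model, reload it, and predict defect times for the test set.
path = os.path.join(tempfile.mkdtemp(), "gbt_defect_time.joblib")
joblib.dump(model, path)
restored = joblib.load(path)
pred = restored.predict(X_test)
print(pred.shape)
```

The predictions `pred` would then be written back into the data table, one defect-time estimate per test record.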
In the method for predicting the defect time of capacitive equipment based on capacitive equipment defect data, the capacitive equipment defect data are first acquired, and the abnormal, redundant, and missing data are handled through a series of feature engineering methods. A model of the capacitive equipment defect occurrence time is then established; the model extracts effective features from the large data set and uses them to accurately predict the defect occurrence time of the capacitive equipment. The method is simple to implement, fast to compute, accurate, robust, and systematic in its prediction process, and it solves the prior-art problem that equipment defect time cannot be accurately predicted when research relies only on experimental data and discards the years of capacitive equipment operation and maintenance data collected by power grid companies.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be understood that the present application is not limited to what has been described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (7)
1. A device defect time prediction method based on capacitive device defect data, characterized by comprising the following steps:
performing data cleaning on the capacitive equipment defect data set;
performing feature transformation and encoding on the cleaned data to obtain feature data;
performing dimension reduction and denoising on the feature data by re-encoding it with an autoencoder, to obtain re-encoded feature data;
training a plurality of machine learning models with the re-encoded feature data, evaluating model quality by training time and root mean square error, and selecting an optimal model;
and saving the optimal model and using it to predict the defect time of the capacitive equipment.
2. The method of claim 1, wherein the data cleaning comprises filling missing values, the filling being performed with a K-nearest-neighbor algorithm and a random forest algorithm.
3. The method of claim 1, wherein the feature transformation and encoding comprises feature decomposition and feature crossing.
4. The method of claim 1, wherein the autoencoder re-encoding comprises a sparse autoencoder, a denoising autoencoder, or a variational autoencoder;
the sparse autoencoder using an encoder of 4 fully connected layers and a decoder of 4 fully connected layers.
5. The method of claim 1, wherein training a plurality of machine learning models with the re-encoded feature data comprises:
taking the labeled, cleaned feature data as labeled samples, and normalizing the labeled samples as a preprocessing step;
training five machine learning models with the labeled samples: K-nearest-neighbor regression, support vector regression, random forest, gradient boosting tree, and a deep learning model;
computing the mean-square loss of each machine learning model; if the loss satisfies the condition and all grid-searched hyperparameters have been trained, building the model with the best-performing hyperparameters; otherwise, continuing to train the five machine learning models on the labeled samples;
and evaluating the losses of the five machine learning models and selecting the model with the smallest loss.
6. The method of claim 5, wherein, among the five machine learning models, the K value of the K-nearest-neighbor regression is 3, with Manhattan distance as the distance metric; the support vector regression uses a Gaussian kernel; the random forest uses 100 decision trees with a maximum tree depth of 3; the gradient boosting tree uses 100 decision trees with a maximum tree depth of 3 and a learning rate of 0.1; the deep learning model comprises four groups of convolution and batch normalization layers followed by two fully connected layers, with ReLU as the activation function, MSE as the loss function, and Adam as the optimizer.
7. The method for predicting the defect time of capacitive equipment based on capacitive equipment defect data according to claim 1, wherein the machine learning models are trained with a five-fold cross-validated grid hyperparameter search, ensuring that the optimal hyperparameters are found and that a preset mean-square-error threshold is reached, the training continuing if the mean square error exceeds the threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010039425.9A CN111259953B (en) | 2020-01-15 | 2020-01-15 | Equipment defect time prediction method based on capacitive equipment defect data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010039425.9A CN111259953B (en) | 2020-01-15 | 2020-01-15 | Equipment defect time prediction method based on capacitive equipment defect data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111259953A true CN111259953A (en) | 2020-06-09 |
CN111259953B CN111259953B (en) | 2023-10-20 |
Family
ID=70948786
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010039425.9A Active CN111259953B (en) | 2020-01-15 | 2020-01-15 | Equipment defect time prediction method based on capacitive equipment defect data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259953B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112926627A (en) * | 2021-01-28 | 2021-06-08 | 电子科技大学 | Equipment defect time prediction method based on capacitive equipment defect data |
CN113378908A (en) * | 2021-06-04 | 2021-09-10 | 浙江大学 | Heating ventilation air conditioning system fault diagnosis method based on LightGBM algorithm and grid search algorithm |
CN114004515A (en) * | 2021-11-04 | 2022-02-01 | 云南电网有限责任公司电力科学研究院 | Singular value decomposition-based transformer substation equipment health monitoring sensor arrangement method |
WO2022117218A1 (en) * | 2020-12-02 | 2022-06-09 | Hitachi Energy Switzerland Ag | Prognosis of high voltage equipment |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090157571A1 (en) * | 2007-12-12 | 2009-06-18 | International Business Machines Corporation | Method and apparatus for model-shared subspace boosting for multi-label classification |
WO2014205231A1 (en) * | 2013-06-19 | 2014-12-24 | The Regents Of The University Of Michigan | Deep learning framework for generic object detection |
US20150112903A1 (en) * | 2013-02-28 | 2015-04-23 | Huawei Technologies Co., Ltd. | Defect prediction method and apparatus |
CN108255656A (en) * | 2018-02-28 | 2018-07-06 | 湖州师范学院 | Fault detection method for batch processes |
CN108459955A (en) * | 2017-09-29 | 2018-08-28 | 重庆大学 | Software defect prediction method based on deep autoencoder networks |
CN109165664A (en) * | 2018-07-04 | 2019-01-08 | 华南理工大学 | Completion and prediction method for datasets with missing attributes based on generative adversarial networks |
CN109271374A (en) * | 2018-10-19 | 2019-01-25 | 国网江苏省电力有限公司信息通信分公司 | Database health scoring method and system based on machine learning |
CN110174499A (en) * | 2019-07-10 | 2019-08-27 | 云南电网有限责任公司电力科学研究院 | Method and device for predicting gas leakage defects in sulfur hexafluoride electrical equipment |
CN110348633A (en) * | 2019-07-11 | 2019-10-18 | 电子科技大学 | Defect occurrence prediction method based on a linear classification model |
CN110399685A (en) * | 2019-07-29 | 2019-11-01 | 云南电网有限责任公司电力科学研究院 | Method and device for predicting defect grade of capacitive equipment |
CN110515931A (en) * | 2019-07-02 | 2019-11-29 | 电子科技大学 | Capacitive equipment failure prediction method based on the random forest algorithm |
CN110659937A (en) * | 2019-09-20 | 2020-01-07 | 鞍钢集团矿业有限公司 | Improved supplier quantitative scoring prediction algorithm based on gradient-boosted trees |
- 2020-01-15: CN application CN202010039425.9A filed; granted as patent CN111259953B (status: Active)
Non-Patent Citations (3)
Title |
---|
Swami et al. (USA): Wireless Sensor Networks: Signal Processing and Communications, 31 December 2015, pages 183-184 * |
Chen Yongjun et al.: "Equipment defect prediction based on a decision support system platform", Zhejiang Electric Power, no. 6, pages 1-4 * |
Gaodun Finance Research Institute: CFA Level II Chinese-Language Textbook, 31 March 2019, page 116 * |
Also Published As
Publication number | Publication date |
---|---|
CN111259953B (en) | 2023-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111259953B (en) | Equipment defect time prediction method based on capacitive equipment defect data | |
CN109978079A (en) | Data cleaning method based on an improved stacked denoising autoencoder | |
CN111325403B (en) | Method for predicting residual life of electromechanical equipment of highway tunnel | |
CN112733417B (en) | Abnormal load data detection and correction method and system based on model optimization | |
CN112926627A (en) | Equipment defect time prediction method based on capacitive equipment defect data | |
CN115563563A (en) | Fault diagnosis method and device based on transformer oil chromatographic analysis | |
CN112289391B (en) | Anode aluminum foil performance prediction system based on machine learning | |
CN114397526A (en) | Power transformer fault prediction method and system driven by state holographic sensing data | |
CN114169434A (en) | Load prediction method | |
CN117056865B (en) | Method and device for diagnosing operation faults of machine pump equipment based on feature fusion | |
CN114021758A (en) | Operation and maintenance personnel intelligent recommendation method and device based on fusion of gradient lifting decision tree and logistic regression | |
CN112287605B (en) | Power flow checking method based on graph convolution network acceleration | |
CN116842337A (en) | Transformer fault diagnosis method based on LightGBM-selected optimal features and a COA-CNN model | |
CN116379360A (en) | Knowledge migration-based hydrogen-doped natural gas pipeline damage prediction method and system | |
CN113922823B (en) | Social media information propagation graph data compression method based on constraint sparse representation | |
CN115935814A (en) | Transformer fault prediction method based on ARIMA-SVM model | |
CN115907158A (en) | Load prediction method, device and storage medium based on heuristic configuration | |
CN112949203B (en) | Board laser cutting quality judgment method based on electrical parameters and XGBOOST-NN algorithm | |
CN111273635B (en) | Unknown anomaly detection method for industrial control equipment | |
CN114239999A (en) | Element reliability parameter optimization analysis method based on cross entropy important sampling | |
Rosli et al. | Improving state estimation accuracy through incremental meter placement using new evolutionary strategy | |
CN112308338A (en) | Power data processing method and device | |
CN112699608B (en) | Time sequence repairing method suitable for data loss caused by sensor power failure | |
CN115714381B (en) | Power grid transient stability prediction method based on improved CNN model | |
CN111461461B (en) | Hydraulic engineering abnormality detection method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||