CN112861417A - Transformer fault diagnosis method based on weighted and selective naive Bayes

Transformer fault diagnosis method based on weighted and selective naive Bayes

Info

Publication number
CN112861417A
CN112861417A (application CN202011489636.9A)
Authority
CN
China
Prior art keywords
attribute
attributes
data
probability
correlation
Prior art date
Legal status
Pending
Application number
CN202011489636.9A
Other languages
Chinese (zh)
Inventor
魏清
惠光艳
Current Assignee
Jiangsu Zhongkun Data Technology Co ltd
Original Assignee
Jiangsu Zhongkun Data Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Zhongkun Data Technology Co ltd
Priority to CN202011489636.9A
Publication of CN112861417A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 - Bayesian classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00 - Details relating to CAD techniques
    • G06F2111/08 - Probabilistic or stochastic CAD

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

A transformer fault diagnosis method based on weighted and selective naive Bayes removes part of the redundant attributes with a χ²-statistic-based attribute selection method and builds an attribute-learning classifier with better classification results. 1) Collect historical fault data of the main transformer, comprising attribute data and fault types; discretize the conditional attribute data and divide the data into a training set and a test set. 2) Select the optimal reduced attribute subset RAS with the χ²-statistic-based attribute selection method. 3) Prior probability learning: compute from the training set the prior probabilities of all decision attribute values (fault classes) and the conditional probabilities of the attributes in the RAS, and store the results in the class prior table CP and the conditional probability table CPT, respectively. 4) Build an attribute weight table with the correlation probability method: compute the weights of all attributes in the RAS under the different classes and store them in the weight table AW. 5) Test the model performance with the test set and evaluate the accuracy of the model against the actual classes of the test data.

Description

Transformer fault diagnosis method based on weighted and selective naive Bayes
Technical Field
The invention belongs to the technical field of main transformer fault diagnosis in power transformation equipment, and particularly relates to a transformer fault diagnosis method based on weighted and selective naive Bayes.
Background
The latest tasks of the State Grid explicitly require that the safe and stable operation of the power grid be fully guaranteed. The power transformer undertakes the important tasks of voltage transformation and electric energy distribution and transfer in the power system, and its normal operation is an important guarantee of the safe, reliable, high-quality and economical operation of the power system. In actual operation, however, faults and accidents cannot be completely avoided, so early detection and treatment of transformer faults is of great significance. Many transformer fault diagnosis methods are available today, and methods such as neural networks, ensemble learning and support vector machines have been applied effectively. Among them, naive Bayes is recognized for its short diagnosis time and high efficiency, but the conditional independence assumption of the naive Bayes classification model loses accuracy in practical applications. Measures such as attribute selection, network extension and weighting have been used to improve naive Bayes performance, yet a single improvement measure still cannot simultaneously solve the problems of setting attribute weights and selecting relevant attributes in transformer fault diagnosis.
Naive Bayes is a simple but extremely powerful predictive modeling algorithm. It is called naive because it assumes that the input variables are mutually independent. Naive Bayes is one of the classic machine learning algorithms and one of the few classification algorithms based directly on probability theory. Its principle is simple and easy to implement, and it is mainly used for text classification tasks such as spam filtering. The independence assumption is strong, however, and contradicts many real applications. In statistical analysis, the chi-square value is obtained from frequency analysis and cross tabulation (the χ²-statistic attribute selection method); it is then judged whether the probability value sig corresponding to the chi-square value is less than 0.05, and if it is less than 0.05, a significant difference exists.
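As a concrete illustration of this χ² significance check, the sketch below runs a chi-square test of independence on a small, made-up contingency table of a discretized attribute versus fault classes and applies the 0.05 threshold mentioned above; none of the numbers come from the patent.

```python
# Minimal sketch: chi-square test of independence with the sig < 0.05 check.
from scipy.stats import chi2_contingency

# Rows: levels of a discretized attribute; columns: fault classes (made up).
observed = [
    [30, 5, 10],
    [8, 25, 12],
]

chi2, sig, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, sig = {sig:.4f}, dof = {dof}")

if sig < 0.05:
    print("Significant association between the attribute and the fault class.")
else:
    print("No significant association detected.")
```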
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a transformer fault diagnosis method based on weighted and selective naive Bayes, solving the problems that existing main-transformer fault diagnosis methods have long diagnosis times and low accuracy, and that a single naive Bayes improvement measure cannot solve both the setting of attribute weights and the selection of attributes.
In order to solve the above technical problems, the invention adopts the following technical scheme. A transformer fault diagnosis method based on weighted and selective naive Bayes classifies transformer faults with the naive Bayes method while addressing both the design of attribute weights and the selection of attributes: a χ²-statistic-based attribute selection method and a correlation probability method are combined to improve naive Bayes performance. The χ²-statistic-based attribute selection method removes part of the redundant attributes and builds an attribute-learning classifier with better classification results, while the correlation probability method for weight design distinguishes the contribution (difference) of different conditional attributes to the decision classification by assigning each attribute a corresponding weight, which also weakens the conditional independence assumption from another angle. The method comprises the following steps:
Step 1: collect historical fault data of the main transformer, comprising attribute data and fault types, where the fault type is the decision attribute; discretize the conditional attribute data and divide the data into a training set and a test set;
Step 2: select the optimal reduced attribute subset RAS using the χ²-statistic-based attribute selection method;
the relevance measurement method comprises the following steps: for two attributes A, B, the values are ai,(i=1,2,…,m),bj,(j=1,2,…,n), x2The attribute correlation calculation for the statistics is based on a frequency table of two attributes, a list of frequencies:
        b_1              b_2              …    b_n              SUM
a_1     f_11             f_12             …    f_1n             A_1 = Σ_j f_1j
a_2     f_21             f_22             …    f_2n             A_2 = Σ_j f_2j
…       …                …                …    …                …
a_m     f_m1             f_m2             …    f_mn             A_m = Σ_j f_mj
SUM     B_1 = Σ_i f_i1   B_2 = Σ_i f_i2   …    B_n = Σ_i f_in   f = Σ_i Σ_j f_ij
χ² = Σ_{i=1}^{m} Σ_{j=1}^{n} (f_ij − A_i B_j / f)² / (A_i B_j / f)
Here f_ij denotes the frequency with which a_i and b_j occur together, A_i denotes the frequency of occurrence of a_i, B_j denotes the frequency of occurrence of b_j, and f denotes the total sample size.
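As an illustration, the following sketch computes the χ² statistic of an m × n frequency table exactly as in the formula above; the table values and attribute names are hypothetical, not data from the patent.

```python
# Minimal sketch: Pearson chi-square statistic of an m x n contingency table,
# following chi2 = sum_ij (f_ij - A_i*B_j/f)^2 / (A_i*B_j/f).
import numpy as np

def chi2_statistic(freq: np.ndarray) -> float:
    f = freq.sum()                       # total sample size f
    A = freq.sum(axis=1, keepdims=True)  # row totals A_i
    B = freq.sum(axis=0, keepdims=True)  # column totals B_j
    expected = A @ B / f                 # expected frequency A_i * B_j / f
    return float(((freq - expected) ** 2 / expected).sum())

if __name__ == "__main__":
    # Hypothetical 2 x 3 table of a discretized gas attribute vs. fault class.
    table = np.array([[30.0, 5.0, 10.0],
                      [8.0, 25.0, 12.0]])
    print(f"chi2 = {chi2_statistic(table):.3f}")
```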
From the χ² statistic, a correlation measure ψ for the row and column attribute variables of the m × n contingency table is obtained:
(Equation image: the definition of the χ²-based correlation measure ψ.)
The larger the absolute value of ψ, the stronger the correlation between the two attributes; when its absolute value is close to 0 the correlation is weak. The same measure applies to the correlation between a conditional attribute and the decision attribute.
Step 2.1: calculate the attribute correlation ψ(A_i, C) between every conditional attribute and the decision attribute in the training set and store the absolute values in table AR;
Step 2.2: sort all conditional attributes in descending order of their values in table AR and store the ordering in table AS;
Step 2.3: select the first attribute A_j in AS and compute in turn its correlation with each remaining conditional attribute A_i; if both ψ(A_j, C) > ψ(A_i, C) and ψ(A_j, A_i) > ψ(A_i, C) hold, A_i is called a redundant attribute of A_j and the conditional attribute A_i is deleted;
Step 2.4: select the next attribute in AS and delete its redundant attributes as in step 2.3;
Step 2.5: repeat step 2.4 until all attributes in AS have been examined;
Step 2.6: the optimal reduced attribute subset RAS is finally obtained; a sketch of this selection procedure is given below.
Step 3: prior probability learning: compute from the training set the prior probabilities of all decision attribute values (fault classes) and the conditional probabilities of the attributes in the RAS, and store the results in the class prior table CP and the conditional probability table CPT, respectively;
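A minimal sketch of this learning step is given below, assuming the CP table maps each class to its prior and the CPT table maps (attribute, value, class) to a conditional probability; the add-one (Laplace) smoothing is an assumption, since the patent does not state the estimator.

```python
# Sketch of step 3: estimating the class priors (CP) and the conditional
# probabilities of the RAS attributes per class (CPT) from the training set.
from collections import Counter, defaultdict

def learn_cp_cpt(X: dict, y: list, ras: list):
    n = len(y)
    classes = sorted(set(y))
    cp = {c: count / n for c, count in Counter(y).items()}   # CP table
    cpt = defaultdict(dict)                                  # CPT table
    for attr in ras:
        values = sorted(set(X[attr]))
        for c in classes:
            in_class = [v for v, cls in zip(X[attr], y) if cls == c]
            counts = Counter(in_class)
            for v in values:
                # Add-one smoothing (assumed) to avoid zero probabilities.
                cpt[attr][(v, c)] = (counts[v] + 1) / (len(in_class) + len(values))
    return cp, dict(cpt)

if __name__ == "__main__":
    X = {"X1": ["low", "high", "high", "low"], "X5": ["high", "high", "low", "low"]}
    y = ["C5", "C5", "C8", "C8"]
    cp, cpt = learn_cp_cpt(X, y, ras=["X1", "X5"])
    print(cp)           # class priors, e.g. {'C5': 0.5, 'C8': 0.5}
    print(cpt["X1"])    # conditional probabilities of X1 values per class
```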
and 4, step 4: establishing an attribute weight value table by using a correlation probability method;
for a certain conditional attribute AjPossibly taking the value of
Figure BDA0002840365360000032
Wherein k is [1, m ]]Is represented by AjThere are m possible values, then for each class CiIn other words, attribute AjAll have a relation to CiIs correlated and uncorrelated probabilities p (A)j|norel)。
(Equation images: the definitions of p(A_j|rel) and p(A_j|norel).)
The weight of the attribute is:
(Equation image: the attribute weight computed from p(A_j|rel) and p(A_j|norel).)
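Because the rel/norel probabilities and the weight are given only as equation images, the sketch below is purely illustrative: it assumes p(A_j|rel) averages p(C_i | a_j^k) over the values of A_j observed in class C_i, takes p(A_j|norel) as its complement, and uses their ratio as the weight. The real definitions are those in the patent's equations.

```python
# Illustrative sketch of step 4: one weight per (attribute, class) pair stored
# in an AW table. The rel/norel definitions and the weight formula here are
# assumptions standing in for the patent's equation images.
from collections import Counter, defaultdict

def attribute_weights(X: dict, y: list, ras: list) -> dict:
    classes = sorted(set(y))
    aw = defaultdict(dict)
    for attr in ras:
        col = X[attr]
        value_counts = Counter(col)
        for c in classes:
            in_class = [v for v, cls in zip(col, y) if cls == c]
            p_rel = 0.0
            for v, cnt in Counter(in_class).items():
                p_value_given_class = cnt / len(in_class)      # p(a_j^k | C_i)
                p_class_given_value = cnt / value_counts[v]    # p(C_i | a_j^k)
                p_rel += p_value_given_class * p_class_given_value
            p_norel = 1.0 - p_rel
            aw[attr][c] = p_rel / max(p_norel, 1e-9)           # assumed weight form
    return dict(aw)
```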
and calculating all weights of the attributes in the RAS table under different categories, and storing the weights in the AW table.
Step 5: test the model performance with the test set. For the conditional attributes in the test set, call the prior probability table CP and the conditional probability table CPT, examine in turn the current values of the attributes in the optimal reduced subset RAS, call the corresponding weights from the weight table AW, compute for each test case the posterior probability of belonging to each class according to the following formula, find the maximum posterior probability and assign the class, and evaluate the accuracy of the model against the actual classes of the test data:
P(C_i | x) ∝ p(C_i) · Π_{A_j ∈ RAS} p(a_j^k | C_i)^{w_ij}
where p(C_i) is the prior probability of class C_i, p(a_j^k | C_i) is the conditional probability of the attribute value a_j^k given class C_i, and w_ij is the corresponding attribute weight.
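A minimal sketch of this classification rule follows, reusing the CP/CPT/AW table layouts assumed in the earlier sketches and working in log space for numerical stability; it illustrates the weighted, selective naive Bayes decision rule rather than the patent's exact implementation.

```python
# Sketch of step 5: assign each test case the class with the maximum
# weighted naive Bayes posterior over the RAS attributes.
import math

def predict(sample: dict, cp: dict, cpt: dict, aw: dict, ras: list) -> str:
    scores = {}
    for c, prior in cp.items():
        log_post = math.log(prior)
        for attr in ras:
            p = cpt[attr].get((sample[attr], c), 1e-9)  # unseen value -> tiny prob
            log_post += aw[attr][c] * math.log(p)       # weight applied as exponent
        scores[c] = log_post
    return max(scores, key=scores.get)
```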
Beneficial effects: compared with the prior art, the apparent advantages and resulting effects are as follows. The invention combines a χ²-statistic-based attribute selection method with a correlation probability (weight selection) method to improve naive Bayes performance; the χ²-statistic-based attribute selection removes part of the redundant attributes and builds an attribute-learning classifier with better classification results. The method therefore has an expressive capability that better matches the actual situation in main transformer fault diagnosis.
Drawings
FIG. 1 is a schematic flow diagram of a method of an exemplary embodiment of the present invention;
FIG. 2 is a flow chart of selecting the optimal reduced subset RAS with the χ²-statistic-based attribute selection method in an exemplary embodiment of the invention.
Detailed Description
The invention will be further described with reference to the drawings and the exemplary embodiments. As shown in FIG. 1, the transformer fault diagnosis method based on weighted and selective naive Bayes includes the following steps:
step 1: collecting historical fault data of a main transformer, wherein the historical fault data comprises attribute data and fault types, discretizing the condition attribute data, and dividing the data into a training set and a testing set, wherein the fault types are decision attributes;
according to the operation experience and the transformer state evaluation guide rule, the common fault types of the transformer are divided into 10 types, and as shown in Table 1, the normal class of the transformer is classified as C0. According to the analysis and judgment guide rule of the dissolved gas in the transformer oil in DL/T722-2014 and expert experience in China, representative fault characteristics are selected for judging the fault type of the transformer, as shown in Table 2. And converting the attribute variables into discrete data suitable for the classifier to identify by adopting a discretization method of threshold segmentation, wherein the discretization standard of part of the attributes is shown in a table 3.
625 transformer fault records with definite conclusions were collected, of which 418 were used as the training set and 207 as the test set, to build the transformer fault diagnosis model.
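As an illustration of this data preparation, the sketch below derives the ratio features listed in Table 2 from raw gas readings, discretizes an attribute by threshold segmentation, and splits 625 records into 418 training and 207 test cases; the gas values and cut points are placeholders, since the actual thresholds are in Table 3, which is only available as an image.

```python
# Sketch of step 1 data preparation: ratio features, threshold discretization
# and the 418/207 train-test split. Thresholds and readings are hypothetical.
import random

def build_features(record: dict) -> dict:
    feats = dict(record)
    feats["CH4/H2"] = record["CH4"] / record["H2"]
    feats["C2H4/C2H6"] = record["C2H4"] / record["C2H6"]
    feats["C2H2/C2H4"] = record["C2H2"] / record["C2H4"]
    feats["CO2/CO"] = record["CO2"] / record["CO"]
    return feats

def discretize(value: float, thresholds: list) -> int:
    """Threshold segmentation: map a continuous value to a level index."""
    return sum(value > t for t in thresholds)

if __name__ == "__main__":
    record = {"H2": 150.0, "CH4": 40.0, "C2H6": 12.0, "C2H4": 30.0,
              "C2H2": 6.0, "CO": 500.0, "CO2": 3000.0}
    feats = build_features(record)
    print("H2 level:", discretize(feats["H2"], thresholds=[150.0, 300.0]))
    samples = list(range(625))
    random.shuffle(samples)
    train, test = samples[:418], samples[418:]
    print(len(train), "training cases,", len(test), "test cases")
```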
TABLE 1 Transformer fault types

Class  Fault type                   Class  Fault type
C1     Winding fault                C6     Insulation aging
C2     Core fault                   C7     Insulating oil deterioration
C3     Current loop overheating     C8     Partial discharge
C4     Water ingress and dampness   C9     Oil flow discharge
C5     Arc discharge                C10    Poor contact
TABLE 2 Transformer fault characteristics

Number  Attribute            Number  Attribute
X1      H2                   X8      CH4/H2
X2      CH4                  X9      C2H4/C2H6
X3      C2H6                 X10     CO2/CO
X4      C2H4                 X11     Dielectric loss of insulating oil
X5      C2H2                 X12     Water content in oil
X6      Total hydrocarbons   X13     Breakdown voltage of oil
X7      C2H2/C2H4            X14     Polarization index
TABLE 3 attribute discretization criteria
(Table image: discretization thresholds for part of the attributes.)
Step 2: select the optimal reduced attribute subset RAS using the χ²-statistic-based attribute selection method;
In step 2, as shown in FIG. 2, the χ²-statistic-based attribute selection method selects the optimal reduced subset RAS through the following calculation steps:
Step 2.1: calculate the attribute correlation ψ(A_i, C) between every conditional attribute and the decision attribute in the training set and store the absolute values in table AR;
Step 2.2: sort all conditional attributes in descending order of their values in table AR and store the ordering in table AS;
Step 2.3: select the first attribute A_j in AS and compute in turn its correlation with each remaining conditional attribute A_i; if both ψ(A_j, C) > ψ(A_i, C) and ψ(A_j, A_i) > ψ(A_i, C) hold, A_i is called a redundant attribute of A_j and the conditional attribute A_i is deleted;
Step 2.4: select the next attribute in AS and delete its redundant attributes as in step 2.3;
Step 2.5: repeat step 2.4 until all attributes in AS have been examined;
Step 2.6: the optimal reduced attribute subset RAS is finally obtained.
Step 3: prior probability learning: compute from the training set the prior probabilities of all decision attribute values (fault classes) and the conditional probabilities of the attributes in the RAS, and store the results in the class prior table CP and the conditional probability table CPT, respectively;
and 4, step 4: establishing an attribute weight value table by using a correlation probability method;
and calculating all weights of the attributes in the RAS table under different categories, and storing the weights in the AW table.
Step 5: test the model performance with the test set. For the conditional attributes in the test set, call the prior probability table CP and the conditional probability table CPT, examine in turn the current values of the attributes in the optimal reduced subset RAS, call the corresponding weights from the weight table AW, compute for each test case the posterior probability of belonging to each class according to the formula given above, find the maximum posterior probability and assign the class, and evaluate the accuracy of the model against the actual classes of the test data.
Naive Bayes (NB), weighted naive Bayes (WNB), selective naive Bayes (RNB) and the weighted and selective naive Bayes (WRNB) of this invention were each used to diagnose the test set; the diagnosis results for different numbers of training samples are shown in Table 4.
TABLE 4 diagnosis accuracy based on different training sample numbers
(Table images: diagnosis accuracy of NB, WNB, RNB and WRNB for different numbers of training samples.)
Verification on the test set shows that, although weighted naive Bayes and selective naive Bayes each improve the classification accuracy to some extent over the plain naive Bayes model, the naive Bayes model improved by combining the weighting and selection methods achieves a higher classification accuracy than either single improvement measure across the different numbers of training samples.
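Accuracy comparisons of this kind come from an evaluation loop like the one sketched below; a toy stand-in classifier and synthetic cases are used here so that the sketch runs on its own.

```python
# Sketch of test-set evaluation: predict each case and compare with its label.

def accuracy(cases: list, labels: list, predict_fn) -> float:
    correct = sum(1 for x, y in zip(cases, labels) if predict_fn(x) == y)
    return correct / len(labels)

if __name__ == "__main__":
    cases = [{"X5": "high"}, {"X5": "low"}, {"X5": "high"}]
    labels = ["C5", "C8", "C5"]

    def toy_predict(x):
        return "C5" if x["X5"] == "high" else "C8"

    print(f"accuracy = {accuracy(cases, labels, toy_predict):.2%}")
```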
For example, for a main transformer of a power plant put into operation in May 1991, part of the attribute data is shown in the table below.
(Table image: part of the attribute data for this transformer.)
From the results of the chromatographic analysis, the C2H2 content exceeds the standard while the total hydrocarbons are not high; the three-ratio code is 101, which corresponds to a low-energy discharge fault. Using the method proposed here, the calculated probability of class C8 is 78%, the maximum value, and this judgement is consistent with the actual result.

Claims (3)

1. A transformer fault diagnosis method based on weighted and selective naive Bayes, characterized in that a χ²-statistic-based attribute selection method removes part of the redundant attributes and builds an attribute-learning classifier with better classification results, and the naive Bayes method uses a correlation probability method for weight design to distinguish the contribution of different conditional attributes to the decision classification by assigning each attribute a corresponding weight, the method comprising the following steps:
step 1: collecting historical fault data of the main transformer, the historical fault data comprising attribute data and fault types, the fault type being the decision attribute; discretizing the conditional attribute data and dividing the data into a training set and a test set;
step 2: selecting the optimal reduced attribute subset RAS using the χ²-statistic-based attribute selection method;
the main transformer attribute data correlation measurement method comprises the following steps: for two attributes A, B, the values are ai,(i=1,2,…,m),bj,(j=1,2,…,n),x2Calculating a frequency table based on the two attributes for the attribute correlation of the statistics;
and step 3: the prior probability learning comprises the steps of calculating the prior probability of all decision attributes and the conditional probability of the attributes in the RAS by a training set, and respectively storing results into a CP (content provider) table and a CPT (content provider table) table;
and 4, step 4: establishing a weight value table of attribute data by using a correlation probability method;
calculating all weights of attributes in the RAS table under different categories, and storing the weights in a weight value table AW table;
and 5: testing the model performance by using the test set; for the conditional attributes in the test set, calling a prior probability table CP and a conditional probability table CPT, sequentially investigating the current values of the attributes in the optimal reduction subset RAS, calling the corresponding weight in a weight value table AW according to the attributes, sequentially calculating the posterior probability of each test case belonging to different classes according to the following formula,
finding out the maximum posterior probability, distributing classes, and evaluating the accuracy of the model according to the actual class of the test data;
P(C_i | x) ∝ p(C_i) · Π_{A_j ∈ RAS} p(a_j^k | C_i)^{w_ij}
wherein p(C_i) is the prior probability of class C_i, p(a_j^k | C_i) is the conditional probability of the attribute value a_j^k given class C_i, and w_ij is the corresponding attribute weight.
2. The transformer fault diagnosis method based on weighted and selective naive Bayes according to claim 1, wherein in step 2 the frequency (contingency) table of the two attributes is:
        b_1              b_2              …    b_n              SUM
a_1     f_11             f_12             …    f_1n             A_1 = Σ_j f_1j
a_2     f_21             f_22             …    f_2n             A_2 = Σ_j f_2j
…       …                …                …    …                …
a_m     f_m1             f_m2             …    f_mn             A_m = Σ_j f_mj
SUM     B_1 = Σ_i f_i1   B_2 = Σ_i f_i2   …    B_n = Σ_i f_in   f = Σ_i Σ_j f_ij

χ² = Σ_{i=1}^{m} Σ_{j=1}^{n} (f_ij − A_i B_j / f)² / (A_i B_j / f)
wherein f_ij denotes the frequency with which a_i and b_j occur together, A_i denotes the frequency of occurrence of a_i, B_j denotes the frequency of occurrence of b_j, and f denotes the total sample size;
from x2And (3) obtaining the correlation measurement of the row and column attribute variables in the m x n list data by statistics:
Figure FDA0002840365350000023
the larger the absolute value of psi, the stronger the attribute correlation, and the weaker the attribute correlation when the absolute value is close to 0, which is also applicable to the correlation measurement between the conditional attribute and the decision attribute;
step 2.1: calculating the attribute correlation psi (A) between all condition attributes and decision attributes in the training setiC) and storing the absolute values thereof in a table AR;
step 2.2: sorting all condition attributes in a descending order according to the values in the table AR, and storing the sorting result into the table AS;
step 2.3: selecting the first attribute in the AS table, calculating the correlation between the attribute and the rest attributes in turn, calculating the correlation between the conditional attributes, if the condition psi (A) is satisfied at the same timej,C)>ψ(AiC) and psi (A)j,Ai)>ψ(AiC), then called AiIs AjRedundant attribute of (1), deletion condition attribute Ai
Step 2.4: selecting the next attribute in the AS, and deleting the redundant attribute of the attribute according to the step 2.3;
step 2.5: repeating the step 2.4 until all the attributes in the AS are judged;
step 2.6: and finally obtaining the optimal reduction attribute subset RAS.
3. The transformer fault diagnosis method based on weighted and selective naive Bayes according to claim 1, characterized in that: for a conditional attribute A_j with possible values a_j^k, where k ∈ [1, m] indicates that A_j has m possible values, every class C_i is associated with a probability p(A_j|rel) that A_j is correlated with C_i and a probability p(A_j|norel) that it is not:
(Equation images: the definitions of p(A_j|rel) and p(A_j|norel).)
The weight of the attribute is:
(Equation image: the attribute weight computed from p(A_j|rel) and p(A_j|norel).)
CN202011489636.9A 2020-12-16 2020-12-16 Transformer fault diagnosis method based on weighted sum selective naive Bayes Pending CN112861417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011489636.9A CN112861417A (en) 2020-12-16 2020-12-16 Transformer fault diagnosis method based on weighted sum selective naive Bayes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011489636.9A CN112861417A (en) 2020-12-16 2020-12-16 Transformer fault diagnosis method based on weighted sum selective naive Bayes

Publications (1)

Publication Number Publication Date
CN112861417A true CN112861417A (en) 2021-05-28

Family

ID=75997401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011489636.9A Pending CN112861417A (en) 2020-12-16 2020-12-16 Transformer fault diagnosis method based on weighted sum selective naive Bayes

Country Status (1)

Country Link
CN (1) CN112861417A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569315A (en) * 2021-07-27 2021-10-29 中铁大桥局集团有限公司 Bridge cluster dynamic evaluation method, device, equipment and readable storage medium
CN113591396A (en) * 2021-08-12 2021-11-02 国网江苏省电力有限公司常州供电分公司 Power grid component fault diagnosis method based on naive Bayesian network
CN113807433A (en) * 2021-09-16 2021-12-17 青岛中科曙光科技服务有限公司 Data classification method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105530122A (en) * 2015-12-03 2016-04-27 国网江西省电力公司信息通信分公司 Network failure diagnosis method based on selective hidden Naive Bayesian classifier
CN110568286A (en) * 2019-09-12 2019-12-13 齐鲁工业大学 Transformer fault diagnosis method and system based on weighted double-hidden naive Bayes
CN111709495A (en) * 2020-07-17 2020-09-25 西南石油大学 Transformer fault diagnosis method based on NBC model

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105530122A (en) * 2015-12-03 2016-04-27 国网江西省电力公司信息通信分公司 Network failure diagnosis method based on selective hidden Naive Bayesian classifier
CN110568286A (en) * 2019-09-12 2019-12-13 齐鲁工业大学 Transformer fault diagnosis method and system based on weighted double-hidden naive Bayes
CN111709495A (en) * 2020-07-17 2020-09-25 西南石油大学 Transformer fault diagnosis method based on NBC model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孙秀亮 (Sun Xiuliang): "Research on selective naive Bayes classification based on attribute weighting" (基于属性加权的选择性朴素贝叶斯分类研究), China Master's Theses Full-text Database, Information Science and Technology, vol. 2014, 15 April 2014 (2014-04-15), pages 21-30 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569315A (en) * 2021-07-27 2021-10-29 中铁大桥局集团有限公司 Bridge cluster dynamic evaluation method, device, equipment and readable storage medium
CN113569315B (en) * 2021-07-27 2023-11-28 中铁大桥局集团有限公司 Bridge cluster dynamic evaluation method, device, equipment and readable storage medium
CN113591396A (en) * 2021-08-12 2021-11-02 国网江苏省电力有限公司常州供电分公司 Power grid component fault diagnosis method based on naive Bayesian network
CN113591396B (en) * 2021-08-12 2024-03-05 国网江苏省电力有限公司常州供电分公司 Power grid component fault diagnosis method based on naive Bayesian network
CN113807433A (en) * 2021-09-16 2021-12-17 青岛中科曙光科技服务有限公司 Data classification method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112861417A (en) Transformer fault diagnosis method based on weighted sum selective naive Bayes
CN107301296B (en) Data-based qualitative analysis method for circuit breaker fault influence factors
CN109063734B (en) Oil-immersed transformer fault state evaluation method combining multi-level local density clustering
CN108304567B (en) Method and system for identifying working condition mode and classifying data of high-voltage transformer
CN111478904B (en) Method and device for detecting communication anomaly of Internet of things equipment based on concept drift
CN111507504A (en) Adaboost integrated learning power grid fault diagnosis system and method based on data resampling
CN115048985B (en) Electrical equipment fault discrimination method
CN115563563A (en) Fault diagnosis method and device based on transformer oil chromatographic analysis
CN113205125A (en) XGboost-based extra-high voltage converter valve operation state evaluation method
CN111950645A (en) Method for improving class imbalance classification performance by improving random forest
CN110197222A (en) A method of based on multi-category support vector machines transformer fault diagnosis
CN115329908A (en) Power transformer fault diagnosis method based on deep learning
CN115881238A (en) Model training method, transformer fault diagnosis method and related device
CN115600088A (en) Distribution transformer fault diagnosis method based on vibration signals
CN116562114A (en) Power transformer fault diagnosis method based on graph convolution neural network
CN114184861A (en) Fault diagnosis method for oil-immersed transformer
CN111695288A (en) Transformer fault diagnosis method based on Apriori-BP algorithm
CN112085064B (en) Transformer fault diagnosis method based on multi-classification probability output of support vector machine
CN114091549A (en) Equipment fault diagnosis method based on deep residual error network
CN114358193A (en) Transformer state diagnosis method based on oil chromatography, terminal and storage medium
CN113469252A (en) Extra-high voltage converter valve operation state evaluation method considering unbalanced samples
CN111737993B (en) Method for extracting equipment health state from fault defect text of power distribution network equipment
CN112817954A (en) Missing value interpolation method based on multi-method ensemble learning
CN116992362A (en) Transformer fault characterization feature quantity screening method and device based on Xia Puli value
CN115561596A (en) Lightning arrester insulation state assessment method based on random forest

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination