CN108229581A - Transformer fault diagnosis method based on improved multi-class AdaBoost - Google Patents


Info

Publication number
CN108229581A
CN108229581A (application CN201810098080.7A)
Authority
CN
China
Prior art keywords
sample
classification
label
kelm
weak classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810098080.7A
Other languages
Chinese (zh)
Inventor
黄新波
王享
田毅
朱永灿
魏雪倩
吴明松
杨文博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Polytechnic University
Original Assignee
Xian Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Polytechnic University filed Critical Xian Polytechnic University
Priority to CN201810098080.7A priority Critical patent/CN108229581A/en
Publication of CN108229581A publication Critical patent/CN108229581A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Abstract

The invention discloses a transformer fault diagnosis method based on improved multi-class AdaBoost. First, a particle-swarm-optimized kernel extreme learning machine (PSO-KELM) model is trained on transformer characteristic data. Then, to further improve transformer fault diagnosis accuracy, PSO-KELM is used as the weak classifier and boosted further with the AdaBoost algorithm. Finally, at each iteration the PSO-KELM weak classifiers trained so far are combined into an interim strong classifier; from its classification results the similarity between labels is counted and the sample weights are adjusted dynamically. This improves the accuracy of transformer fault diagnosis and solves the problem of low accuracy in diagnosing internal faults of power transformers in the prior art.

Description

Transformer fault diagnosis method based on improved multi-class AdaBoost
Technical field
The invention belongs to the technical field of on-line transformer fault monitoring, and in particular relates to a transformer fault diagnosis method based on improved multi-class AdaBoost.
Background technology
Power transformers are key components of the power grid. As one of the most critical and most expensive pieces of power transmission and transformation equipment in the electric power system, a power transformer carries out voltage conversion and the distribution and transfer of electric energy, and is of crucial importance to the safe and stable operation of the grid. During long-term operation, however, faults and accidents caused by various internal and external factors are unavoidable, so the transformer's health status must be diagnosed and assessed. At present, internal faults of power transformers are mostly diagnosed by dissolved gas analysis (DGA). In recent decades, with the development of artificial intelligence, algorithms such as artificial neural networks and SVMs have been widely applied in this field, but their fault diagnosis accuracy remains low, for the following reasons: at the algorithm level, artificial neural networks converge slowly and have multiple local minima, and a suitable kernel function is difficult to find for SVMs; moreover, transformer fault diagnosis is a multi-class problem in which samples of different classes show certain similarities, which degrades classifier performance and causes diagnostic errors.
Invention content
The object of the present invention is to provide a transformer fault diagnosis method based on improved multi-class AdaBoost, which solves the problem of low accuracy in diagnosing internal faults of power transformers in the prior art.
The technical solution adopted by the present invention is a transformer fault diagnosis method based on improved multi-class AdaBoost, implemented according to the following steps:
Step 1: let the acquired class-labeled sample set of the oil-immersed transformer be S = {(x1, y1), (x2, y2), ..., (xm, ym)}, where xi = (xi1, xi2, ..., xi5), i = 1, 2, ..., m, are the sample attributes, comprising the five attributes hydrogen, methane, ethane, ethylene and acetylene, and yi ∈ {1, 2, 3, 4, 5, 6} is the class label, with 1-6 corresponding respectively to the normal state, medium-temperature overheating, high-temperature overheating, partial discharge, spark discharge and arc discharge; for a sample xi, the corresponding label yi is one of these six labels; each class of the sample set is split in a 3:1 ratio into training samples L and test samples T;
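A minimal sketch of the per-class 3:1 split described in step 1; the function name and the toy data below are illustrative assumptions, not taken from the patent:

```python
import random

# Hypothetical sketch of step 1: every sample carries five gas-concentration
# attributes (H2, CH4, C2H6, C2H4, C2H2) and a class label in {1,...,6};
# each class is divided 3:1 into training samples L and test samples T.
def stratified_split(samples, train_ratio=0.75, seed=0):
    """Split each class separately in a train_ratio : (1 - train_ratio) ratio."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append((x, y))
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        cut = int(round(len(group) * train_ratio))
        train.extend(group[:cut])
        test.extend(group[cut:])
    return train, test

# Toy data: 8 synthetic samples per label (placeholder values, not real DGA data)
data = [([float(i)] * 5, y) for y in range(1, 7) for i in range(8)]
L, T = stratified_split(data)
```

With 8 samples per class, each class contributes 6 samples to L and 2 to T, preserving the 3:1 ratio per class rather than only globally.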
Step 2: normalize the training samples and test samples separately, then establish the PSO-KELM weak classifier model;
Step 3: plug the PSO-KELM weak classifier model of step 2 into an improved multi-class AdaBoost diagnostic model;
Step 4: use the model obtained in step 3 to classify the samples under test and obtain the final diagnostic result for the test samples.
The present invention is further characterized in that:
step 2 is implemented according to the following sub-steps:
Step 2.1: input the training sample set L = {(x1, y1), (x2, y2), ..., (xn, yn)};
Step 2.2: establish the KELM model of the weak classifier, comprising an input layer, a hidden layer and an output layer;
Step 2.3: use the PSO algorithm to optimize the output weights β of KELM; feed in the pre-processed training sample data to obtain the mapping XN of the input vectors and the initial output weights βint;
Step 2.4: initialize the particle swarm parameters, including setting the swarm size, randomly setting the initial velocities, setting the initial particle positions to βint, and setting the individual and global extrema;
Step 2.5: adopt a dynamic inertia weight strategy, adjusting the inertia weight by a linearly decreasing schedule:
ω(n) = ωmax - (ωmax - ωmin)(n/nmax)
where 0.1 < ωmin < ωmax < 1, nmax is the total number of iterations and n is the current iteration number;
Step 2.6: perform the particle swarm search to find the optimal hidden-layer output weights: at each iteration compute each particle's fitness from the objective function and update the velocities, positions and global optimum; after the iterations the optimal hidden-layer output weights β are obtained, yielding the PSO-KELM model.
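The inertia-weight schedule of step 2.5 can be illustrated with a minimal particle swarm optimizer. Here PSO minimizes a simple sphere function rather than the KELM objective, and all parameter values (swarm size, c1, c2, the search interval) are assumptions made for the sketch:

```python
import random

def pso_minimize(f, dim, n_particles=20, n_max=50,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO using the linearly decreasing inertia weight of step 2.5:
    w(n) = w_max - (w_max - w_min) * (n / n_max)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # individual extrema
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global extremum
    for n in range(n_max):
        w = w_max - (w_max - w_min) * (n / n_max)  # inertia schedule
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])                     # fitness from objective function
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso_minimize(lambda p: sum(x * x for x in p), dim=2)
```

In the patent the particle positions encode KELM's output weights and the fitness is the classifier's objective; the sphere function above only demonstrates the velocity update and the decreasing inertia weight.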
The KELM model of the weak classifier in step 2.2 is established according to the following sub-steps:
Step 2.2.1: define the kernel matrix ΩELM, taking the τ-th training sample xτ = (xτ1, xτ2, ..., xτ5) as an example:
where X = (x1, x2, ..., xn) is the set of training-sample feature attributes, H = h(X) is the hidden-layer output matrix of the extreme learning machine, h(xτ) is the hidden-layer output vector for input vector xτ, and K(xτ, X) is the kernel function over the input training set L; since the radial basis kernel function has proven to perform well, the RBF kernel is chosen, i.e.
k(xτ, X) = exp(-||xτ - X||²/σ)   (2)
where σ is the parameter controlling the width of the kernel function;
Step 2.2.2: add the parameter I/C to the main diagonal of HH^T and solve for the weight vector β*:
β* = H^T(I/C + HH^T)^(-1)T   (3)
where I is the identity matrix, C is the penalty coefficient, and T is the expected output vector;
Step 2.2.3: combining equations (1)-(3), the output of the KELM model is:
and the output weights of the KELM model are:
β = (I/C + ΩELM)^(-1)T   (5).
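Equations (2) and (5) translate directly into a small KELM trainer. The class name, toy data and parameter values below are illustrative assumptions (a real application would feed normalized DGA features and labels 1-6), and numpy's linear solver stands in for the matrix inverse:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """K(a, b) = exp(-||a - b||^2 / sigma), the RBF kernel of equation (2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma)

class KELM:
    """Kernel extreme learning machine: beta = (I/C + Omega)^-1 T, equation (5)."""
    def __init__(self, C=100.0, sigma=1.0):
        self.C, self.sigma = C, sigma

    def fit(self, X, y, n_classes):
        self.X = X
        T = np.eye(n_classes)[y]              # one-hot expected output vectors
        omega = rbf_kernel(X, X, self.sigma)  # kernel matrix Omega_ELM
        # Solve (I/C + Omega) beta = T instead of forming the inverse explicitly
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + omega, T)
        return self

    def predict(self, Xq):
        # Output of eq. (4): kernel row K(x, X) times the output weights beta
        return rbf_kernel(Xq, self.X, self.sigma).dot(self.beta).argmax(1)

# Toy check on two well-separated clusters (labels 0/1 for brevity)
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
model = KELM().fit(X, y, n_classes=2)
pred = model.predict(X)
```

In the patent, C and σ (and the output weights) are the quantities tuned by the PSO search of steps 2.3-2.6; here they are fixed by hand.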
Step 3 is implemented according to the following sub-steps:
Step 3.1: input the training sample set L = {(x1, y1), (x2, y2), ..., (xn, yn)}, the sample weights D, the weak classifier g: X × Y → R, and the number of iterations T;
Step 3.2: initialization: D1(i) = 1/n, where i = 1, ..., n;
Step 3.3: loop over t = 1, ..., T, where t is the loop index:
Step a: according to the sample distribution Dt, select a certain number of the PSO-KELM weak classifiers established in step 2 and train gt: X × Y → R;
Step b: during each round of weak-classifier training, first form an interim strong classifier from the weak classifiers trained in previous rounds; from its classification results, count the degree of "confusion" between classes, i.e. the probability of misclassification, and judge the similarity between labels;
Step c: assign the weight-adjustment factor st: if l ∈ Ri, set st(l, i) = c2, otherwise st(l, i) = c1, where c1, c2 > 0;
Step d: compute the weight at of the weak classifier:
where
Step e: update the weights: if l ∈ Yi,
otherwise,
where
Step f: output the strong classifier:
Step b is implemented according to the following sub-steps:
Step b.1: compute the interim strong classifier:
where k = 1, 2, ..., T, ak is the weight of the k-th weak classifier, and gk is the k-th weak classifier;
Step b.2: let l = (l1, l2, ..., l6) denote the six label classes of the samples; after each round of classification, count the classifier's results, where the counted quantity is the number of samples of a given label that the classifier misclassifies as lφ;
Step b.3: compute the probability that the classifier misclassifies a given label as lφ:
Step b.4: taking the i-th training sample as an example, partition the label space of xi into the label set Yi (Yi = yi), the label-related set Ri and the label-unrelated set Ui as follows: for a given label, if the misclassification probability satisfies the similarity condition, label lφ is considered a similar label and lφ ∈ Ri is set; otherwise lφ ∈ Ui;
Step b.5: if l ∈ Yi, then Yi(l) = 1; otherwise Yi(l) = 0.
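Steps b.2-b.4 amount to building confusion statistics and thresholding them. Since the patent's exact threshold condition is rendered only as an image in this copy, the sketch below assumes a simple cut-off theta on the misclassification probability; the function name and toy data are illustrative:

```python
from collections import Counter

def label_similarity_sets(true_labels, pred_labels, labels, theta=0.2):
    """For each true label psi, partition the other labels into a related
    set R (labels psi is often confused with, step b.4's R_i) and an
    unrelated set U. theta is an assumed cut-off on the misclassification
    probability of step b.3; the patent's own condition is not reproduced."""
    total = Counter(true_labels)                       # samples per true label
    confused = Counter(zip(true_labels, pred_labels))  # counts n(psi -> phi)
    R, U = {}, {}
    for psi in labels:
        R[psi], U[psi] = set(), set()
        for phi in labels:
            if phi == psi:
                continue
            # Step b.3: probability that label psi is misclassified as phi
            p = confused[(psi, phi)] / total[psi] if total[psi] else 0.0
            (R[psi] if p >= theta else U[psi]).add(phi)
    return R, U

# Toy example: label 1 is often mistaken for label 2, never for label 3
true = [1, 1, 1, 1, 2, 2, 3, 3]
pred = [1, 2, 2, 1, 2, 2, 3, 3]
R, U = label_similarity_sets(true, pred, labels=[1, 2, 3])
```

Step c then uses this partition directly: the adjustment factor is c2 for labels in the related set R and c1 otherwise, so samples confused with similar labels receive a different misclassification cost.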
The beneficial effects of the invention are as follows: the transformer fault diagnosis method based on improved multi-class AdaBoost uses a particle-swarm-optimized kernel extreme learning machine (PSO-KELM) as the weak classifier of AdaBoost and improves the weight-readjustment rule of the weak classifiers by using the principle of label correlation; during the iterative training of the weak classifiers, the classification behaviour of the interim strong classifier trained so far is taken into account and the misclassification cost of each sample is adjusted dynamically, boosting the multi-class AdaBoost algorithm and improving the accuracy of transformer fault diagnosis.
Description of the drawings
Fig. 1 is a schematic diagram of the KELM structure in the transformer fault diagnosis method based on improved multi-class AdaBoost of the present invention;
Fig. 2 is the PSO-KELM flow chart in the transformer fault diagnosis method based on improved multi-class AdaBoost of the present invention;
Fig. 3 is the PSO-KELM-AdaBoost flow chart in the transformer fault diagnosis method based on improved multi-class AdaBoost of the present invention.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
AdaBoost is an iterative algorithm whose core idea is to train different classifiers, i.e. weak classifiers, on the same training set and then combine these weak classifiers into a stronger final classifier. The improved multi-class AdaBoost transformer fault diagnosis method uses PSO-KELM as the weak classifier; during each round of weak-classifier training, it counts the misclassification probabilities between classes according to the classification results of the interim strong classifier formed from the previously trained weak classifiers, and dynamically derives the sample-weight adjustment factor.
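The train-reweight-combine loop described above can be sketched as follows. The patent's own classifier-weight and sample-weight formulas appear only as images in this copy, so the sketch substitutes the standard SAMME multi-class weight a = ln((1 - err)/err) + ln(K - 1), and a decision stump stands in for the PSO-KELM weak classifier; both substitutions are assumptions made purely to illustrate the scheme:

```python
import math
from collections import defaultdict

def train_stump(X, y, w):
    """Weighted single-feature threshold stump, standing in for PSO-KELM."""
    best = None
    for f in range(len(X[0])):
        for t in {x[f] for x in X}:
            lo, hi = defaultdict(float), defaultdict(float)
            for xi, yi, wi in zip(X, y, w):
                (hi if xi[f] > t else lo)[yi] += wi   # weighted votes per side
            la = max(lo, key=lo.get) if lo else y[0]
            lb = max(hi, key=hi.get) if hi else y[0]
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if (lb if xi[f] > t else la) != yi)
            if best is None or err < best[0]:
                best = (err, f, t, la, lb)
    err, f, t, la, lb = best
    return (lambda x: lb if x[f] > t else la), err

def adaboost_multiclass(X, y, n_classes, n_rounds=10):
    """SAMME-style multi-class AdaBoost: train, reweight, combine."""
    n = len(X)
    w = [1.0 / n] * n                       # D1(i) = 1/n
    ensemble = []
    for _ in range(n_rounds):
        g, err = train_stump(X, y, w)
        err = max(err, 1e-10)
        if err >= 1 - 1.0 / n_classes:      # no better than chance: stop
            break
        a = math.log((1 - err) / err) + math.log(n_classes - 1)
        w = [wi * (math.exp(a) if g(xi) != yi else 1.0)
             for xi, yi, wi in zip(X, y, w)]  # upweight misclassified samples
        s = sum(w)
        w = [wi / s for wi in w]
        ensemble.append((a, g))

    def predict(x):
        score = defaultdict(float)
        for a, g in ensemble:
            score[g(x)] += a                # weighted vote of weak classifiers
        return max(score, key=score.get)
    return predict

# Toy 3-class problem on one feature
X = [[0.0], [0.2], [1.0], [1.2], [2.0], [2.2]]
y = [1, 1, 2, 2, 3, 3]
predict = adaboost_multiclass(X, y, n_classes=3)
pred = [predict(x) for x in X]
```

The patent's improvement replaces the uniform misclassification penalty in the weight update with the label-similarity-dependent factor st of step c, so confusable classes are penalized differently.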
The transformer fault diagnosis method of the present invention based on improved multi-class AdaBoost is implemented according to the following steps:
Step 1: let the acquired class-labeled sample set of the oil-immersed transformer be S = {(x1, y1), (x2, y2), ..., (xm, ym)}, where xi = (xi1, xi2, ..., xi5), i = 1, 2, ..., m, are the sample attributes, comprising the five attributes hydrogen, methane, ethane, ethylene and acetylene, and yi ∈ {1, 2, 3, 4, 5, 6}, i = 1, 2, ..., m, is the class label, with 1-6 corresponding respectively to the normal state, medium-temperature overheating, high-temperature overheating, partial discharge, spark discharge and arc discharge; for a sample xi, the corresponding label yi is one of these six labels; each class of the sample set is split in a 3:1 ratio into training samples L and test samples T;
Step 2: normalize the training samples and test samples separately, then establish the PSO-KELM weak classifier model, as shown in Fig. 3, according to the following sub-steps:
Step 2.1: input the training sample set L = {(x1, y1), (x2, y2), ..., (xn, yn)};
Step 2.2: establish the KELM model of the weak classifier, as shown in Fig. 1, comprising an input layer, a hidden layer and an output layer, according to the following sub-steps:
Step 2.2.1: define the kernel matrix ΩELM, taking the τ-th training sample xτ = (xτ1, xτ2, ..., xτ5) as an example:
where X = (x1, x2, ..., xn) is the set of training-sample feature attributes, H = h(X) is the hidden-layer output matrix of the extreme learning machine, h(xτ) is the hidden-layer output vector for input vector xτ, and K(xτ, X) is the kernel function over the input training set L; since the radial basis kernel function has proven to perform well, the RBF kernel is chosen, i.e.
k(xτ, X) = exp(-||xτ - X||²/σ)   (2)
where σ is the parameter controlling the width of the kernel function;
Step 2.2.2: add the parameter I/C to the main diagonal of HH^T and solve for the weight vector β*:
β* = H^T(I/C + HH^T)^(-1)T   (3)
where I is the identity matrix, C is the penalty coefficient, and T is the expected output vector;
Step 2.2.3: combining equations (1)-(3), the output of the KELM model is:
and the output weights of the KELM model are:
β = (I/C + ΩELM)^(-1)T   (5);
Step 2.3: use the PSO algorithm to optimize the output weights β of KELM; feed in the pre-processed training sample data to obtain the mapping XN of the input vectors and the initial output weights βint;
Step 2.4: initialize the particle swarm parameters, including setting the swarm size, randomly setting the initial velocities, setting the initial particle positions to βint, and setting the individual and global extrema;
Step 2.5: the inertia weight ω is an important parameter in the particle-velocity update of the PSO algorithm: a larger inertia weight strengthens the algorithm's global search ability, while a smaller one strengthens its local search ability; a dynamic inertia-weight strategy is therefore adopted, adjusting the inertia weight by a linearly decreasing schedule:
ω(n) = ωmax - (ωmax - ωmin)(n/nmax)
where 0.1 < ωmin < ωmax < 1, nmax is the total number of iterations and n is the current iteration number;
Step 2.6: perform the particle swarm search to find the optimal hidden-layer output weights, as shown in Fig. 2: at each iteration compute each particle's fitness from the objective function and update the velocities, positions and global optimum; after the iterations the optimal hidden-layer output weights β are obtained, yielding the PSO-KELM model;
Step 3: plug the PSO-KELM weak classifier model of step 2 into an improved multi-class AdaBoost diagnostic model, according to the following sub-steps:
Step 3.1: input the training sample set L = {(x1, y1), (x2, y2), ..., (xn, yn)}, the sample weights D, the weak classifier g: X × Y → R, and the number of iterations T;
Step 3.2: initialization: D1(i) = 1/n, where i = 1, ..., n;
Step 3.3: loop over t = 1, ..., T, where t is the loop index:
Step a: according to the sample distribution Dt, select a certain number of the PSO-KELM weak classifiers established in step 2 and train gt: X × Y → R;
Step b: during each round of weak-classifier training, first form an interim strong classifier from the weak classifiers trained in previous rounds; from its classification results, count the degree of "confusion" between classes, i.e. the probability of misclassification, and judge the similarity between labels, according to the following sub-steps:
Step b.1: compute the interim strong classifier:
where k = 1, 2, ..., T, ak is the weight of the k-th weak classifier, and gk is the k-th weak classifier;
Step b.2: let l = (l1, l2, ..., l6) denote the six label classes of the samples; after each round of classification, count the classifier's results, where the counted quantity is the number of samples of a given label that the classifier misclassifies as lφ;
Step b.3: compute the probability that the classifier misclassifies a given label as lφ:
Step b.4: taking the i-th training sample as an example, partition the label space of xi into the label set Yi (Yi = yi), the label-related set Ri and the label-unrelated set Ui as follows: for a given label, if the misclassification probability satisfies the similarity condition, label lφ is considered a similar label and lφ ∈ Ri is set; otherwise lφ ∈ Ui;
Step b.5: if l ∈ Yi, then Yi(l) = 1; otherwise Yi(l) = 0;
Step c: assign the weight-adjustment factor st: if l ∈ Ri, set st(l, i) = c2, otherwise st(l, i) = c1, where c1, c2 > 0;
Step d: compute the weight at of the weak classifier:
where
Step e: update the weights: if l ∈ Yi,
otherwise,
where
Step f: output the strong classifier:
Step 4: use the model obtained in step 3 to classify the samples under test and obtain the final diagnostic result for the test samples.
The transformer fault diagnosis method of the present invention based on improved multi-class AdaBoost upgrades two-class AdaBoost to a multi-class strong learner and diagnoses transformer faults with the improved multi-class AdaBoost; the structure is simple and not prone to over-fitting. Using the PSO-KELM model as the weak classifier of multi-class AdaBoost removes the need to set parameters such as the number of hidden-layer nodes, initial weights and biases; the model generalizes well, trains and recognizes quickly, is highly stable, and can raise the fault recognition rate. The weights of the PSO-KELM classifiers are adjusted using the correlation between labels and, combined with the classification results of the interim strong classifier at each iteration, the misclassification cost of each sample is adjusted dynamically, raising the accuracy of fault diagnosis.

Claims (5)

1. A transformer fault diagnosis method based on improved multi-class AdaBoost, characterized in that it is implemented according to the following steps:
Step 1: let the acquired class-labeled sample set of the oil-immersed transformer be S = {(x1, y1), (x2, y2), ..., (xm, ym)}, where xi = (xi1, xi2, ..., xi5), i = 1, 2, ..., m, are the sample attributes, comprising the five attributes hydrogen, methane, ethane, ethylene and acetylene, and yi ∈ {1, 2, 3, 4, 5, 6} is the class label, with 1-6 corresponding respectively to the normal state, medium-temperature overheating, high-temperature overheating, partial discharge, spark discharge and arc discharge; for a sample xi, the corresponding label yi is one of these six labels; each class of the sample set is split in a 3:1 ratio into training samples L and test samples T;
Step 2: normalize the training samples and test samples separately, then establish the PSO-KELM weak classifier model;
Step 3: plug the PSO-KELM weak classifier model of step 2 into an improved multi-class AdaBoost diagnostic model;
Step 4: use the model obtained in step 3 to classify the samples under test and obtain the final diagnostic result for the test samples.
2. The transformer fault diagnosis method based on improved multi-class AdaBoost according to claim 1, characterized in that step 2 is implemented according to the following sub-steps:
Step 2.1: input the training sample set L = {(x1, y1), (x2, y2), ..., (xn, yn)};
Step 2.2: establish the KELM model of the weak classifier, comprising an input layer, a hidden layer and an output layer;
Step 2.3: use the PSO algorithm to optimize the output weights β of KELM; feed in the pre-processed training sample data to obtain the mapping XN of the input vectors and the initial output weights βint;
Step 2.4: initialize the particle swarm parameters, including setting the swarm size, randomly setting the initial velocities, setting the initial particle positions to βint, and setting the individual and global extrema;
Step 2.5: adopt a dynamic inertia weight strategy, adjusting the inertia weight by a linearly decreasing schedule:
ω(n) = ωmax - (ωmax - ωmin)(n/nmax)
where 0.1 < ωmin < ωmax < 1, nmax is the total number of iterations and n is the current iteration number;
Step 2.6: perform the particle swarm search to find the optimal hidden-layer output weights: at each iteration compute each particle's fitness from the objective function and update the velocities, positions and global optimum; after the iterations the optimal hidden-layer output weights β are obtained, yielding the PSO-KELM model.
3. The transformer fault diagnosis method based on improved multi-class AdaBoost according to claim 2, characterized in that the KELM model of the weak classifier in step 2.2 is established according to the following sub-steps:
Step 2.2.1: define the kernel matrix ΩELM, taking the τ-th training sample xτ = (xτ1, xτ2, ..., xτ5) as an example:
where X = (x1, x2, ..., xn) is the set of training-sample feature attributes, H = h(X) is the hidden-layer output matrix of the extreme learning machine, h(xτ) is the hidden-layer output vector for input vector xτ, and K(xτ, X) is the kernel function over the input training set L; since the radial basis kernel function has proven to perform well, the RBF kernel is chosen, i.e.
k(xτ, X) = exp(-||xτ - X||²/σ)   (2)
where σ is the parameter controlling the width of the kernel function;
Step 2.2.2: add the parameter I/C to the main diagonal of HH^T and solve for the weight vector β*:
β* = H^T(I/C + HH^T)^(-1)T   (3)
where I is the identity matrix, C is the penalty coefficient, and T is the expected output vector;
Step 2.2.3: combining equations (1)-(3), the output of the KELM model is:
and the output weights of the KELM model are:
β = (I/C + ΩELM)^(-1)T   (5).
4. The transformer fault diagnosis method based on improved multi-class AdaBoost according to claim 1, characterized in that step 3 is implemented according to the following sub-steps:
Step 3.1: input the training sample set L = {(x1, y1), (x2, y2), ..., (xn, yn)}, the sample weights D, the weak classifier g: X × Y → R, and the number of iterations T;
Step 3.2: initialization: D1(i) = 1/n, where i = 1, ..., n;
Step 3.3: loop over t = 1, ..., T, where t is the loop index:
Step a: according to the sample distribution Dt, select a certain number of the PSO-KELM weak classifiers established in step 2 and train gt: X × Y → R;
Step b: during each round of weak-classifier training, first form an interim strong classifier from the weak classifiers trained in previous rounds; from its classification results, count the degree of "confusion" between classes, i.e. the probability of misclassification, and judge the similarity between labels;
Step c: assign the weight-adjustment factor st: if l ∈ Ri, set st(l, i) = c2, otherwise st(l, i) = c1, where c1, c2 > 0;
Step d: compute the weight at of the weak classifier:
where
Step e: update the weights: if l ∈ Yi,
otherwise,
where
Step f: output the strong classifier:
5. The transformer fault diagnosis method based on improved multi-class AdaBoost according to claim 4, characterized in that step b is implemented according to the following sub-steps:
Step b.1: compute the interim strong classifier:
where k = 1, 2, ..., T, ak is the weight of the k-th weak classifier, and gk is the k-th weak classifier;
Step b.2: let l = (l1, l2, ..., l6) denote the six label classes of the samples; after each round of classification, count the classifier's results, where the counted quantity is the number of samples of a given label that the classifier misclassifies as lφ;
Step b.3: compute the probability that the classifier misclassifies a given label as lφ:
Step b.4: taking the i-th training sample as an example, partition the label space of xi into the label set Yi (Yi = yi), the label-related set Ri and the label-unrelated set Ui as follows: for a given label, if the misclassification probability satisfies the similarity condition, label lφ is considered a similar label and lφ ∈ Ri is set; otherwise lφ ∈ Ui;
Step b.5: if l ∈ Yi, then Yi(l) = 1; otherwise Yi(l) = 0.
CN201810098080.7A 2018-01-31 2018-01-31 Transformer fault diagnosis method based on improved multi-class AdaBoost Pending CN108229581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810098080.7A CN108229581A (en) 2018-01-31 2018-01-31 Transformer fault diagnosis method based on improved multi-class AdaBoost


Publications (1)

Publication Number Publication Date
CN108229581A true CN108229581A (en) 2018-06-29

Family

ID=62670299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810098080.7A Pending CN108229581A (en) 2018-01-31 2018-01-31 Transformer fault diagnosis method based on improved multi-class AdaBoost

Country Status (1)

Country Link
CN (1) CN108229581A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150268723A1 (en) * 2014-03-21 2015-09-24 Immersion Corporation Automatic tuning of haptic effects
CN106646158A (en) * 2016-12-08 2017-05-10 西安工程大学 Transformer fault diagnosis improving method based on multi-classification support vector machine
CN106874934A (en) * 2017-01-12 2017-06-20 华南理工大学 Sewage disposal method for diagnosing faults based on weighting extreme learning machine Integrated Algorithm


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zhang Xueyun et al., "Network intrusion detection based on KELM-AdaBoost", Information Technology (《信息技术》) *
Xu Yan et al., "A traffic sign recognition algorithm fusing weighted ELM and AdaBoost", Journal of Chinese Computer Systems (《小型微型计算机系统》) *
Pei Fei et al., "Transformer fault diagnosis using a particle-swarm-optimized kernel extreme learning machine", Computer Engineering and Design (《计算机工程与设计》) *
Wei Xueqian et al., "Transformer fault diagnosis based on the multi-class AdaBoost algorithm", Journal of Xi'an Polytechnic University (《西安工程大学学报》) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214460B (en) * 2018-09-21 2022-01-11 西华大学 Power transformer fault diagnosis method based on relative transformation and nuclear entropy component analysis
CN109214460A (en) * 2018-09-21 2019-01-15 西华大学 Method for diagnosing fault of power transformer based on Relative Transformation Yu nuclear entropy constituent analysis
CN109739209A (en) * 2018-12-11 2019-05-10 深圳供电局有限公司 A kind of electric network failure diagnosis method based on Classification Data Mining
CN109858564A (en) * 2019-02-21 2019-06-07 上海电力学院 Modified Adaboost-SVM model generating method suitable for wind electric converter fault diagnosis
CN109858564B (en) * 2019-02-21 2023-05-05 上海电力学院 Improved Adaboost-SVM model generation method suitable for wind power converter fault diagnosis
CN110286161A (en) * 2019-03-28 2019-09-27 清华大学 Main transformer method for diagnosing faults based on adaptive enhancing study
CN110969262A (en) * 2019-12-03 2020-04-07 广东电网有限责任公司 Transformer fault diagnosis method
CN111104986A (en) * 2019-12-25 2020-05-05 南京航空航天大学 Component feature-based multi-fault concurrent diagnosis method for aircraft engine
CN111104986B (en) * 2019-12-25 2023-09-29 南京航空航天大学 Multi-fault concurrent diagnosis method for aero-engine based on component characteristics
CN111767675A (en) * 2020-06-24 2020-10-13 国家电网有限公司大数据中心 Transformer vibration fault monitoring method and device, electronic equipment and storage medium
CN112581265A (en) * 2020-12-23 2021-03-30 百维金科(上海)信息科技有限公司 Internet financial client application fraud detection method based on AdaBoost
CN113341347B (en) * 2021-06-02 2022-05-03 云南大学 Dynamic fault detection method for distribution transformer based on AOELM
CN113341347A (en) * 2021-06-02 2021-09-03 云南大学 Dynamic fault detection method for distribution transformer based on AOELM
CN113610148A (en) * 2021-08-04 2021-11-05 北京化工大学 Fault diagnosis method based on bias weighting AdaBoost
CN113610148B (en) * 2021-08-04 2024-02-02 北京化工大学 Fault diagnosis method based on bias weighted AdaBoost

Similar Documents

Publication Publication Date Title
CN108229581A (en) Transformer fault diagnosis method based on improved multi-class AdaBoost
Mehrotra et al. Generative adversarial residual pairwise networks for one shot learning
CN104155574B (en) Distribution network failure sorting technique based on Adaptive Neuro-fuzzy Inference
CN106646158B (en) Based on multi-category support vector machines transformer fault diagnosis method for improving
CN101464964B (en) Pattern recognition method capable of holding vectorial machine for equipment fault diagnosis
CN106548230A (en) Diagnosis Method of Transformer Faults based on Modified particle swarm optimization neutral net
CN108717149A (en) Diagnosis Method of Transformer Faults based on M-RVM fusion dynamic weightings AdaBoost
CN107103332A (en) A kind of Method Using Relevance Vector Machine sorting technique towards large-scale dataset
CN109298330B (en) High-voltage circuit breaker fault diagnosis method based on GHPSO-BP
CN109214460A (en) Method for diagnosing fault of power transformer based on Relative Transformation Yu nuclear entropy constituent analysis
CN103605711B (en) Construction method and device, classification method and device of support vector machine
CN107656152A (en) One kind is based on GA SVM BP Diagnosis Method of Transformer Faults
CN110398650A (en) Based on k- adjacent to the Diagnosis Method of Transformer Faults of SMOTE and deep learning
CN110880369A (en) Gas marker detection method based on radial basis function neural network and application
CN103177265B (en) High-definition image classification method based on kernel function Yu sparse coding
CN108596274A (en) Image classification method based on convolutional neural networks
CN109871809A (en) A kind of machine learning process intelligence assemble method based on semantic net
CN107392310A (en) neural network model training method and device
CN109901064B (en) ICA-LVQ-based high-voltage circuit breaker fault diagnosis method
CN107423697A (en) Activity recognition method based on non-linear fusion depth 3D convolution description
Dhurandhar et al. Enhancing simple models by exploiting what they already know
CN112990593A (en) Transformer fault diagnosis and state prediction method based on CSO-ANN-EL algorithm
CN114595788B (en) Transformer fault diagnosis method, device and equipment
CN116522121A (en) Transformer online fault diagnosis method under unbalanced small sample condition
Zhou et al. Research on transformer fault diagnosis technology based on adaboost-decision tree and DGA

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180629