CN109871877A - Fault diagnosis method using a neural network ensemble - Google Patents

Fault diagnosis method using a neural network ensemble

Info

Publication number: CN109871877A
Application number: CN201910062471.8A
Authority: CN (China)
Prior art keywords: neural network, AdaBoost, formula, algorithm, training
Priority/filing date: 2019-01-23
Publication date: 2019-06-11
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 徐启华 (Xu Qihua), 师军 (Shi Jun)
Current Assignee: Huaihai Institute of Technology
Original Assignee: Huaihai Institute of Technology

Application filed 2019-01-23 by Huaihai Institute of Technology; priority to CN201910062471.8A; published as CN109871877A.

Abstract

The invention discloses a fault diagnosis method using a neural network ensemble, in the technical field of fault diagnosis, comprising establishing a fault classifier, neural network learning, and neural network integration by the AdaBoost algorithm. Three-layer feedforward neural networks serve as the weak learners and learn in batch mode; the neural network obtained when learning completes serves as a weak fault classifier for AdaBoost training. After AdaBoost training, the weak fault classifiers form the final integrated fault classifier by weighted voting. By simply training multiple ordinary neural networks and integrating their diagnostic results according to the AdaBoost algorithm, the invention can effectively improve the generalization ability of the final fault classifier and overcome the shortcomings of a single neural network, providing a new way for the further application of neural network techniques in intelligent fault diagnosis.

Description

Fault diagnosis method using a neural network ensemble
Technical field
The present invention relates to the technical field of fault diagnosis, and in particular to a fault diagnosis method using a neural network ensemble.
Background technique
In fault diagnosis, diagnostic reasoning is essentially a computational process that maps the fault-signature domain to the fault-cause domain according to a specific mapping relation. In most cases this mapping is a highly complex nonlinear relation. Because neural networks can approximate arbitrary nonlinear mappings, they have received considerable attention, and many kinds of neural networks have been applied to fault diagnosis. However, neural networks still have the following deficiencies in diagnostic applications: (1) the learning process follows the empirical risk minimization (ERM) principle, so overfitting easily occurs with small samples, leading to poor generalization ability; (2) network design lacks a rigorous theoretical framework, and the design result and application effect depend to a large extent on the user's experience. Even for the same problem solved with the same method, different users may obtain very different results. For practical problems lacking prior knowledge, users often must determine a suitable network model, learning algorithm, and network parameters through extensive, laborious, and time-consuming trial and error, and sometimes a satisfactory network is hard to design at all, so the effect of engineering applications is hard to guarantee.
The Boosting algorithm, proposed by Freund and Schapire, is an effective tool for improving the predictive ability of learning systems. The basic idea of this ensemble-learning algorithm is to construct a high-precision predictor from several simple, rough predictors whose precision is only slightly better than random guessing. In many cases, directly obtaining a high-precision estimate is difficult, while generating several coarse estimates better than random guessing is relatively easy; the Boosting algorithm therefore has important practical significance. In 1995, Freund and Schapire revised and perfected the original Boosting algorithm and proposed the AdaBoost (Adaptive Boosting) algorithm, which requires no prior knowledge about the weak learner and can be conveniently applied to practical problems. AdaBoost has received great attention in the machine-learning field since it was proposed; experimental results show that AdaBoost can significantly improve learning precision and generalization ability, and it has become the representative algorithm of the Boosting family.
At present, a common multilayer feedforward neural network is mostly used as the weak learner for fault diagnosis, and the AdaBoost algorithm integrates the neural networks through iteration. Because the required training precision is low, a weak learner is easier to design. By simply training multiple weak learners and combining their predictions, a final fault classifier composed of the integrated neural networks is obtained; this fault classifier can effectively improve generalization ability and overcome the shortcomings of a single neural network. On this basis, the present invention provides a fault diagnosis method using a neural network ensemble to solve the above problems.
Summary of the invention
The purpose of the present invention is to provide a fault diagnosis method using a neural network ensemble, so as to improve the generalization ability of the final fault classifier and overcome the shortcomings of a single neural network, as raised in the background art above.
To achieve the above object, the invention provides the following technical scheme: a fault diagnosis method using a neural network ensemble, comprising establishing a fault classifier and neural network learning. The fault classifier is built from three-layer feedforward neural networks used as the weak learners of AdaBoost. The input of each neural network is an M-dimensional vector, so the input layer has M neurons; the hidden layer has J neurons; and for a k-class fault diagnosis problem the output layer has k neurons, so the output of the network is a k-dimensional vector.
Under the input $x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,M}]$, the input/output relation of the neural network is

$u^I = (w^I)^T x_i + b^I, \quad v^I = f_1(u^I), \quad u^O = (w^O)^T v^I + b^O, \quad v^O = f_2(u^O)$

where the $M \times J$ matrix $w^I$ holds the input-to-hidden weights and the $J \times k$ matrix $w^O$ holds the hidden-to-output weights; the $J \times 1$ vector $b^I$ and the $k \times 1$ vector $b^O$ are the thresholds of the hidden units and the output-layer units respectively; the $J \times 1$ vectors $u^I$ and $v^I$ are the input and output of the hidden units; the $k \times 1$ vectors $u^O$ and $v^O$ are the input and output of the output-layer units; $f_1$ and $f_2$ are the tangent-sigmoid and logarithmic-sigmoid activation functions respectively; and the indices run over $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, J$, $n = 1, 2, \ldots, k$.
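For concreteness, a minimal sketch of this forward pass in Python, assuming the usual tanh and logistic forms for the tangent- and logarithmic-sigmoid functions (the array shapes follow the definitions above):

```python
import numpy as np

def f1(u):
    # tangent sigmoid activation (tanh) for the hidden layer
    return np.tanh(u)

def f2(u):
    # logarithmic sigmoid activation (logistic) for the output layer
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, wI, bI, wO, bO):
    """Forward pass of the three-layer network.
    x: (M,) input; wI: (M, J); bI: (J,); wO: (J, k); bO: (k,).
    Returns vO, the k-dimensional network output."""
    uI = wI.T @ x + bI    # hidden-layer input u^I, shape (J,)
    vI = f1(uI)           # hidden-layer output v^I
    uO = wO.T @ vI + bO   # output-layer input u^O, shape (k,)
    return f2(uO)         # network output v^O in (0, 1)^k
```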
Preferably, the target of the neural network learning is to minimize the pseudo-loss $\varepsilon_t$; this is the problem of optimizing $\varepsilon_t$ given $D_t$ and $q_t$, where $D_t$ is the distribution over the training samples and $q_t$ is the label weighting function;

the neural network weights and thresholds are updated by the gradient method; the iteration formula is

$w(p+1) = w(p) - \eta\, \frac{\partial \varepsilon_t}{\partial w}, \qquad b(p+1) = b(p) - \eta\, \frac{\partial \varepsilon_t}{\partial b}$ (1)

where $\eta$ in formula (1) is the learning rate, and $w(p)$ and $b(p)$ are the weights and thresholds of the neural network at the $p$-th learning step;

in the formula for calculating $\varepsilon_t$,

$\varepsilon_t = \frac{1}{2} \sum_{i=1}^{N} D_t(i) \Big( 1 - h_t(x_i, y_i) + \sum_{y \neq y_i} q_t(i, y)\, h_t(x_i, y) \Big)$ (2)

since $y$ and $y_i$ cannot both equal the same output index $n$, each partial derivative in formula (2) is computed case by case by back-propagating through $f_2$ and $f_1$ with the chain rule (formulas (3)-(6)).

Formulas (1)-(6) constitute the basic learning algorithm of the neural network, where $f_1'(\cdot)$ and $f_2'(\cdot)$ denote the derivatives of $f_1(\cdot)$ and $f_2(\cdot)$ respectively.
Preferably, introducing a momentum term gives the improved neural network learning algorithm

$\Delta w(p+1) = -\eta\, \frac{\partial \varepsilon_t}{\partial w} + \alpha\, \Delta w(p)$ (7)

where $\alpha$ in formula (7) is the momentum factor, the meaning of the other variables is the same as in the basic algorithm, and the partial derivatives are computed as in the basic algorithm; the improved algorithm further increases the learning speed of the neural network.
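A minimal sketch of the resulting update step, assuming grad_w and grad_b (hypothetical names) hold the partial derivatives of the pseudo-loss from formulas (2)-(6):

```python
def momentum_step(w, b, grad_w, grad_b, dw_prev, db_prev, eta, alpha):
    """One weight/threshold update combining formulas (1) and (7).
    grad_w, grad_b: partials of the pseudo-loss w.r.t. w and b;
    dw_prev, db_prev: the previous increments Δw(p), Δb(p)."""
    dw = -eta * grad_w + alpha * dw_prev   # Δw(p+1) = -η ∂ε_t/∂w + α Δw(p)
    db = -eta * grad_b + alpha * db_prev
    return w + dw, b + db, dw, db
```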
Preferably, the AdaBoost algorithm for solving the k-class fault classification problem is as follows:
Input:
1. N training sample pairs $(x_1, y_1), \ldots, (x_N, y_N)$, where $x_i$ belongs to some domain or instance space $X$ and the label $y_i$ belongs to one of the $k$ classes, i.e. $y_i \in Y = \{1, \ldots, k\}$;
2. a distribution $D(i)$ over the samples $(x_i, y_i)$; initially the distribution values of all samples are assumed equal, all $1/N$;
3. a weak learning algorithm WeakLearn and the number of AdaBoost training rounds $T$.
Initialization: set the weight vector $w_{i,y}^1 = D(i)/(k-1)$ for $i = 1, 2, \ldots, N$ and $y \in Y - \{y_i\}$.
Do for $t = 1, 2, \ldots, T$:
1. for $i = 1, 2, \ldots, N$ set $W_i^t = \sum_{y \neq y_i} w_{i,y}^t$, the label weighting function $q_t(i, y) = w_{i,y}^t / W_i^t$ for $y \neq y_i$, and the distribution $D_t(i) = W_i^t / \sum_{i=1}^{N} W_i^t$;
2. call WeakLearn, passing it the distribution $D_t$ and the label weighting function $q_t$; it returns an estimate $h_t : X \times Y \to [0, 1]$;
3. calculate the pseudo-loss of $h_t$:
$\varepsilon_t = \frac{1}{2} \sum_{i=1}^{N} D_t(i) \Big( 1 - h_t(x_i, y_i) + \sum_{y \neq y_i} q_t(i, y)\, h_t(x_i, y) \Big)$ (8)
4. set $\beta_t = \varepsilon_t / (1 - \varepsilon_t)$;
5. calculate the new weight vectors $w_{i,y}^{t+1} = w_{i,y}^t\, \beta_t^{\frac{1}{2}\left(1 + h_t(x_i, y_i) - h_t(x_i, y)\right)}$.
Output: the final estimate for a new sample $x_0 \in X$ is obtained by weighted voting:
$h_f(x_0) = \arg\max_{y \in Y} \sum_{t=1}^{T} \left( \log \frac{1}{\beta_t} \right) h_t(x_0, y)$ (9)
From the above procedure it can be seen that when $x_i$ is a "hard" sample, $h_t(x_i, y_i)$ falls while $h_t(x_i, y)$ rises; since $\beta_t < 1$ when $\varepsilon_t < 1/2$, by step 5 $w_{i,y}^{t+1}$ rises, so $W_i^{t+1}$ rises and the corresponding weight distribution $D_{t+1}(i)$ increases with it; the label weighting function $q_t$ works on a similar principle.
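A compact sketch of this training loop in Python; weak_learn is a hypothetical stand-in for the neural-network WeakLearn above, and the sketch assumes $0 < \varepsilon_t < 1/2$ throughout:

```python
import numpy as np

def adaboost_m2(X, y, k, T, weak_learn):
    """AdaBoost.M2-style training loop.
    X: (N, M) samples; y: (N,) integer labels in 0..k-1;
    weak_learn(X, y, D, q) must return a hypothesis h(x, label) -> [0, 1].
    Returns the weak hypotheses and their vote weights log(1/beta_t)."""
    N = len(y)
    # initialization: w[i, y] = D(i)/(k-1) for y != y_i, with D(i) = 1/N
    w = np.full((N, k), 1.0 / (N * (k - 1)))
    w[np.arange(N), y] = 0.0
    hs, alphas = [], []
    for t in range(T):
        Wi = w.sum(axis=1)                 # step 1: W_i^t
        q = w / Wi[:, None]                # label weighting function q_t
        D = Wi / Wi.sum()                  # distribution D_t
        h = weak_learn(X, y, D, q)         # step 2: call WeakLearn
        H = np.array([[h(X[i], c) for c in range(k)] for i in range(N)])
        correct = H[np.arange(N), y]
        # step 3: pseudo-loss, formula (8); q[i, y_i] is already zero
        eps = 0.5 * np.sum(D * (1.0 - correct + (q * H).sum(axis=1)))
        beta = eps / (1.0 - eps)           # step 4
        # step 5: w^{t+1} = w^t * beta^{(1 + h(x_i,y_i) - h(x_i,y)) / 2}
        w = w * beta ** (0.5 * (1.0 + correct[:, None] - H))
        w[np.arange(N), y] = 0.0           # only labels y != y_i carry weight
        hs.append(h)
        alphas.append(np.log(1.0 / beta))
    return hs, alphas

def h_final(x, hs, alphas, k):
    """Final weighted-vote estimate h_f, formula (9)."""
    votes = [sum(a * h(x, c) for h, a in zip(hs, alphas)) for c in range(k)]
    return int(np.argmax(votes))
```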
Preferably, by controlling $D_t$ and $q_t$, the AdaBoost algorithm effectively makes the weak learner concentrate not only on the hard samples but also on the incorrect class labels that are hardest to rule out; the training error $\varepsilon$ of the final estimate $h_f$ will not exceed an upper bound, i.e.

$\varepsilon = \frac{1}{N} \sum_{i=1}^{N} [\![\, h_f(x_i) \neq y_i \,]\!] \;\le\; (k-1)\, 2^T \prod_{t=1}^{T} \sqrt{\varepsilon_t (1 - \varepsilon_t)}$ (10)

where the symbol $[\![\, h_f(x_i) \neq y_i \,]\!]$ in formula (10) equals 1 if $h_f(x_i) \neq y_i$ and 0 otherwise.

Letting $\gamma_t = 1/2 - \varepsilon_t$, the above formula can be rearranged as

$\varepsilon \le (k-1) \exp\Big( -2 \sum_{t=1}^{T} \gamma_t^2 \Big)$

The above result shows that as long as the error of every weak learner is less than 1/2, the training error of the final estimate $h_f$ declines exponentially to zero. Although this does not mean that the generalization error on test data must be small, Freund and Schapire showed experimentally that the generalization error tends to be much better than the theory implies.
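The exponential form can be verified in one line, using $\varepsilon_t(1-\varepsilon_t) = 1/4 - \gamma_t^2$ and the elementary bound $1 - z \le e^{-z}$:

```latex
\varepsilon \le (k-1)\,2^{T}\prod_{t=1}^{T}\sqrt{\varepsilon_t(1-\varepsilon_t)}
            =   (k-1)\prod_{t=1}^{T}\sqrt{1-4\gamma_t^{2}}
            \le (k-1)\prod_{t=1}^{T}e^{-2\gamma_t^{2}}
            =   (k-1)\exp\!\Big(-2\sum_{t=1}^{T}\gamma_t^{2}\Big)
```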
Preferably, in each round of AdaBoost training the weak learner itself learns for several cycles in batch mode, and the neural network obtained when learning completes serves as that round's weak fault classifier; after AdaBoost has trained T rounds in total, the T weak fault classifiers form the final integrated fault classifier by weighted voting, i.e. the fault classification result for a new sample $x_0 \in X$ is

$h_f(x_0) = \arg\max_{y \in Y} \sum_{t=1}^{T} \left( \log \frac{1}{\beta_t} \right) h_t(x_0, y)$
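Continuing the sketch after the algorithm above, classifying a new sample (all names hypothetical):

```python
# classify a new fault sample x0 with the integrated classifier h_f
label = h_final(x0, hs, alphas, k)
```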
Compared with the prior art, the beneficial effects of the present invention are: by applying the AdaBoost-based neural network ensemble method to fault diagnosis, simply training multiple ordinary neural networks and integrating their diagnostic results according to the AdaBoost algorithm, the generalization ability of the final fault classifier can be effectively improved and the shortcomings of a single neural network overcome. This provides a new way for the further application of neural network techniques in intelligent fault diagnosis.
Detailed description of the invention
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is the flow chart of the AdaBoost algorithm of the present invention.
Fig. 2 is the flow chart of engine gas-path component fault diagnosis in the embodiment of the present invention.
Fig. 3 shows the pseudo-loss of the weak fault classifiers in the embodiment of the present invention.
Fig. 4 shows the training error and error upper bound of the integrated fault classifier in the embodiment of the present invention.
Fig. 5 shows the misdiagnosis counts of the integrated fault classifier in the embodiment of the present invention.
Fig. 6 shows the class spacing of the test samples in the embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Figs. 1-6, the present invention provides a technical solution: a fault diagnosis method using a neural network ensemble, comprising establishing a fault classifier and neural network learning. The fault classifier built from three-layer feedforward weak learners, the basic and momentum-improved learning algorithms (formulas (1)-(7)), the AdaBoost algorithm for the k-class fault classification problem (formulas (8) and (9)), the training-error bound (formula (10)), and the batch-mode per-round training with weighted-vote integration are all as set forth in the Summary of the invention above and are not repeated here.
A concrete application of this embodiment is the gas-path component fault diagnosis problem of a twin-rotor turbojet engine. There are 8 fault feature parameters: the high- and low-pressure rotor speeds NH, NL; the high- and low-pressure compressor outlet pressures P2, Px; the high- and low-pressure turbine outlet pressures Py, P4; the high-pressure compressor outlet temperature T2; and the low-pressure turbine outlet temperature T4. All parameters are expressed as dimensionless relative deviations from the normal condition. There are 5 classes of engine gas-path faults: no fault (NON), low-pressure compressor fault (LP), high-pressure compressor fault (HP), low-pressure turbine fault (LT), and high-pressure turbine fault (HT).
There are 48 groups of training sample data (numbered 1-48) and 24 groups of test sample data (numbered 49-72). The three-layer feedforward neural network used as the weak fault learner has structure 8 × 8 × 5, with momentum factor α = 0.01 and learning rate η = 0.5. The neural network learns in batch mode; one pass over all training samples is one learning cycle. In each round of AdaBoost training the neural network learns for 30 cycles, and the resulting network is the corresponding weak fault classifier.
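Tying the embodiment's settings to the sketches above; train_network (a hypothetical helper standing in for the batch-mode learning of formulas (1)-(7)), the round count T, and the data arrays X_train, y_train are assumptions for illustration, not code disclosed by the patent:

```python
# embodiment configuration: 8 fault features, 5 fault classes (NON, LP, HP, LT, HT)
M, J, k = 8, 8, 5         # network structure 8 x 8 x 5
eta, alpha = 0.5, 0.01    # learning rate and momentum factor
cycles = 30               # batch-mode learning cycles per AdaBoost round
T = 30                    # AdaBoost rounds (assumed, per the discussion below)

def weak_learn(X, y, D, q):
    # one round's weak fault classifier: an 8x8x5 network trained for 30 cycles
    wI, bI, wO, bO = train_network(X, y, D, q, M, J, k, eta, alpha, cycles)
    return lambda x, label: forward(x, wI, bI, wO, bO)[label]

hs, alphas = adaboost_m2(X_train, y_train, k, T, weak_learn)
```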
In the pseudo-loss plot of the weak fault classifiers (Fig. 3), the solid line is the pseudo-loss εt of each weak fault classifier when the neural network uses the basic learning algorithm; it can be seen that εt decreases continually. This shows that, through selective learning of the previous round's "hard" samples and of the incorrect class labels that are hardest to rule out, AdaBoost rapidly improves the precision of the weak learner; after 30 rounds of training, εt falls to a very small value, so the training effect is obvious. The dotted line in Fig. 3 is the pseudo-loss εt of the weak fault classifiers when the neural network uses the momentum learning algorithm; the values are smaller than the corresponding values under the basic algorithm, which proves that the improved algorithm enhances the learning effect of the neural network.
From the variation of the training error ε of the integrated fault classifier with the number of AdaBoost training rounds (Fig. 4) it can be seen that by round 13, although the training error of each individual weak fault classifier is not zero, the training error of the integrated fault classifier formed from them has already dropped to zero. This experimental result shows that by integrating the diagnosis results of single neural networks, AdaBoost effectively improves training precision. The dotted line in the figure is the upper bound given by formula (10); its variation trend is similar to that of ε.
Furthermore, from the misdiagnosis counts of the integrated fault classifier on the training samples and the test samples (Fig. 5) it can be seen that as the number of iterations t increases, both misdiagnosis counts show a continually decreasing trend, dropping to zero at t = 13 and t = 14 respectively and remaining stable thereafter. AdaBoost not only effectively improves the training precision of the final fault classifier but also gives it good generalization performance. For comparison, with a single neural network fault classifier of the same structure, even when the network is trained until all training samples are classified correctly, misclassified test samples always appear: typically sample No. 64, which should be an LT-class fault, is misclassified as NON, and in worse cases samples No. 63 and No. 64 (both LT-class faults) are simultaneously misdiagnosed as NON. Even if the number of hidden layers and hidden units of the single neural network is optimized, misdiagnosis of the test samples is hard to avoid.
After the misdiagnosis count of the AdaBoost fault classifier on the training samples drops to zero, its misdiagnosis count on the test samples continues to decrease. This means that after the training error reaches zero, generalization performance can still improve as iteration continues, without the overfitting phenomenon that ordinary neural network fault classifiers readily exhibit; this is the superiority of the AdaBoost algorithm. The experimental result of Fig. 6 explains this phenomenon from the viewpoint of the class spacing. The class spacing of a sample point $(x_0, y_0)$ is defined as the weighted vote for the correct class minus the maximum weighted vote for any other class, normalized by the total vote weight (formula (11)).
Its value lies in [-1, +1], and the class spacing is greater than 0 exactly when $h_f$ classifies the sample correctly; the class spacing can be regarded as the confidence of the prediction of $h_f$: the larger the value, the more credible the prediction.
The class spacing values of the 24 test samples were calculated by formula (11) at three different iteration counts and sorted from small to large for display, which does not affect the conclusion. As seen from the figure, as the number of AdaBoost iterations increases, the average class spacing keeps growing and the number of samples with negative class spacing (misclassified) keeps shrinking; at t = 15 the class spacing values of all 24 test samples are greater than zero, with no misclassified sample. Precisely because the class spacing values keep improving as the number of iterations increases, generalization performance continues to improve even after the training error has reached its minimum.
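A sketch of the class-spacing computation of formula (11), reusing the hypothetical hs and alphas from the AdaBoost sketch above:

```python
def class_spacing(x, y_true, hs, alphas, k):
    """Normalized margin: in [-1, +1], positive iff h_f classifies x correctly."""
    votes = [sum(a * h(x, c) for h, a in zip(hs, alphas)) for c in range(k)]
    rival = max(v for c, v in enumerate(votes) if c != y_true)
    return (votes[y_true] - rival) / sum(alphas)
```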
In the description of this specification, reference to the terms "one embodiment", "example", "specific example", etc. means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the present invention disclosed above are intended only to help illustrate the present invention. The preferred embodiments do not describe all the details, nor do they limit the invention to the specific embodiments described. Obviously, many modifications and variations can be made according to the content of this specification. These embodiments were chosen and specifically described in order to better explain the principle and practical application of the present invention, so that those skilled in the art can better understand and utilize the present invention. The present invention is limited only by the claims and their full scope and equivalents.

Claims (6)

1. A fault diagnosis method using a neural network ensemble, comprising establishing a fault classifier and neural network learning, characterized in that: the fault classifier is built from three-layer feedforward neural networks used as the weak learners of AdaBoost; the input of each neural network is an M-dimensional vector and the input layer has M neurons; the hidden layer has J neurons; for a k-class fault diagnosis problem the output layer has k neurons and the output of the network is a k-dimensional vector;
under the input $x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,M}]$, the input/output relation of the neural network is
$u^I = (w^I)^T x_i + b^I, \quad v^I = f_1(u^I), \quad u^O = (w^O)^T v^I + b^O, \quad v^O = f_2(u^O)$
where the $M \times J$ matrix $w^I$ holds the input-to-hidden weights and the $J \times k$ matrix $w^O$ holds the hidden-to-output weights; the $J \times 1$ vector $b^I$ and the $k \times 1$ vector $b^O$ are the thresholds of the hidden units and the output-layer units respectively; the $J \times 1$ vectors $u^I$ and $v^I$ are the input and output of the hidden units; the $k \times 1$ vectors $u^O$ and $v^O$ are the input and output of the output-layer units; $f_1$ and $f_2$ are the tangent-sigmoid and logarithmic-sigmoid activation functions respectively; and the indices run over $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, J$, $n = 1, 2, \ldots, k$.
2. The fault diagnosis method using a neural network ensemble according to claim 1, characterized in that: the target of the neural network learning is to minimize the pseudo-loss $\varepsilon_t$; this is the problem of optimizing $\varepsilon_t$ given $D_t$ and $q_t$, where $D_t$ is the distribution over the training samples and $q_t$ is the label weighting function;
the neural network weights and thresholds are updated by the gradient method; the iteration formula is
$w(p+1) = w(p) - \eta\, \frac{\partial \varepsilon_t}{\partial w}, \qquad b(p+1) = b(p) - \eta\, \frac{\partial \varepsilon_t}{\partial b}$ (1)
where $\eta$ in formula (1) is the learning rate, and $w(p)$ and $b(p)$ are the weights and thresholds of the neural network at the $p$-th learning step;
in the formula for calculating $\varepsilon_t$,
$\varepsilon_t = \frac{1}{2} \sum_{i=1}^{N} D_t(i) \Big( 1 - h_t(x_i, y_i) + \sum_{y \neq y_i} q_t(i, y)\, h_t(x_i, y) \Big)$ (2)
since $y$ and $y_i$ cannot both equal the same output index $n$, each partial derivative in formula (2) is computed case by case by back-propagating through $f_2$ and $f_1$ with the chain rule (formulas (3)-(6));
formulas (1)-(6) constitute the basic learning algorithm of the neural network, where $f_1'(\cdot)$ and $f_2'(\cdot)$ denote the derivatives of $f_1(\cdot)$ and $f_2(\cdot)$ respectively.
3. The fault diagnosis method using a neural network ensemble according to claim 2, characterized in that: introducing a momentum term gives the improved neural network learning algorithm
$\Delta w(p+1) = -\eta\, \frac{\partial \varepsilon_t}{\partial w} + \alpha\, \Delta w(p)$ (7)
where $\alpha$ in formula (7) is the momentum factor, the meaning of the other variables is the same as in the basic algorithm, and the partial derivatives are computed as in the basic algorithm; the improved algorithm further increases the learning speed of the neural network.
4. The fault diagnosis method using a neural network ensemble according to claim 3, characterized in that the AdaBoost algorithm for solving the k-class fault classification problem is as follows:
Input:
1. N training sample pairs $(x_1, y_1), \ldots, (x_N, y_N)$, where $x_i$ belongs to some domain or instance space $X$ and the label $y_i$ belongs to one of the $k$ classes, i.e. $y_i \in Y = \{1, \ldots, k\}$;
2. a distribution $D(i)$ over the samples $(x_i, y_i)$; initially the distribution values of all samples are assumed equal, all $1/N$;
3. a weak learning algorithm WeakLearn and the number of AdaBoost training rounds $T$.
Initialization: set the weight vector $w_{i,y}^1 = D(i)/(k-1)$ for $i = 1, 2, \ldots, N$ and $y \in Y - \{y_i\}$.
Do for $t = 1, 2, \ldots, T$:
1. for $i = 1, 2, \ldots, N$ set $W_i^t = \sum_{y \neq y_i} w_{i,y}^t$, the label weighting function $q_t(i, y) = w_{i,y}^t / W_i^t$ for $y \neq y_i$, and the distribution $D_t(i) = W_i^t / \sum_{i=1}^{N} W_i^t$;
2. call WeakLearn, passing it the distribution $D_t$ and the label weighting function $q_t$; it returns an estimate $h_t : X \times Y \to [0, 1]$;
3. calculate the pseudo-loss of $h_t$:
$\varepsilon_t = \frac{1}{2} \sum_{i=1}^{N} D_t(i) \Big( 1 - h_t(x_i, y_i) + \sum_{y \neq y_i} q_t(i, y)\, h_t(x_i, y) \Big)$ (8)
4. set $\beta_t = \varepsilon_t / (1 - \varepsilon_t)$;
5. calculate the new weight vectors $w_{i,y}^{t+1} = w_{i,y}^t\, \beta_t^{\frac{1}{2}\left(1 + h_t(x_i, y_i) - h_t(x_i, y)\right)}$.
Output: the final estimate for a new sample $x_0 \in X$ is obtained by weighted voting:
$h_f(x_0) = \arg\max_{y \in Y} \sum_{t=1}^{T} \left( \log \frac{1}{\beta_t} \right) h_t(x_0, y)$ (9)
From the above procedure it can be seen that when $x_i$ is a "hard" sample, $h_t(x_i, y_i)$ falls while $h_t(x_i, y)$ rises; since $\beta_t < 1$ when $\varepsilon_t < 1/2$, by step 5 $w_{i,y}^{t+1}$ rises, so $W_i^{t+1}$ rises and the corresponding weight distribution $D_{t+1}(i)$ increases with it; the label weighting function $q_t$ works on a similar principle.
5. The fault diagnosis method using a neural network ensemble according to claim 4, characterized in that: by controlling $D_t$ and $q_t$, the AdaBoost algorithm effectively makes the weak learner concentrate not only on the hard samples but also on the incorrect class labels that are hardest to rule out; the training error $\varepsilon$ of the final estimate $h_f$ will not exceed an upper bound, i.e.
$\varepsilon = \frac{1}{N} \sum_{i=1}^{N} [\![\, h_f(x_i) \neq y_i \,]\!] \;\le\; (k-1)\, 2^T \prod_{t=1}^{T} \sqrt{\varepsilon_t (1 - \varepsilon_t)}$ (10)
where the symbol $[\![\, h_f(x_i) \neq y_i \,]\!]$ in formula (10) equals 1 if $h_f(x_i) \neq y_i$ and 0 otherwise;
letting $\gamma_t = 1/2 - \varepsilon_t$, the above formula can be rearranged as
$\varepsilon \le (k-1) \exp\Big( -2 \sum_{t=1}^{T} \gamma_t^2 \Big)$
The above result shows that as long as the error of every weak learner is less than 1/2, the training error of the final estimate $h_f$ declines exponentially to zero; although this does not mean that the generalization error on test data must be small, Freund and Schapire showed experimentally that the generalization error tends to be much better than the theory implies.
6. The fault diagnosis method using a neural network ensemble according to claim 5, characterized in that: in each round of AdaBoost training the weak learner itself learns for several cycles in batch mode, and the neural network obtained when learning completes serves as that round's weak fault classifier; after AdaBoost has trained T rounds in total, the T weak fault classifiers form the final integrated fault classifier by weighted voting, i.e. the fault classification result for a new sample $x_0 \in X$ is
$h_f(x_0) = \arg\max_{y \in Y} \sum_{t=1}^{T} \left( \log \frac{1}{\beta_t} \right) h_t(x_0, y)$

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190611