CN107688825A - An improved ensemble weighted extreme learning machine fault diagnosis method for wastewater treatment - Google Patents
- Authority
- CN
- China
- Legal status (an assumption by Google, not a legal conclusion): Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24—Classification techniques
Abstract
The invention discloses an improved ensemble weighted extreme learning machine fault diagnosis method for wastewater treatment, comprising: S1, for the base classifier, assigning the initial weights of the weighted extreme learning machine using an assignment formula that favors minority-class samples; S2, training the base classifiers; S3, proposing a new weight update formula for the base classifiers of the ensemble algorithm, taking the weighted extreme learning machine as the base classifier, integrating multiple base classifiers with Adaboost iteration, and establishing the improved wastewater fault diagnosis model; S4, inputting the sample data generated during the wastewater treatment process, setting the number T of base classifiers of the ensemble algorithm, the optimal kernel width γ of the base classifier, and the corresponding optimal regularization coefficient C, establishing the fault diagnosis model of the wastewater treatment system, and testing its performance. The invention can classify imbalanced data with multiple classes, improves the classification performance on imbalanced data, especially the classification accuracy of the minority classes, and effectively increases the accuracy of fault diagnosis in the wastewater treatment process.
Description
Technical field
The present invention relates to the technical field of wastewater treatment fault diagnosis, and in particular to an improved ensemble weighted extreme learning machine fault diagnosis method for wastewater treatment.
Background technology
Wastewater treatment is a complicated biochemical process with many influencing factors, and wastewater treatment plants find it difficult to maintain long-term stable operation. Faults easily cause problems such as substandard effluent quality, increased operating costs, and secondary environmental pollution, so the operating state of the plant must be monitored and faults must be diagnosed and handled in time.
Fault diagnosis in the wastewater treatment process is essentially a pattern recognition problem, and the classification usually has to cope with the imbalance of the wastewater data set. Traditional machine learning methods are easily biased toward the classification accuracy of the majority classes, whereas in practice the classification accuracy of the minority classes, i.e. the fault classes, matters more. Discovering faults promptly and accurately can greatly reduce the losses of a wastewater treatment plant and, in turn, improve its operating efficiency.
Summary of the invention
Aiming at the fault diagnosis problem of wastewater treatment plants, the present invention provides an improved ensemble weighted extreme learning machine fault diagnosis method. The method introduces the imbalanced classification evaluation index G-mean into an Adaboost ensemble classification algorithm that takes the weighted extreme learning machine as the base classifier. Applied to fault diagnosis in the wastewater treatment process, it can classify imbalanced data with multiple classes, improves the classification performance on imbalanced data, especially the classification accuracy of the minority classes, and effectively increases the accuracy of fault diagnosis in the wastewater treatment process.
To achieve the above object, the technical scheme provided by the present invention is an improved ensemble weighted extreme learning machine fault diagnosis method for wastewater treatment, comprising the following steps:
S1. For the base classifier, assign the initial weights of the weighted extreme learning machine using an assignment formula that favors minority-class samples;
S2. Train the base classifiers: compute the recall and the performance evaluation index G-mean of the previous base classifier, and use the G-mean-based initial weight matrix update formula to adjust the weight matrix of the next weighted extreme learning machine base classifier and build the base classifier model. The procedure is as follows:
S2.1. Given the wastewater sample set {(x1,y1),(x2,y2),…,(xi,yi),…,(xN,yN)}, where xi∈X is the attribute vector of the i-th sample, yi is the class label of the i-th sample, N is the total number of samples, and yi∈Y={1,2,…,k,…,K}, with k denoting the k-th class and K the total number of classes; set the number of base classifiers of the ensemble algorithm and denote it T;
S2.2. Train on the training samples with the weighted kernel extreme learning machine as the base classifier to obtain the trained model ht. For the t-th base classifier ht, first compute the recall of every class, R1,R2,…,Rk,…,RK; then count the samples of each class, denoting the count of class k by nk, and record the classification result A(xi) of each sample, with A(xi)=+1 when the sample is classified correctly and A(xi)=−1 when it is misclassified; finally compute G_mean=(R1·R2…RK)^(1/K);
S2.3. If G_mean ≤ 0.5, exit the iteration;
S2.4. Compute the weight λt of the t-th base classifier according to the base-classifier weight calculation formula. The smaller G_mean is, the smaller λt is: a larger training error means the t-th base classifier accounts for a smaller proportion of the whole ensemble, and vice versa;
S2.5. Adjust the sample weight distribution Dt+1 for the next round of iteration; the update rule of Dt+1 is as follows:
S2.6. Let t = t+1; return to S2.2 if t < T, otherwise terminate.
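The recall and G-mean computation of steps S2.2 to S2.4 can be sketched in Python as follows. The patent's exact formulas for λt and for the Dt+1 update are rendered only as images, so the AdaBoost-style log-odds form of λt below is an assumption for illustration, not the patent's formula:

```python
import numpy as np

def g_mean_and_weight(y_true, y_pred, n_classes):
    """Per-class recall and G-mean (step S2.2) plus a base-classifier
    weight lambda_t (step S2.4).  The log-odds form of lambda_t is an
    assumed AdaBoost-style stand-in for the patent's image-only formula."""
    recalls = []
    for k in range(1, n_classes + 1):        # classes are labelled 1..K
        mask = y_true == k
        recalls.append(float(np.mean(y_pred[mask] == k)))
    g_mean = float(np.prod(recalls) ** (1.0 / n_classes))
    if g_mean <= 0.5:                        # step S2.3: stop iterating
        return g_mean, None
    lam = 0.5 * np.log(g_mean / (1.0 - g_mean))
    return g_mean, lam
```

A base classifier with G_mean below 0.5 is rejected (S2.3); a larger G-mean yields a larger λt, matching the monotonicity stated in S2.4.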
S3. Propose a new base-classifier weight update formula for the ensemble algorithm, take the weighted extreme learning machine as the base classifier, integrate multiple base classifiers with Adaboost iteration, and establish the improved wastewater fault diagnosis model. The procedure is as follows:
S3.1. Set the number of base classifiers of the ensemble algorithm and denote it T;
S3.2. According to the weight initialization method, determine the initial weight distribution D1(i) of sample xi, i=1,2,…,N;
S3.3. Train the T base classifiers by the method of S2, and compute the weight of each base classifier according to the base-classifier weight update formula;
S3.4. Integrate the T base classifiers to obtain the wastewater fault diagnosis model:
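The combination step S3.4 can be sketched as a weighted vote. The argmax-over-weighted-votes rule used here is the standard AdaBoost.M1 combination and is an assumption, since the patent's own combination formula is an image:

```python
import numpy as np

def ensemble_predict(classifiers, weights, X, n_classes):
    """Combine T base classifiers h_t (callables returning labels 1..K)
    with their weights lambda_t by weighted voting (step S3.4)."""
    votes = np.zeros((len(X), n_classes))
    for h, lam in zip(classifiers, weights):
        pred = h(X)                          # predicted labels in 1..K
        for k in range(1, n_classes + 1):
            votes[:, k - 1] += lam * (pred == k)
    return votes.argmax(axis=1) + 1          # back to 1..K labelling
```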
S4. Input the sample data generated during the wastewater treatment process, set the number T of base classifiers of the ensemble algorithm, set the optimal kernel width γ of the base classifier and the corresponding optimal regularization coefficient C, establish the fault diagnosis model of the wastewater treatment system, and carry out performance testing.
In step S1 there are two weight initialization schemes to choose from. One is the automatic weighting scheme, in which W1 denotes the first scheme and nk is the number of training samples whose class is k.
The idea of the other weight initialization scheme is to push the minority-to-majority class ratio toward 0.618:1; in essence, this method trades some classification precision on the majority classes for higher recognition accuracy on the minority classes. W2 denotes this second weighting scheme.
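The two initialization schemes can be sketched as below. W1 is the plain automatic scheme Wii = 1/nk; the patent's W2 formula is an image, so the golden-ratio form here (majority classes, i.e. those with above-average counts, discounted by 0.618) follows the weighted-ELM literature and is an assumption:

```python
import numpy as np
from collections import Counter

def initial_weights(y, scheme="W1"):
    """Diagonal sample-weight matrix for the weighted ELM (step S1).

    W1: every sample gets 1/n_k.  W2 (assumed form): classes with an
    above-average sample count get the golden-ratio discount 0.618/n_k,
    pushing the minority-to-majority ratio toward 0.618:1."""
    counts = Counter(y)
    avg = np.mean(list(counts.values()))
    w = np.empty(len(y))
    for i, yi in enumerate(y):
        nk = counts[yi]
        if scheme == "W1" or nk <= avg:
            w[i] = 1.0 / nk
        else:
            w[i] = 0.618 / nk
    return np.diag(w)
```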
In step S2.2, the detailed modeling process of the weighted kernel extreme learning machine is as follows:
The extreme learning machine uses the single-hidden-layer feedforward neural network (SLFN) architecture. Given N wastewater treatment fault diagnosis training samples {(x1,y1),(x2,y2),…,(xN,yN)}, the standard SLFN output model with L hidden nodes is expressed as

$$\sum_{i=1}^{L} \beta_i \, G(w_i, b_i, x_j) = o_j, \quad j = 1, \dots, N$$

where βi is the output weight connecting the i-th hidden neuron to the output neurons, G is the activation function of the hidden neurons, wi is the input weight between the input layer and the i-th hidden neuron, bi is the bias of the i-th hidden neuron, and oj is the actual output value for the j-th sample;
For the N wastewater treatment fault diagnosis samples there exist (wi,bi) and βi such that the SLFN approximates the sample set with zero error, i.e. the single-hidden-layer feedforward network can fit it without error; this is denoted Hβ = T, where H is the hidden layer output matrix, β is the output weight matrix, and T is the target matrix of the output layer;
When the activation function G is differentiable, not all SLFN parameters need to be adjusted: the input weights wi and hidden-layer biases bi are selected at random when the network parameters are initialized and are kept constant during training. Training the SLFN is then equivalent to finding the least-squares solution of the linear system Hβ = T, which can be converted into the following optimization problem:
Minimize: ||Hβ − T||² and ||β||
where ξi = [ξi,1, …, ξi,K]^T is the error vector between the output values of the output nodes and the target values for training sample xi. The solution can be obtained through the Moore–Penrose generalized inverse H⁺ of the hidden-layer output matrix, β = H⁺T.
H⁺ can be computed efficiently by the orthogonal projection method: H⁺ = (H^T H)⁻¹H^T when H^T H is nonsingular, or H⁺ = H^T(H H^T)⁻¹ when H H^T is nonsingular. To give the resulting model better stability and generalization performance, a positive value 1/C is added to the diagonal of H^T H or H H^T when solving for β, which yields β = H^T(I/C + H H^T)⁻¹T, where I denotes the identity matrix, with the corresponding output function f(x) = h(x)H^T(I/C + H H^T)⁻¹T; or β = (I/C + H^T H)⁻¹H^T T, with the ELM output function f(x) = h(x)(I/C + H^T H)⁻¹H^T T accordingly.
To handle imbalanced data better, each sample is weighted so that samples belonging to different classes receive different weights, and the mathematical form of the above optimization problem is rewritten accordingly, where W is an N×N diagonal matrix whose main diagonal elements Wii each correspond to a sample xi; samples of different classes are automatically assigned different weights, and C is the regularization coefficient.
According to the KKT optimality conditions, a Lagrangian function is defined to solve this quadratic programming problem, which is equivalent to solving its dual, in which the Lagrange multipliers αi are all nonnegative; applying the corresponding KKT optimality constraints, the hidden-layer output weights are solved as β = H^T(I/C + W H H^T)⁻¹WT.
The weighting scheme uses the sample weight distribution Dt of step S2.5.
When the hidden-layer feature mapping h(x) is unknown, the kernel matrix is defined as Ω_ELM = H H^T, with Ω_ELM(i,j) = h(xi)·h(xj) = K(xi,xj), i = 1,2,…,N; j = 1,2,…,N.
The kernel function K(·,·) must satisfy Mercer's condition. Written in kernel form, the output no longer requires the hidden-layer feature mapping of the ELM to be known, and the number of hidden neurons L need not be set.
The final output equation of the kernel-based weighted extreme learning machine is f(x) = [K(x,x1),…,K(x,xN)](I/C + W Ω_ELM)⁻¹WT, where I is the identity matrix, C is the regularization coefficient, W is the weighting matrix, T is the output-layer target matrix, and Ω_ELM is the kernel matrix.
In summary, the flow of the kernel-based weighted extreme learning machine training algorithm is:
S2.2.1. Assign each sample a weight according to the weighting scheme and compute the weighting matrix W;
S2.2.2. Compute the kernel matrix Ω_ELM from the kernel function;
S2.2.3. Compute the network output f(x).
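Steps S2.2.1 to S2.2.3 can be sketched as follows, assuming a Gaussian kernel for K(·,·) (the patent only speaks of a kernel width γ) and one-vs-all ±1 targets; the coefficient solve follows the standard weighted kernel ELM algebra, and all function and variable names are ours:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian kernel K(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_kwelm(X, y, W, C, gamma, n_classes):
    """Kernel weighted ELM: solve alpha = (I/C + W*Omega)^-1 W T,
    where T is the one-hot (+1/-1) target matrix and Omega the kernel
    matrix; prediction is argmax over the kernel expansion."""
    N = len(X)
    T = -np.ones((N, n_classes))
    T[np.arange(N), np.asarray(y) - 1] = 1.0   # labels 1..K -> +1/-1 rows
    Omega = rbf_kernel(X, X, gamma)            # step S2.2.2
    alpha = np.linalg.solve(np.eye(N) / C + W @ Omega, W @ T)
    def predict(Xnew):                         # step S2.2.3: f(x)
        scores = rbf_kernel(Xnew, X, gamma) @ alpha
        return scores.argmax(axis=1) + 1
    return predict
```

The weighting matrix W comes from the schemes of step S1 (or the distribution Dt of step S2.5 during boosting).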
In step S4, the number of base classifiers of the integrated classifier is set to T = 20, and mesh (grid) parameter search is used to find the kernel width γ and regularization coefficient C of the base classifier that give the algorithm its best performance, where the search range of γ is {2^-18, 2^(-18+step), …, 2^20} and the search range of C is {2^-18, 2^(-18+step), …, 2^50}, with step = 0.5.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The method introduces the imbalanced classification evaluation index G-mean into the Adaboost ensemble classification algorithm that takes the weighted extreme learning machine as the base classifier, and proposes a novel weight update formula for the base classifiers of the ensemble algorithm.
2. The method is the first to propose a G-mean-based initial weight matrix update formula for modeling the weighted extreme learning machine.
3. Using the weighted extreme learning machine as the base classifier of the ensemble learning algorithm improves the learning speed of the classifier, enabling real-time and accurate monitoring of the operating state of the wastewater treatment plant.
4. The method improves the overall classification accuracy of the wastewater treatment fault diagnosis system and, in particular, the recognition accuracy of the fault classes, which is significant for fault early warning and timely handling in wastewater treatment systems.
5. The method effectively guarantees the stable operation of the wastewater treatment plant and the quality of the treated water, reducing secondary pollution.
Brief description of the drawings
Fig. 1 is the flow chart of the inventive method.
Embodiment
The invention will be further described below with reference to a specific embodiment.
Referring to Fig. 1, the ensemble weighted extreme learning machine fault diagnosis method for wastewater treatment provided by this embodiment comprises the following steps:
Step S1, initial weight assignment of the base classifier, the weighted extreme learning machine. There are two weight initialization schemes. One is the automatic weighting scheme, in which W1 denotes the first scheme and nk is the number of training samples whose class is k.
The idea of the other weight initialization scheme is to push the minority-to-majority class ratio toward 0.618:1; in essence, this method trades some classification precision on the majority classes for higher recognition accuracy on the minority classes. W2 denotes this second weighting scheme.
Step S2, train the base classifiers:
S2.1. Given the wastewater sample set {(x1,y1),(x2,y2),…,(xi,yi),…,(xN,yN)}, where xi∈X is the attribute vector of the i-th sample, yi is the class label of the i-th sample, N is the total number of samples, and yi∈Y={1,2,…,k,…,K}, with k denoting the k-th class and K the total number of classes; set the number of base classifiers of the ensemble algorithm and denote it T;
S2.2. Train on the training samples with the weighted kernel extreme learning machine as the base classifier to obtain the trained model ht. For the t-th base classifier ht, first compute the recall of every class, R1,R2,…,Rk,…,RK; then count the samples of each class, denoting the count of class k by nk, and record the classification result A(xi) of each sample, with A(xi)=+1 when the sample is classified correctly and A(xi)=−1 when it is misclassified; finally compute G_mean=(R1·R2…RK)^(1/K);
S2.3. If G_mean ≤ 0.5, exit the iteration;
S2.4. Compute the weight λt of the t-th base classifier according to the base-classifier weight calculation formula. The smaller G_mean is, the smaller λt is: a larger training error means the t-th base classifier accounts for a smaller proportion of the whole ensemble, and vice versa;
S2.5. Adjust the sample weight distribution Dt+1 for the next round of iteration; the update rule of Dt+1 is as follows:
S2.6. Let t = t+1; return to S2.2 if t < T, otherwise terminate.
Training of the base classifiers is now complete.
In the above step S2.2, the detailed modeling process of the weighted kernel extreme learning machine is as follows:
The extreme learning machine uses the single-hidden-layer feedforward neural network (SLFN) architecture. Given N wastewater treatment fault diagnosis training samples {(x1,y1),(x2,y2),…,(xN,yN)}, the standard SLFN output model with L hidden nodes is expressed as

$$\sum_{i=1}^{L} \beta_i \, G(w_i, b_i, x_j) = o_j, \quad j = 1, \dots, N$$

where βi is the output weight connecting the i-th hidden neuron to the output neurons, G is the activation function of the hidden neurons, wi is the input weight between the input layer and the i-th hidden neuron, bi is the bias of the i-th hidden neuron, and oj is the actual output value for the j-th sample;
For the N wastewater treatment fault diagnosis samples there exist (wi,bi) and βi such that the SLFN approximates the sample set with zero error, i.e. the single-hidden-layer feedforward network can fit it without error; this is denoted Hβ = T, where H is the hidden layer output matrix, β is the output weight matrix, and T is the target matrix of the output layer;
When the activation function G is differentiable, not all SLFN parameters need to be adjusted: the input weights wi and hidden-layer biases bi are selected at random when the network parameters are initialized and are kept constant during training. Training the SLFN is then equivalent to finding the least-squares solution of the linear system Hβ = T, which can be converted into the following optimization problem:
Minimize: ||Hβ − T||² and ||β||
where ξi = [ξi,1, …, ξi,K]^T is the error vector between the output values of the output nodes and the target values for training sample xi. The solution can be obtained through the Moore–Penrose generalized inverse H⁺ of the hidden-layer output matrix, β = H⁺T.
H⁺ can be computed efficiently by the orthogonal projection method: H⁺ = (H^T H)⁻¹H^T when H^T H is nonsingular, or H⁺ = H^T(H H^T)⁻¹ when H H^T is nonsingular. To give the resulting model better stability and generalization performance, a positive value 1/C is added to the diagonal of H^T H or H H^T when solving for β, which yields β = H^T(I/C + H H^T)⁻¹T, where I denotes the identity matrix, with the corresponding output function f(x) = h(x)H^T(I/C + H H^T)⁻¹T; or β = (I/C + H^T H)⁻¹H^T T, with the ELM output function f(x) = h(x)(I/C + H^T H)⁻¹H^T T accordingly.
To handle imbalanced data better, each sample is weighted so that samples belonging to different classes receive different weights, and the mathematical form of the above optimization problem is rewritten accordingly, where W is an N×N diagonal matrix whose main diagonal elements Wii each correspond to a sample xi; samples of different classes are automatically assigned different weights, and C is the regularization coefficient.
According to the KKT optimality conditions, a Lagrangian function is defined to solve this quadratic programming problem, which is equivalent to solving its dual, in which the Lagrange multipliers αi are all nonnegative; applying the corresponding KKT optimality constraints, the hidden-layer output weights are solved as β = H^T(I/C + W H H^T)⁻¹WT.
The weighting scheme uses the sample weight distribution Dt of step S2.5.
When the hidden-layer feature mapping h(x) is unknown, the kernel matrix is defined as Ω_ELM = H H^T, with Ω_ELM(i,j) = h(xi)·h(xj) = K(xi,xj), i = 1,2,…,N; j = 1,2,…,N.
The kernel function K(·,·) must satisfy Mercer's condition. Written in kernel form, the output no longer requires the hidden-layer feature mapping of the ELM to be known, and the number of hidden neurons L need not be set.
The final output equation of the kernel-based weighted extreme learning machine is f(x) = [K(x,x1),…,K(x,xN)](I/C + W Ω_ELM)⁻¹WT, where I is the identity matrix, C is the regularization coefficient, W is the weighting matrix, T is the output-layer target matrix, and Ω_ELM is the kernel matrix.
In summary, the flow of the kernel-based weighted extreme learning machine training algorithm is:
S2.2.1. Assign each sample a weight according to the weighting scheme and compute the weighting matrix W;
S2.2.2. Compute the kernel matrix Ω_ELM from the kernel function;
S2.2.3. Compute the network output f(x).
Step S3, propose a new base-classifier weight update formula for the ensemble algorithm, take the weighted extreme learning machine as the base classifier, integrate multiple base classifiers with Adaboost iteration, and establish the improved wastewater fault diagnosis model. The procedure is as follows:
S3.1. Set the number of base classifiers of the ensemble algorithm and denote it T;
S3.2. According to the weight initialization method, determine the initial weight distribution D1(i) of sample xi, i=1,2,…,N;
S3.3. Train the T base classifiers by the method of S2, and compute the weight of each base classifier according to the base-classifier weight update formula;
S3.4. Integrate the T base classifiers to obtain the wastewater fault diagnosis model.
The modeling of the wastewater fault diagnosis model is now complete.
Step S4, set the number of base classifiers of the integrated classifier to T = 20, and use mesh (grid) parameter search to find the kernel width γ and regularization coefficient C of the base classifier that give the algorithm its best performance. The search range of γ is {2^-18, 2^(-18+step), …, 2^20} and the search range of C is {2^-18, 2^(-18+step), …, 2^50}, with step = 0.5.
The experimental data come from the University of California (UCI) database and are daily monitoring data of a wastewater treatment plant. Each sample of the data set has 38 dimensions, and 380 records have complete values for all attributes; the monitored water body has 13 states in total, each denoted by a number. To simplify the classification task, the samples are grouped into 4 major classes according to the nature of the sample classes, as shown in Table 1 below. In Table 1, class 1 is the normal state, class 2 is the normal state with performance above average, class 3 is the normal state with low influent flow, and class 4 covers the fault situations: secondary settling tank faults, abnormal conditions caused by heavy rain, and faults caused by solids overload and similar causes. Class 1, the normal state, has relatively many samples and is the majority class, while classes 3 and 4 have few samples and are minority classes. After this regrouping of the data categories, the distribution ratio of the four sample classes is 39.6:14.6:8:1. Parameter optimization shows that the optimal parameters for the two weight initialization schemes used in this embodiment are, respectively, W1: (C = 2^26.5, γ = 2^13) and W2: (C = 2^27.5, γ = 2^13.5).
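The mesh parameter search that produces such optima can be sketched as a plain double loop over the exponent grids of step S4; `train_eval` is a placeholder for training a base classifier with a given (γ, C) and returning a score such as G-mean:

```python
import numpy as np

def grid_search(train_eval, step=0.5):
    """Mesh search over gamma in 2^[-18, 20] and C in 2^[-18, 50],
    stepping the exponent by 0.5 as in step S4.  train_eval(gamma, C)
    must return a score to maximize."""
    best = (None, None, -np.inf)
    for g_exp in np.arange(-18, 20 + step, step):
        for c_exp in np.arange(-18, 50 + step, step):
            score = train_eval(2.0 ** g_exp, 2.0 ** c_exp)
            if score > best[2]:
                best = (2.0 ** g_exp, 2.0 ** c_exp, score)
    return best
```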
Following the above steps, the simulation first uses 3/4 of the wastewater sample set, i.e. 285 sample groups in total, as the training set. After the final classification model is produced by ensemble iteration with the different weight initialization schemes, the remaining samples are fed into the classification model as the test set to obtain the final classification results, i.e. the wastewater treatment fault diagnosis results. AdaG1WELM denotes the algorithm with the W1 initial weight scheme, and AdaG2WELM the algorithm with the W2 initial weight scheme.
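The 3/4 : 1/4 split (285 of the 380 samples for training) can be sketched as below; `predict_fn` is a placeholder standing in for the full AdaG1WKELM/AdaG2WKELM pipeline:

```python
import numpy as np

def split_and_score(X, y, predict_fn, train_frac=0.75, seed=0):
    """Random 3/4 train / 1/4 test split and test accuracy.
    predict_fn(Xtr, ytr, Xte) is a placeholder for training the ensemble
    on the split and predicting the test labels."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr = int(round(train_frac * len(X)))
    tr, te = idx[:n_tr], idx[n_tr:]
    y_pred = predict_fn(X[tr], y[tr], X[te])
    return float(np.mean(y_pred == y[te]))
```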
Table 1. Sample class and sample count distribution
Table 2. Comparison with traditional classification algorithms
Table 3. Comparison with current similar algorithms
Tables 2 and 3 give, respectively, the comparative experimental results of the algorithms used in the present invention (AdaG1WKELM and AdaG2WKELM) against traditional classification algorithms and against current similar research algorithms. The traditional classification algorithms include the back-propagation neural network (BPNN), support vector machine (SVM), relevance vector machine (RVM), fast relevance vector machine (Fast RVM), extreme learning machine (ELM), and the kernel-based weighted extreme learning machine (K-WELM); the current similar research algorithms include B-PCA-CBPNN*, WELM*, and Pre-processed Fast RVM*. R1-acc, R2-acc, R3-acc, and R4-acc denote the classification accuracy of each class, Total acc denotes the overall classification accuracy, G-mean = (R1×R2×R3×R4)^(1/4), and Training time denotes the model training time. As the tables show, although AdaG1WKELM and AdaG2WKELM have lower classification accuracy on the majority-class samples than the other algorithms, their accuracy on the minority-class samples is higher, especially on the fourth class, i.e. the fault class, and both the overall G-mean and the overall accuracy are the largest. It follows that the algorithms used here are well suited to classifying imbalanced data sets. In summary, the G-mean-based ensemble extreme learning machine fault diagnosis method used here can make accurate judgments on the faults likely to occur in the wastewater treatment process and strengthens the ability of wastewater treatment plants to handle faults.
The embodiment described above is only a preferred embodiment of the invention, and the scope of implementation of the invention is not limited thereto; any change made according to the shape and principle of the present invention shall be covered by the protection scope of the present invention.
Claims (4)
1. An improved ensemble weighted extreme learning machine fault diagnosis method for wastewater treatment, characterized by comprising the following steps:
S1. For the base classifier, assign the initial weights of the weighted extreme learning machine using an assignment formula that favors minority-class samples;
S2. Train the base classifiers: compute the recall and the performance evaluation index G-mean of the previous base classifier, and use the G-mean-based initial weight matrix update formula to adjust the weight matrix of the next weighted extreme learning machine base classifier and build the base classifier model, the procedure being as follows:
S2.1. Given the wastewater sample set {(x1,y1),(x2,y2),…,(xi,yi),…,(xN,yN)}, where xi∈X is the attribute vector of the i-th sample, yi is the class label of the i-th sample, N is the total number of samples, and yi∈Y={1,2,…,k,…,K}, with k denoting the k-th class and K the total number of classes; set the number of base classifiers of the ensemble algorithm and denote it T;
S2.2. Train on the training samples with the weighted kernel extreme learning machine as the base classifier to obtain the trained model ht. For the t-th base classifier ht, first compute the recall of every class, R1,R2,…,Rk,…,RK; then count the samples of each class, denoting the count of class k by nk, and record the classification result A(xi) of each sample, with A(xi)=+1 when the sample is classified correctly and A(xi)=−1 when it is misclassified; finally compute G_mean=(R1·R2…RK)^(1/K);
S2.3. If G_mean ≤ 0.5, exit the iteration;
S2.4. Compute the weight λt of the t-th base classifier according to the base-classifier weight calculation formula. The smaller G_mean is, the smaller λt is: a larger training error means the t-th base classifier accounts for a smaller proportion of the whole ensemble, and vice versa;
S2.5. Adjust the sample weight distribution Dt+1 for the next round of iteration; the update rule of Dt+1 is as follows:
S2.6. Let t = t+1; return to S2.2 if t < T, otherwise terminate;
S3. Propose a new base-classifier weight update formula for the ensemble algorithm, take the weighted extreme learning machine as the base classifier, integrate multiple base classifiers with Adaboost iteration, and establish the improved wastewater fault diagnosis model, the procedure being as follows:
S3.1. Set the number of base classifiers of the ensemble algorithm and denote it T;
S3.2. According to the weight initialization method, determine the initial weight distribution D1(i) of sample xi, i=1,2,…,N;
S3.3. Train the T base classifiers by the method of S2, and compute the weight of each base classifier according to the base-classifier weight update formula;
S3.4. Integrate the T base classifiers to obtain the wastewater fault diagnosis model:
S4. Input the sample data generated during the wastewater treatment process, set the number T of base classifiers of the ensemble algorithm, set the optimal kernel width γ of the base classifier and the corresponding optimal regularization coefficient C, establish the fault diagnosis model of the wastewater treatment system, and carry out performance testing.
2. The improved ensemble weighted extreme learning machine fault diagnosis method for wastewater treatment according to claim 1, characterized in that: in step S1 there are two weight initialization schemes to choose from. One is the automatic weighting scheme, in which W1 denotes the first scheme and nk is the number of training samples whose class is k. The idea of the other weight initialization scheme is to push the minority-to-majority class ratio toward 0.618:1; in essence, this method trades some classification precision on the majority classes for higher recognition accuracy on the minority classes. W2 denotes this second weighting scheme.
3. The improved ensemble weighted extreme learning machine fault diagnosis method for wastewater treatment according to claim 1, characterized in that in step S2.2 the detailed modeling process of the weighted kernel extreme learning machine is as follows:
The extreme learning machine uses the single-hidden-layer feedforward neural network (SLFN) architecture. Given N wastewater treatment fault diagnosis training samples {(x1,y1),(x2,y2),…,(xN,yN)}, the standard SLFN output model with L hidden nodes is expressed as follows:
$$\sum_{i=1}^{L} \beta_i \, G(w_i, b_i, x_j) = o_j, \quad j = 1, \dots, N$$
where β_i denotes the output weight connecting the i-th hidden neuron to the output neurons, G is the activation function of the hidden-layer neurons, w_i denotes the input weights connecting the input layer to the i-th hidden neuron, b_i denotes the bias of the i-th hidden neuron, and o_j is the actual output value of the j-th output neuron;
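The forward pass above can be illustrated with a small NumPy sketch; the toy dimensions and the sigmoid activation are choices made for the example, not fixed by the claim:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy diagnosis data: N = 5 samples, d = 3 features, L = 4 hidden neurons.
N, d, L = 5, 3, 4
X = rng.normal(size=(N, d))

# Random input weights w_i and biases b_i (never retrained in ELM).
w = rng.normal(size=(d, L))
b = rng.normal(size=L)

def G(z):                       # sigmoid activation function
    return 1.0 / (1.0 + np.exp(-z))

H = G(X @ w + b)                # hidden-layer output matrix, N x L
beta = rng.normal(size=(L, 2))  # output weights (L x K, K = 2 classes here)
o = H @ beta                    # network outputs o_j, one row per sample
```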
For the N sewage treatment fault diagnosis samples, there exist (w_i, b_i) and β_i such that $\sum_{j=1}^{N} \| o_j - y_j \| = 0$, which shows that the SLFN model approximates the sample set with zero error, i.e. the single-hidden-layer feedforward network can fit it without error. This is written compactly as Hβ = T, where:
$$H = H(w_1, \ldots, w_L, b_1, \ldots, b_L, x_1, \ldots, x_N) = \begin{bmatrix} G(w_1 \cdot x_1 + b_1) & \cdots & G(w_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ G(w_1 \cdot x_N + b_1) & \cdots & G(w_L \cdot x_N + b_L) \end{bmatrix} = \begin{bmatrix} h(x_1) \\ \vdots \\ h(x_N) \end{bmatrix}$$
$$\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times K}, \qquad T = \begin{bmatrix} y_1^T \\ \vdots \\ y_N^T \end{bmatrix}_{N \times K}$$
where H is the hidden-layer output matrix, β is the output weight matrix, and T is the output-layer target matrix;
When the activation function G is differentiable, not all SLFN parameters need to be adjusted: the input weights w_i and hidden-layer biases b_i are chosen at random during network initialization and remain fixed throughout training. Training the SLFN is then equivalent to finding the least-squares solution of the linear system Hβ = T, which can be converted into the following optimization problem:
Minimize: $\|H\beta - T\|^2$ and $\|\beta\|$
Expressed in mathematical form, the optimization problem is:
Minimize: $\frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{N}\|\xi_i\|^2$
Subject to: $h(x_i)\beta = y_i^T - \xi_i^T, \quad i = 1, \ldots, N$
where ξ_i = [ξ_{i,1}, …, ξ_{i,K}]^T is the error vector between the output values at the output nodes for training sample x_i and their target values. β can be solved through the Moore-Penrose generalized inverse H^+ of the hidden-layer output matrix: using the orthogonal projection method, H^+ = (H^T H)^{-1} H^T when H^T H is nonsingular, or H^+ = H^T (H H^T)^{-1} when H H^T is nonsingular. To give the resulting model better stability and generalization when solving for β, a positive value 1/C is added to the diagonal of H^T H or H H^T, which gives:
$$\beta = H^T \left( \frac{I}{C} + H H^T \right)^{-1} T$$
where I denotes the identity matrix; the corresponding output function is:
$$f(x) = h(x)\,\beta = h(x)\, H^T \left( \frac{I}{C} + H H^T \right)^{-1} T$$
or, equivalently:
$$\beta = \left( \frac{I}{C} + H^T H \right)^{-1} H^T T$$
with the corresponding ELM output function:
$$f(x) = h(x)\,\beta = h(x) \left( \frac{I}{C} + H^T H \right)^{-1} H^T T$$
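The two closed forms of the regularized solution are algebraically equivalent; a quick NumPy check on random data (toy sizes chosen for the example) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, K, C = 8, 5, 2, 10.0
H = rng.normal(size=(N, L))   # hidden-layer output matrix
T = rng.normal(size=(N, K))   # target matrix

# Form typically used when N < L: beta = H^T (I/C + H H^T)^{-1} T
beta_a = H.T @ np.linalg.solve(np.eye(N) / C + H @ H.T, T)

# Form typically used when N >= L: beta = (I/C + H^T H)^{-1} H^T T
beta_b = np.linalg.solve(np.eye(L) / C + H.T @ H, H.T @ T)
```

The equivalence follows from the push-through identity H^T(I/C + HH^T)^{-1} = (I/C + H^T H)^{-1}H^T; in practice one picks the form whose inverted matrix is smaller (N×N vs L×L).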
To better handle imbalanced data, each sample is weighted so that samples belonging to different classes receive different weights. The mathematical form of the above optimization problem is therefore rewritten as:
Minimize: $\frac{1}{2}\|\beta\|^2 + \frac{CW}{2}\sum_{i=1}^{N}\|\xi_i\|^2$
Subject to: $h(x_i)\beta = y_i^T - \xi_i^T, \quad i = 1, \ldots, N$
where W is an N × N diagonal matrix in which each main diagonal element W_ii corresponds to a sample x_i; samples of different classes are automatically assigned different weights, and C is the regularization coefficient;
According to the KKT optimality conditions, a Lagrange function is defined, and solving this quadratic programming problem is equivalent to solving:
Minimize: $L_{DELM} = \frac{1}{2}\|\beta\|^2 + \frac{CW}{2}\sum_{i=1}^{N}\|\xi_i\|^2 - \sum_{i=1}^{N} \alpha_i \left( h(x_i)\beta - y_i^T + \xi_i^T \right)$
where the α_i are the Lagrange multipliers, all of which are nonnegative;
The corresponding KKT optimality conditions are:
$$\frac{\partial L_{DELM}}{\partial \beta} = 0 \;\Rightarrow\; \beta = \sum_{i=1}^{N} \alpha_i\, h(x_i)^T = H^T \alpha$$
$$\frac{\partial L_{DELM}}{\partial \xi_i} = 0 \;\Rightarrow\; \alpha_i = C W \xi_i, \qquad i = 1, \ldots, N$$
$$\frac{\partial L_{DELM}}{\partial \alpha_i} = 0 \;\Rightarrow\; h(x_i)\beta - y_i^T + \xi_i^T = 0, \qquad i = 1, \ldots, N$$
Solving this system yields the hidden-layer output weights:
$$\hat{\beta} = H^{+} T = \begin{cases} H^T \left( \dfrac{I}{C} + W H H^T \right)^{-1} W T, & N < L \\ \left( \dfrac{I}{C} + H^T W H \right)^{-1} H^T W T, & N \geq L \end{cases}$$
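Both branches of the weighted solution can be sketched and cross-checked in NumPy; the random data and the positive diagonal W are toy values chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, K, C = 8, 5, 2, 10.0
H = rng.normal(size=(N, L))                  # hidden-layer output matrix
T = rng.normal(size=(N, K))                  # target matrix
W = np.diag(rng.uniform(0.1, 1.0, size=N))   # per-sample weights

# N >= L branch: beta = (I/C + H^T W H)^{-1} H^T W T
beta = np.linalg.solve(np.eye(L) / C + H.T @ W @ H, H.T @ W @ T)

# N < L branch: beta = H^T (I/C + W H H^T)^{-1} W T
beta_alt = H.T @ np.linalg.solve(np.eye(N) / C + W @ H @ H.T, W @ T)
```

As in the unweighted case, the two branches agree; the case split only controls whether an N×N or an L×L matrix is inverted.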
The weighting scheme uses the sample weight distribution D_t from step S2.5;
When the hidden-layer feature mapping h(x) is unknown, a kernel matrix is defined as follows:
$$\Omega_{ELM} = H H^T: \quad \Omega_{ELM}(i, j) = h(x_i) \cdot h(x_j) = K(x_i, x_j), \qquad i, j = 1, 2, \ldots, N$$
Here the kernel function K(·, ·) must satisfy Mercer's condition. The output expression can now be written as:
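As a concrete instance, assuming an RBF kernel K(x_i, x_j) = exp(−γ‖x_i − x_j‖²) (one Mercer-admissible choice; the text does not fix the kernel here), the kernel matrix can be computed as:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(6, 3))   # six toy training samples
gamma = 0.25                  # kernel width

# Omega_ELM(i, j) = K(x_i, x_j) for all sample pairs.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
omega = np.exp(-gamma * d2)   # symmetric, ones on the diagonal
```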
$$f(x) = h(x)\,\beta = h(x)\, H^T \left( \frac{I}{C} + W H H^T \right)^{-1} W T = \begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^T \left( \frac{I}{C} + W H H^T \right)^{-1} W T$$
In this way the hidden-layer feature mapping of the ELM can remain unknown, and the number of hidden neurons L need not be set;
The final output equation of the kernel-based weighted extreme learning machine is:
$$f(x) = \begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^T \left( \frac{I}{C} + W\, \Omega_{ELM} \right)^{-1} W T$$
where I is the identity matrix, C is the regularization coefficient, W is the weighting matrix, T is the output-layer target matrix, and Ω_ELM is the kernel matrix;
In summary, the training algorithm of the kernel-based weighted extreme learning machine proceeds as:
S2.2.1, assign each sample a weight according to the weighting scheme and compute the weighting matrix W;
S2.2.2, compute the kernel matrix Ω_ELM from the kernel function;
S2.2.3, compute the network output f(x).
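Steps S2.2.1 to S2.2.3 can be sketched end to end in NumPy; the RBF kernel, the 1/n_k-style weights, and the toy two-class data are assumptions made for this example:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # K(a, b) = exp(-gamma * ||a - b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kwelm_train(X, T, W, C, gamma):
    # S2.2.2: kernel matrix Omega_ELM = H H^T via the kernel function.
    omega = rbf_kernel(X, X, gamma)
    # S2.2.3 core: alpha = (I/C + W Omega_ELM)^{-1} W T
    N = X.shape[0]
    return np.linalg.solve(np.eye(N) / C + W @ omega, W @ T)

def kwelm_predict(X_new, X_train, alpha, gamma):
    # f(x) = [K(x, x_1), ..., K(x, x_N)] (I/C + W Omega_ELM)^{-1} W T
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy imbalanced two-class data (10 vs 4 samples), one-hot targets.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(3.0, 0.3, (4, 2))])
y = np.array([0] * 10 + [1] * 4)
T = np.eye(2)[y]
# S2.2.1: weights of the 1/n_k style, one per sample of class k.
W = np.diag([1 / 10] * 10 + [1 / 4] * 4)
alpha = kwelm_train(X, T, W, C=100.0, gamma=0.5)
pred = kwelm_predict(X, X, alpha, gamma=0.5).argmax(axis=1)
```

On this well-separated toy set the trained model recovers the training labels; the diagnosis class of a new sample is the argmax of f(x).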
4. The improved ensemble weighted extreme learning machine sewage treatment fault diagnosis method according to claim 1, characterized in that: in step S4, the number of base classifiers of the ensemble classifier is set to T = 20, and grid-search parameter optimization is used to find the base-classifier kernel width γ and regularization coefficient C that give the algorithm its optimal performance, where the search range for γ is {2^(-18), 2^(-18+step), …, 2^20} with step = 0.5, and the search range for C is {2^(-18), 2^(-18+step), …, 2^50} with step = 0.5.
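The grid search described in the claim can be sketched as follows; the `score` function here is a hypothetical stand-in for the validation accuracy the method would actually evaluate for each (γ, C) pair:

```python
import numpy as np
from itertools import product

# Exponent grids from the claim: gamma in {2^-18, ..., 2^20} and
# C in {2^-18, ..., 2^50}, both stepping the exponent by 0.5.
gammas = [2.0 ** e for e in np.arange(-18, 20.5, 0.5)]
Cs = [2.0 ** e for e in np.arange(-18, 50.5, 0.5)]

def grid_search(evaluate, gammas, Cs):
    """Return the (gamma, C) pair maximizing a validation score."""
    best, best_score = None, -np.inf
    for g, c in product(gammas, Cs):
        s = evaluate(g, c)
        if s > best_score:
            best, best_score = (g, c), s
    return best, best_score

# Toy score peaked at gamma = 2^0 and C = 2^10 (illustration only;
# the real method would score each pair by classifier performance).
score = lambda g, c: -(np.log2(g) ** 2) - (np.log2(c) - 10.0) ** 2
(best_g, best_c), _ = grid_search(score, gammas, Cs)
```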
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710654311.3A CN107688825B (en) | 2017-08-03 | 2017-08-03 | Improved integrated weighted extreme learning machine sewage treatment fault diagnosis method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107688825A true CN107688825A (en) | 2018-02-13 |
CN107688825B CN107688825B (en) | 2020-02-18 |
Family
ID=61153142
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710654311.3A Active CN107688825B (en) | 2017-08-03 | 2017-08-03 | Improved integrated weighted extreme learning machine sewage treatment fault diagnosis method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107688825B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109190280A (en) * | 2018-09-18 | 2019-01-11 | 东北农业大学 | A kind of pollution source of groundwater inverting recognition methods based on core extreme learning machine alternative model |
CN109492710A (en) * | 2018-12-07 | 2019-03-19 | 天津智行瑞祥汽车科技有限公司 | A kind of new-energy automobile fault detection householder method |
CN109558893A (en) * | 2018-10-31 | 2019-04-02 | 华南理工大学 | Fast integration sewage treatment method for diagnosing faults based on resampling pond |
CN109739209A (en) * | 2018-12-11 | 2019-05-10 | 深圳供电局有限公司 | A kind of electric network failure diagnosis method based on Classification Data Mining |
CN109858564A (en) * | 2019-02-21 | 2019-06-07 | 上海电力学院 | Modified Adaboost-SVM model generating method suitable for wind electric converter fault diagnosis |
CN110084291A (en) * | 2019-04-12 | 2019-08-02 | 湖北工业大学 | A kind of students ' behavior analysis method and device based on the study of the big data limit |
CN110363230A (en) * | 2019-06-27 | 2019-10-22 | 华南理工大学 | Stacking integrated sewage handling failure diagnostic method based on weighting base classifier |
CN111160457A (en) * | 2019-12-27 | 2020-05-15 | 南京航空航天大学 | Turboshaft engine fault detection method based on soft class extreme learning machine |
CN112183676A (en) * | 2020-11-10 | 2021-01-05 | 浙江大学 | Water quality soft measurement method based on mixed dimensionality reduction and kernel function extreme learning machine |
CN112257942A (en) * | 2020-10-29 | 2021-01-22 | 中国特种设备检测研究院 | Stress corrosion cracking prediction method and system |
CN113323823A (en) * | 2021-06-08 | 2021-08-31 | 云南大学 | AWKELM-based fan blade icing fault detection method and system |
CN113551904A (en) * | 2021-06-29 | 2021-10-26 | 西北工业大学 | Gear box multi-type concurrent fault diagnosis method based on hierarchical machine learning |
CN113965449A (en) * | 2021-09-28 | 2022-01-21 | 南京航空航天大学 | Method for improving fault diagnosis accuracy rate of self-organizing cellular network based on evolution weighted width learning system |
CN114492164A (en) * | 2021-12-24 | 2022-05-13 | 吉林大学 | Organic pollutant migration numerical model substitution method based on multi-core extreme learning machine |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473598A (en) * | 2013-09-17 | 2013-12-25 | 山东大学 | Extreme learning machine based on length-changing particle swarm optimization algorithm |
KR20140127061A (en) * | 2013-04-24 | 2014-11-03 | 주식회사 지넬릭스 | Oral Hygiene functional composition and a method of manufacturing |
CN105631477A (en) * | 2015-12-25 | 2016-06-01 | 天津大学 | Traffic sign recognition method based on extreme learning machine and self-adaptive lifting |
CN105740619A (en) * | 2016-01-28 | 2016-07-06 | 华南理工大学 | On-line fault diagnosis method of weighted extreme learning machine sewage treatment on the basis of kernel function |
CN106874934A (en) * | 2017-01-12 | 2017-06-20 | 华南理工大学 | Sewage disposal method for diagnosing faults based on weighting extreme learning machine Integrated Algorithm |
Non-Patent Citations (2)
Title |
---|
LI K ET AL: "Boosting weighted ELM for imbalanced learning", NEUROCOMPUTING *
YAO QIAOBING: "Research on Imbalanced Fuzzy Weighted Extreme Learning Machine and Its Ensemble Methods", China Masters' Theses Full-text Database, Information Science and Technology Series *
Also Published As
Publication number | Publication date |
---|---|
CN107688825B (en) | 2020-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107688825A (en) | A kind of follow-on integrated weighting extreme learning machine sewage disposal failure examines method | |
CN105740619B (en) | Weighting extreme learning machine sewage disposal on-line fault diagnosis method based on kernel function | |
CN106874934A (en) | Sewage disposal method for diagnosing faults based on weighting extreme learning machine Integrated Algorithm | |
CN102707256B (en) | Fault diagnosis method based on BP-Ada Boost nerve network for electric energy meter | |
CN111539515B (en) | Complex equipment maintenance decision method based on fault prediction | |
CN105930901B (en) | A kind of Diagnosis Method of Transformer Faults based on RBPNN | |
CN108228716A (en) | SMOTE_Bagging integrated sewage handling failure diagnostic methods based on weighting extreme learning machine | |
CN106355030A (en) | Fault detection method based on analytic hierarchy process and weighted vote decision fusion | |
CN109495296A (en) | Intelligent substation communication network state evaluation method based on clustering and neural network | |
CN106371427A (en) | Industrial process fault classification method based on analytic hierarchy process and fuzzy fusion | |
CN106845544A (en) | A kind of stripe rust of wheat Forecasting Methodology based on population Yu SVMs | |
CN110826774B (en) | Bus load prediction method and device, computer equipment and storage medium | |
CN103605711B (en) | Construction method and device, classification method and device of support vector machine | |
CN106843195A (en) | Based on the Fault Classification that the integrated semi-supervised Fei Sheer of self adaptation differentiates | |
CN105894125A (en) | Transmission and transformation project cost estimation method | |
CN110009030A (en) | Sewage treatment method for diagnosing faults based on stacking meta learning strategy | |
CN106022954A (en) | Multiple BP neural network load prediction method based on grey correlation degree | |
CN109558893A (en) | Fast integration sewage treatment method for diagnosing faults based on resampling pond | |
CN110363230A (en) | Stacking integrated sewage handling failure diagnostic method based on weighting base classifier | |
CN106530082A (en) | Stock predication method and stock predication system based on multi-machine learning | |
CN111242380A (en) | Lake (reservoir) eutrophication prediction method based on artificial intelligence algorithm | |
CN107656152A (en) | One kind is based on GA SVM BP Diagnosis Method of Transformer Faults | |
CN104656620A (en) | Comprehensive evaluation system for remanufacturing of heavy-duty machine tool | |
Setnes et al. | Transparent fuzzy modelling | |
CN115861671A (en) | Double-layer self-adaptive clustering method considering load characteristics and adjustable potential |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||