CN103606006A - Sludge volume index (SVI) soft measurement method based on a self-organizing T-S fuzzy neural network - Google Patents

Sludge volume index (SVI) soft measurement method based on a self-organizing T-S fuzzy neural network

Info

Publication number
CN103606006A
CN103606006A CN201310558054.5A CN201310558054A
Authority
CN
China
Prior art keywords
layer
node
input
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310558054.5A
Other languages
Chinese (zh)
Other versions
CN103606006B (en)
Inventor
乔俊飞
许少鹏
韩红桂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201310558054.5A priority Critical patent/CN103606006B/en
Publication of CN103606006A publication Critical patent/CN103606006A/en
Application granted granted Critical
Publication of CN103606006B publication Critical patent/CN103606006B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Feedback Control In General (AREA)

Abstract

The invention discloses a sludge volume index (SVI) soft measurement method based on a self-organizing T-S fuzzy neural network, belonging both to the field of control and to the field of sewage treatment. Accurate prediction of SVI is a guarantee for the normal operation of the sewage treatment process. In the method, the output of the rule layer, i.e. its spatial activation strength, is first taken as the basis for deciding whether a fuzzy rule should be added; next, on the basis of generating a new fuzzy rule, the output of the membership function layer is taken as the basis for deciding whether a fuzzy set should be added; finally, a gradient descent algorithm is used to adjust the weight parameters of the model and the centers and widths of the Gaussian functions, yielding a self-organizing T-S fuzzy recurrent neural network (SOTSRFNN). An online SVI soft measurement model is established based on the SOTSRFNN, so that real-time detection of SVI is realized, providing an effective method for preventing sludge bulking.

Description

Soft measurement method for the sludge volume index based on a self-organizing T-S fuzzy neural network
Technical field
The present invention uses a self-organizing T-S fuzzy recurrent network to establish a soft measurement model of the sludge volume index (SVI) and realizes real-time prediction of SVI. Accurate prediction of the sludge volume index SVI is a guarantee for the normal operation of the sewage treatment process; the present invention belongs both to the field of control and to the field of sewage treatment.
Background technology
Wastewater treatment is an important part of the Chinese government's comprehensive utilization of water resources and of China's sustainable development strategy. At present, cities and counties across the country have basically established municipal wastewater treatment plants, and the national sewage treatment capacity is comparable to that of developed countries such as the United States. However, the operating situation of wastewater treatment is not optimistic, and the problem of sludge bulking in particular is seriously restricting its development. Once sludge bulking occurs, filamentous micro-organisms multiply excessively, the settling properties of the sludge deteriorate and solid-liquid separation becomes difficult, causing the effluent quality to exceed the standards and sludge to overflow and be lost; foaming may even occur, causing the sewage treatment system to collapse. Therefore, the soft measurement research on SVI based on the self-organizing T-S fuzzy recurrent neural network (SOTSRFNN) of the present invention has wide application prospects.
The sludge volume index SVI is one of the important evaluation indices of sludge settleability. At present there are mainly two classes of detection methods for SVI: 1. manual detection, in which samples are taken at regular intervals and measured in a graduated cylinder to calculate the SVI value; this method is time-consuming and has large errors, and it is difficult to meet the increasingly complex practical requirements of wastewater treatment; 2. automatic detection, which however suffers from drawbacks such as high equipment cost, short lifetime and poor stability, and whose accuracy cannot be guaranteed because it is affected by the site environment and manual operation. Soft measurement techniques use the relationships between system variables and parameters to establish a model between inputs and outputs, estimating the SVI value from easily measured water quality variables; they have the advantages of small investment, short measurement time, fast response, and easy maintenance. Therefore, research on the SOTSRFNN soft measurement method has important practical significance for solving the problem of real-time SVI measurement.
The present invention proposes an online soft measurement method for SVI: first, the spatial activation strength of the rule layer, i.e. the output of the rule layer, is used as the basis for judging whether a fuzzy rule should be added; secondly, on the basis of generating a new fuzzy rule, the output of the membership function layer is used as the basis for judging whether a fuzzy set should be added; finally, the weight parameters of the model and the centers and widths of the Gaussian functions are adjusted with a gradient descent algorithm, yielding a self-organizing T-S fuzzy recurrent neural network. An online soft measurement model of SVI is established based on the SOTSRFNN, realizing real-time detection of SVI and providing an effective method for preventing sludge bulking.
Summary of the invention
Aiming at the problem that SVI is difficult to measure online, the present invention analyzes the causes of sludge bulking, summarizes the easily measured water quality parameters closely related to SVI, and uses principal component analysis (PCA) to determine the model inputs. An improved fuzzy recurrent neural network is proposed, the SOTSRFNN is designed based on a structural self-organizing algorithm, and an online soft measurement model of SVI is established; finally, the established model is used to perform the soft measurement of SVI, realizing its online measurement.
The present invention adopts the following technical scheme and implementation steps:
1. An SVI soft measurement method, characterized in that it comprises the following steps:
(1) Data preprocessing and auxiliary variable selection;
The sample set data are normalized with the zero-mean standardization method, and the auxiliary variables are selected by principal component analysis (PCA); the mixed liquor suspended solids concentration MLSS, pH, aeration tank water temperature T and aeration tank ammonia nitrogen NH4 are finally determined as the input variables of the model.
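The following Python sketch illustrates the preprocessing described above (zero-mean standardization followed by a PCA-based ranking of candidate auxiliary variables). It is only an illustration of the idea: the candidate-variable list, array shapes and function names are assumptions introduced here and are not specified by the patent.

```python
# Minimal sketch of the preprocessing step: zero-mean standardization
# followed by PCA-based selection of auxiliary variables.
import numpy as np

def zero_mean_standardize(X):
    """Column-wise zero-mean, unit-variance scaling of the sample matrix X."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / std, mean, std

def pca_loadings(X_std, n_components=4):
    """Return the leading eigenvalues and eigenvectors of the covariance matrix."""
    cov = np.cov(X_std, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvals[order], eigvecs[:, order]

# Example: rank candidate water-quality variables by their contribution to the
# leading principal components and keep the top four (the patent settles on
# MLSS, pH, T and NH4). The candidate set and data here are placeholders.
candidates = ["MLSS", "pH", "T", "NH4", "DO", "COD"]   # assumed candidate set
X = np.random.rand(150, len(candidates))               # placeholder sample matrix
X_std, _, _ = zero_mean_standardize(X)
eigvals, eigvecs = pca_loadings(X_std)
contribution = (np.abs(eigvecs) * eigvals).sum(axis=1)
selected = [candidates[i] for i in np.argsort(contribution)[::-1][:4]]
print("selected auxiliary variables:", selected)
```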
(2) Establish the recurrent fuzzy neural network model for SVI soft measurement. The inputs are MLSS, pH, T and NH4, and the model output is SVI. The topology of the recurrent fuzzy neural network is: the input layer is the first layer, the membership function layer the second layer, the rule layer the third layer, the parameter layer the fourth layer and the output layer the fifth layer, with a feedback layer attached to the rule layer.
Neural network structure: the input layer has 4 input nodes; each input node connects to m membership function nodes; the rule layer has m nodes, where m is the number of fuzzy rules and is determined by the number of fuzzy rules generated during structure training; the numbers of parameter layer nodes and feedback layer nodes equal the number of rule layer nodes; the output layer has 1 node. x = [x_1, x_2, x_3, x_4] denotes the network input and y_d the desired network output. Let the k-th group of sample data be x(k) = [x_1(k), x_2(k), x_3(k), x_4(k)]. When the k-th group of sample data is input:
The output of the i-th node of the input layer is expressed as:

$o_i^{(1)}(k) = x_i(k), \quad i = 1,2,3,4 \qquad (1)$

where $o_i^{(1)}(k)$ denotes the output of the i-th input layer node when the k-th group of samples is input.

The membership function layer has 4m nodes in total; each input layer node connects to m membership function nodes, and the output of the membership function layer is:

$o_{ij}^{(2)}(k) = \exp\!\left(-\frac{\left(o_i^{(1)}(k) - c_{ij}(k)\right)^2}{\left(\sigma_{ij}(k)\right)^2}\right), \quad j = 1,2,\dots,m \qquad (2)$

where the membership functions are Gaussian; $c_{ij}$ and $\sigma_{ij}$ denote the center and width of the Gaussian function of the j-th membership function node corresponding to the i-th input node. Each Gaussian function is one fuzzy set, and the Gaussian functions are initialized during the structure adjustment stage. $o_{ij}^{(2)}(k)$ denotes the output of the j-th membership function node corresponding to the i-th input node when the k-th group of samples is input.
The rule layer has m nodes; the j-th membership function node corresponding to each input node is connected to the j-th rule layer node. Feedback connections are introduced into the rule layer, and internal variables are added at the feedback layer. The feedback layer contains two types of nodes: receiving nodes, which compute the internal variables as weighted sums of the rule layer outputs, and feedback nodes, which use the sigmoid function as membership function and compute the feedback layer outputs. Every rule layer node is connected to every receiving node of the feedback layer. Receiving nodes and feedback nodes correspond one to one and are equal in number; feedback nodes also correspond one to one with rule layer nodes, their number equals the number of rule layer nodes, changes with the number of fuzzy rules, and always equals the number of fuzzy rules. The output of the j-th rule layer node is:

$h_q = \sum_{j=1}^{m} o_j^{(3)}(k-1)\,\omega_{jq}, \quad j = 1,2,\dots,m;\; q = 1,2,\dots,m \qquad (3)$

$f_q = \frac{1}{1 + \exp(-h_q)} \qquad (4)$

$o_j^{(3)}(k) = f_q \prod_{i=1}^{4} o_{ij}^{(2)}(k) \qquad (5)$

where $\omega_{jq}$ is the connection weight between the j-th rule layer node and the q-th receiving node of the feedback layer, initialized with random numbers between 0 and 1; $o_j^{(3)}(k-1)$ denotes the output of the j-th rule layer node for the (k-1)-th group of samples; $h_q$ denotes the internal variable of the q-th receiving node of the feedback layer; $f_q$ denotes the output of the q-th feedback node. The rule layer output $o_j^{(3)}(k)$ is the activation strength of the fuzzy rule, in which $\prod_{i=1}^{4} o_{ij}^{(2)}(k)$ represents the spatial activation strength and $f_q$ the temporal activation strength.
The rule layer nodes and parameter layer nodes correspond one to one; the parameter layer has m nodes, and its output is expressed as:

$W_j(k) = \sum_{i=1}^{4} a_{ij} x_i(k) \qquad (6)$

$o_j^{(4)}(k) = o_j^{(3)}(k)\, W_j(k) \qquad (7)$

where $a_{ij}$ are the linear parameters, initialized with random numbers between 0 and 1, $a_j = [a_{1j}, a_{2j}, a_{3j}, a_{4j}]$; $W_j(k)$ denotes the weighted sum of the linear parameters of the j-th parameter layer node when the k-th group of samples is input; $o_j^{(4)}(k)$ denotes the output of the j-th parameter layer node for the k-th group of samples.

The network model is multi-input single-output; the output layer has 1 node, and all parameter layer nodes are connected to the output node. The network output is expressed as:

$y(k) = \frac{\sum_{j=1}^{m} o_j^{(4)}(k)}{\sum_{j=1}^{m} o_j^{(3)}(k)} \qquad (8)$

where y(k) denotes the network output for the k-th sample.
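For clarity, the forward computation defined by formulas (1)-(8) can be written compactly as the following Python/NumPy sketch. It assumes the parameter matrices c and sigma (4 x m), omega (m x m) and a (4 x m) are already initialized; the variable names are illustrative and are not taken from the patent.

```python
# Minimal sketch of the forward pass defined by formulas (1)-(8).
import numpy as np

def forward(x, c, sigma, omega, a, o3_prev):
    """One forward pass for a single sample x (length 4).

    Returns the network output y and the rule-layer output o3,
    which is fed back at the next time step."""
    o1 = x                                               # formula (1)
    o2 = np.exp(-((o1[:, None] - c) ** 2) / sigma ** 2)  # formula (2), shape 4 x m
    h = o3_prev @ omega                                  # formula (3), receiving nodes
    f = 1.0 / (1.0 + np.exp(-h))                         # formula (4), feedback nodes
    spatial = o2.prod(axis=0)                            # spatial activation strength
    o3 = f * spatial                                     # formula (5)
    W = a.T @ o1                                         # formula (6), length m
    o4 = o3 * W                                          # formula (7)
    y = o4.sum() / o3.sum()                              # formula (8)
    return y, o3

# Usage sketch with m = 4 fuzzy rules and random initial parameters.
m = 4
rng = np.random.default_rng(0)
c, sigma = rng.random((4, m)), 0.5 * np.ones((4, m))
omega, a = rng.random((m, m)), rng.random((4, m))
y, o3 = forward(np.array([0.11, 0.89, 0.76, 0.39]), c, sigma, omega, a, np.zeros(m))
```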
(3) The fuzzy neural network first determines its structure by self-organizing structure adjustment. Let x(k) = [x_1(k), x_2(k), x_3(k), x_4(k)] denote the k-th group of samples currently input to the model. Starting from the first group of sample data until all data have been input, the following steps are applied to each input group to decide whether a new fuzzy rule should be added and, when a fuzzy rule is added, whether a new fuzzy set should be added.
1. In the initial network structure the number of fuzzy rules is 0. When the first group of data is input, the number of fuzzy rules becomes 1 and a new fuzzy set is added. The fuzzy set is initialized by setting the Gaussian center and width to:

$c(1) = x(1) = [x_1(1), x_2(1), x_3(1), x_4(1)] \qquad (9)$

$\sigma(1) = [\sigma_{11}, \sigma_{21}, \sigma_{31}, \sigma_{41}] = [0.5, 0.5, 0.5, 0.5] \qquad (10)$

where c(1) denotes the center of the first generated fuzzy set, i.e. the center of the membership function, and σ(1) denotes the width of the first generated fuzzy set, i.e. the width of the membership function.
2. The data groups are input in turn. For each input group, whether a new fuzzy rule should be added is judged by:

$\phi_j(k) = \prod_{i=1}^{4} o_{ij}^{(2)}(k), \quad j = 1,2,\dots,N \qquad (11)$

$J = \arg\max_{1 \le j \le N} \phi_j(k) \qquad (12)$

$\phi_J(k) < \phi_{th} \qquad (13)$

where $\phi_j(k)$ denotes the spatial activation strength of the j-th rule layer node, J is the value of j at which $\phi_j(k)$ attains its maximum, N is the current number of fuzzy rules, and $\phi_{th}$ is a predefined threshold set to 0.24.

If formula (13) is satisfied, a new fuzzy rule is added, N' = N + 1, and step 3 is executed.

If formula (13) is not satisfied, step 2 is repeated with the next group of data;
3. On the basis of having added a new fuzzy rule, whether a new fuzzy set should be added is judged by:

$I = \arg\max_{1 \le j \le h} \left( o_{ij}^{(2)}(k) \right) \qquad (14)$

$o_{iI}^{(2)}(k) > I_{th} \qquad (15)$

where I is the value of j at which $o_{ij}^{(2)}(k)$ attains its maximum, h is the number of fuzzy sets in the current model, h = N, and $I_{th}$ is a predefined threshold set to 0.92.

If formula (15) is satisfied, a new fuzzy set is added, h' = h + 1, h' = N'. The new fuzzy set is initialized, i.e. the center and width of the Gaussian function newly added to the membership function layer are initialized as:

$c_{N+1} = x(k) \qquad (16)$

$c^{+} = \arg\min_{1 \le p \le N} \lVert x(k) - c_p \rVert \qquad (17)$

$\sigma_{i,N+1} = r\,\lvert x_i(k) - c_i^{+} \rvert, \quad i = 1,2,3,4 \qquad (18)$

where N is the current number of fuzzy rules in the model, p = 1, 2, ..., N; $c_{N+1}$ is the initial center of the new membership function; r is the overlap coefficient, set to 0.6; $c_p$ is the center of the p-th Gaussian function in the current model; $c^{+}$ is the value of $c_p$ whose spatial distance to x(k) is smallest; and $\sigma_{i,N+1}$, the initial width of the new membership function, is the product of the overlap coefficient and the absolute difference between the input $x_i(k)$ and $c_i^{+}$. When a new fuzzy set is added, one new node is added to the membership function layer corresponding to each input node, and the rule layer, feedback layer and parameter layer each add the corresponding node.

If formula (15) is not satisfied, no new fuzzy set is added, h' = h, and N'' = N' - 1, i.e. the newly added fuzzy rule is deleted.
4. Continue adjusting the network structure: the input data are presented in turn and steps 2 and 3 are repeated. When all input data have been presented, the structure adjustment of the neural network is finished (a sketch of this growth procedure is given below).
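The growth procedure of steps 1-4 can be summarized by the following Python sketch, assuming the thresholds 0.24 and 0.92 and the overlap coefficient 0.6 given above; the helper names are illustrative. In this condensed version a tentatively added rule is kept only when the corresponding fuzzy set is also added, matching the deletion rule N'' = N' - 1.

```python
# Condensed sketch of the structure self-organizing procedure (steps 1-4).
import numpy as np

def membership(x, c, sigma):
    """Gaussian memberships of x against all current fuzzy sets, shape 4 x N."""
    return np.exp(-((x[:, None] - c) ** 2) / sigma ** 2)

def grow_structure(X, phi_th=0.24, I_th=0.92, r=0.6):
    """Return Gaussian centers c and widths sigma grown over the data X."""
    c = X[0][:, None].copy()                  # formula (9): first rule/fuzzy set
    sigma = 0.5 * np.ones_like(c)             # formula (10)
    for x in X[1:]:
        o2 = membership(x, c, sigma)
        phi = o2.prod(axis=0)                 # formula (11): spatial activation
        if phi.max() < phi_th:                # formulas (12)-(13): try to add a rule
            if o2.max() > I_th:               # formulas (14)-(15): add a fuzzy set
                p = np.argmin(np.linalg.norm(x[:, None] - c, axis=0))   # formula (17)
                new_c = x[:, None]                                      # formula (16)
                new_s = r * np.abs(x - c[:, p])[:, None]                # formula (18)
                c = np.hstack([c, new_c])
                sigma = np.hstack([sigma, new_s])
            # else: the tentatively added rule is discarded (N'' = N' - 1)
    return c, sigma
```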
(4) With the network structure determined, the network parameters are adjusted. The neural network is trained with the corrected data: there are 150 groups of sample data in total, 90 groups of training samples and 60 groups of test samples, and the number of training epochs is 1000. In every epoch all 90 training groups are input, and after each group the network parameters are adjusted with the gradient descent algorithm. The adjusted parameters are: the linear parameters a of the parameter layer, the connection weights ω from the rule layer to the feedback layer, and the centers c and widths σ of the Gaussian functions of the membership function layer.
The training objective function, i.e. the system error, is defined as:

$E(k) = \frac{1}{2}\left( y(k) - y_d(k) \right)^2 \qquad (19)$

where y(k) is the actual network output, $y_d(k)$ is the desired network output, and E(k) is the system error.
Gradient descent algorithm: the partial derivative of the objective function with respect to the output of each layer is computed, giving the error propagation term of that layer. The partial derivatives of the objective function with respect to the parameter values are then obtained by the chain rule, and each partial derivative gives the adjustment of the corresponding parameter.
The error propagation term $\delta^{(4)}$ of the parameter layer is the partial derivative of the objective function with respect to the parameter layer output:

$\delta_j^{(4)}(k) = -\frac{\partial E(k)}{\partial y(k)}\,\frac{\partial y(k)}{\partial o_j^{(4)}(k)} \qquad (20)$

The adjustment of the linear parameters $a_j$ is:

$\frac{\partial E(k)}{\partial a_j(k)} = \frac{\partial E(k)}{\partial o_j^{(4)}(k)}\,\frac{\partial o_j^{(4)}(k)}{\partial a_j(k)} = -\delta_j^{(4)}(k)\,\frac{o_j^{(3)}(k)}{\sum_{j=1}^{m} o_j^{(3)}(k)} \qquad (21)$

$a_j(k+1) = a_j(k) - \eta\,\frac{\partial E(k)}{\partial a_j(k)} \qquad (22)$

The error propagation term $\delta_j^{(3)}$ of the rule layer is:

$\delta_j^{(3)}(k) = -\frac{\partial E(k)}{\partial o_j^{(3)}(k)} = \delta^{(4)}(k)\,\frac{\partial o^{(4)}(k)}{\partial o_j^{(3)}(k)} \qquad (23)$

The update rule for the connection weights $\omega_{jq}$ from the rule layer to the feedback (recurrent) layer is:

$\frac{\partial E(k)}{\partial \omega_{jq}(k)} = \frac{\partial E(k)}{\partial o_j^{(3)}(k)}\,\frac{\partial o_j^{(3)}(k)}{\partial \omega_{jq}(k)} = -\delta_j^{(3)}(k)\,\frac{\partial o_j^{(3)}(k)}{\partial \omega_{jq}(k)} \qquad (24)$

$\omega_{jq}(k+1) = \omega_{jq}(k) - \eta\,\frac{\partial E}{\partial \omega_{jq}(k)} \qquad (25)$

The error propagation term $\delta_{ij}^{(2)}$ of the membership function layer is:

$\delta_{ij}^{(2)}(k) = -\frac{\partial E(k)}{\partial o_{ij}^{(2)}(k)} = \delta_j^{(3)}(k)\,\frac{\partial o_j^{(3)}(k)}{\partial o_{ij}^{(2)}(k)} \qquad (26)$

The update rule for the Gaussian centers $c_{ij}$ of the membership function layer is:

$\frac{\partial E(k)}{\partial c_{ij}(k)} = \frac{\partial E(k)}{\partial o_{ij}^{(2)}(k)}\,\frac{\partial o_{ij}^{(2)}(k)}{\partial c_{ij}(k)} = -\delta_{ij}^{(2)}(k)\,\frac{\partial o_{ij}^{(2)}(k)}{\partial c_{ij}(k)} \qquad (27)$

$c_{ij}(k+1) = c_{ij}(k) - \eta\,\frac{\partial E(k)}{\partial c_{ij}(k)} \qquad (28)$

The update rule for the Gaussian widths $\sigma_{ij}$ of the membership function layer is:

$\frac{\partial E(k)}{\partial \sigma_{ij}(k)} = \frac{\partial E(k)}{\partial o_j^{(3)}(k)}\,\frac{\partial o_j^{(3)}(k)}{\partial \sigma_{ij}(k)} = -\delta_j^{(3)}(k)\,\frac{\partial o_j^{(3)}(k)}{\partial \sigma_{ij}(k)} \qquad (29)$

$\sigma_{ij}(k+1) = \sigma_{ij}(k) - \eta\,\frac{\partial E(k)}{\partial \sigma_{ij}(k)} \qquad (30)$

where η is the learning rate, set to 0.15.
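A minimal sketch of one gradient-descent parameter update over formulas (19)-(30) is given below. For brevity it approximates the partial derivatives numerically by central differences instead of evaluating the hand-derived expressions (20)-(30); this substitution is made here only for illustration and is not the patent's analytic update. forward() refers to the sketch given after formula (8).

```python
# Sketch of one gradient-descent update step; gradients are estimated
# numerically rather than from the analytic chain-rule formulas.
import numpy as np

def numeric_grad(loss_fn, param, eps=1e-6):
    """Central-difference gradient of loss_fn with respect to the array param."""
    grad = np.zeros_like(param)
    it = np.nditer(param, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        old = param[idx]
        param[idx] = old + eps; lp = loss_fn()
        param[idx] = old - eps; lm = loss_fn()
        param[idx] = old
        grad[idx] = (lp - lm) / (2 * eps)
    return grad

def update_step(x, y_d, c, sigma, omega, a, o3_prev, eta=0.15):
    """Adjust a, omega, c and sigma for one sample by gradient descent."""
    def loss():
        y, _ = forward(x, c, sigma, omega, a, o3_prev)
        return 0.5 * (y - y_d) ** 2              # formula (19)
    for param in (a, omega, c, sigma):           # counterparts of (20)-(30)
        param -= eta * numeric_grad(loss, param)
```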
(5) Prediction on the test samples: the test sample data are used as input to the trained neural network, and the network output is the predicted SVI.
The novelty of the present invention is mainly reflected in:
(1) Aiming at the problem that the sludge volume index SVI is currently difficult to detect online, a soft measurement method based on a self-organizing T-S fuzzy recurrent network is proposed, which realizes the mapping between the auxiliary variables and SVI and the online detection of SVI. The model features high accuracy, good stability and real-time performance, provides an effective detection method for preventing sludge bulking, and promotes the automation level of wastewater treatment.
(2) Aiming at the characteristics of sludge bulking, the feedback layer of the fuzzy recurrent neural network is improved. Fuzzy rules are generated automatically based on a rule generation criterion, and the network structure is adjusted dynamically through an effective fuzzy set generation algorithm. This solves the problem that the network structure is difficult to determine and effectively simplifies the network structure while guaranteeing model accuracy. A gradient descent parameter learning algorithm is adopted, which improves the learning ability of the network.
Brief description of the drawings
Fig. 1. Topology of the self-organizing T-S fuzzy recurrent neural network;
Fig. 2. Online fuzzy rule generation of the present invention;
Fig. 3. Network training fitting results of the present invention;
Fig. 4. Soft measurement results of the present invention;
Embodiment
The experimental data come from the daily operating records of a small wastewater treatment plant in Beijing. Fig. 1 shows the neural network prediction model of SVI; its inputs are the mixed liquor suspended solids concentration MLSS, pH, aeration tank water temperature T and aeration tank ammonia nitrogen NH4, and the model output is the sludge volume index SVI. MLSS is the weight of dry sludge contained in a unit volume of mixed liquor in the biochemical tank; pH reflects the acidity or alkalinity of the influent; T is the current temperature of the sewage in the aeration tank; NH4 is the ammonia nitrogen content of the aeration tank influent; SVI is the volume occupied by the sludge corresponding to 1 gram of dry sludge after the aeration tank mixed liquor has settled for 30 minutes. Except for pH and T, the inputs are measured in mg/L; the output is in mL/g. There are 150 groups of data in total, of which 90 groups are used for training the network and the other 60 groups as test samples; the structure self-organizing algorithm dynamically adjusts the neural network.
The soft measurement model of SVI is established with the self-organizing T-S fuzzy recurrent neural network and SVI is detected in real time; the concrete steps are as follows:
(1) Initialize the neural network: 4 input nodes, 1 output node, and 0 fuzzy rules; the network weights are initialized, in this experiment with random numbers between 0 and 1.
(2) The sample data are corrected and then normalized.
(3) The network structure is adjusted by self-organization, with the threshold of formula (13) set to $\phi_{th} = 0.24$ and the threshold of formula (15) set to $I_{th} = 0.92$; the corrected input data are presented:
1. The first group of data x(1) = [4.72 23.1 43.6 250.4] is input; after normalization x(1) = [0.1092 0.8867 0.7572 0.3915]. The number of fuzzy rules becomes 1, and the Gaussian center and width of the first fuzzy set are initialized according to formulas (9) and (10): center c_1(1) = [0.1092 0.8867 0.7572 0.3915], width σ_1(1) = [0.5 0.5 0.5 0.5];
2. The second group of data x(2) = [5.63 23.4 37.2 250.1] is input; after normalization x(2) = [0.3707 0.9434 0.5527 0.33907]. The output of the membership function layer is computed according to formulas (1) and (2), and the spatial activation strength is computed with formula (11). Since there is only one fuzzy rule, its spatial activation strength is the maximum; formula (13) is not satisfied, so no new fuzzy rule is added;
3. The data are input in turn until the 23rd group, whose normalized value is x(23) = [0.1954 0.1886 0.5495 0.3968]. The output of the membership function layer is computed according to formulas (1) and (2), and the spatial activation strength with formula (11). Since there is only one fuzzy rule, its spatial activation strength is the maximum; formula (13) is satisfied, so a new fuzzy rule is added;
The output of the membership function nodes is $o_1^{(2)}(23) = [0.9881\ 0.4555\ 0.5636\ 0.8579]$, with maximum $o_{11}^{(2)}(23) = 0.9881$, which satisfies formula (15). A fuzzy set corresponding to the newly added fuzzy rule is therefore generated, and its parameters are initialized according to formulas (16)-(18): initial center $c_2 = x(23)$, overlap coefficient r = 0.6, initial widths $\sigma_{i2} = r\,\lvert x_i(23) - c_{i1}\rvert$, i = 1,2,3,4, giving $\sigma_{i2} = [0.0517\ 0.4189\ 0.1246\ 0.0032]$.
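As a quick check of the width initialization just computed, formula (18) with r = 0.6 reproduces the stated values; the small sketch below uses the normalized x(23) and the first center c_1 from step 1.

```python
# Verification of formula (18) for the second fuzzy set.
import numpy as np
x23 = np.array([0.1954, 0.1886, 0.5495, 0.3968])
c1  = np.array([0.1092, 0.8867, 0.7572, 0.3915])
sigma2 = 0.6 * np.abs(x23 - c1)
print(sigma2.round(4))   # [0.0517 0.4189 0.1246 0.0032]
```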
4. When the k-th group of data is input, the outputs of the membership function nodes are computed according to formulas (1) and (2), the spatial activation strengths of the rule layer nodes are computed, and the maximum is found according to formula (12). It is then judged whether this maximum spatial activation strength is below the predefined threshold: if formula (13) is satisfied, a new fuzzy rule is added and step 5 is executed; if formula (13) is not satisfied, the (k+1)-th group of data is input and step 6 is executed.
5. According to formula (15), it is judged whether the maximum output of the membership function nodes exceeds the predefined threshold $I_{th}$: if formula (15) is satisfied, a fuzzy set corresponding to the newly added fuzzy rule is generated and its parameters are initialized according to formulas (16)-(18); if formula (15) is not satisfied, no new fuzzy set is generated and the newly added fuzzy rule is deleted;
6. It is judged whether all input data have been presented: if so, proceed to step (4); otherwise return to step 4. After all data have been trained, a total of 4 fuzzy rules have been generated. The initial values of the Gaussian functions are:

Initial Gaussian centers:
$c(1) = \begin{bmatrix} 0.1092 & 0.8867 & 0.7572 & 0.3915 \\ 0.1954 & 0.1886 & 0.5495 & 0.3968 \\ 0.7052 & 0.5495 & 0.7124 & 0.6166 \\ 0.2328 & 0.3968 & 0.3723 & 0.5638 \end{bmatrix}$

Initial Gaussian widths:
$\sigma(1) = \begin{bmatrix} 0.5 & 0.5 & 0.5 & 0.5 \\ 0.0517 & 0.4189 & 0.1246 & 0.0032 \\ 0.4394 & 0.1456 & 0.0127 & 0.2776 \\ 0.3704 & 0.2701 & 0.0046 & 0.4414 \end{bmatrix}$

Initial weights from the rule layer to the feedback layer:
$\omega(1) = \begin{bmatrix} 0.6229 & 0.7149 & 0.6219 & 0.8923 \\ 0.8571 & 0.0309 & 0.3152 & 0.8153 \\ 0.3965 & 0.9725 & 0.8241 & 0.6114 \\ 0.7359 & 0.4310 & 0.8726 & 0.2938 \end{bmatrix}$

Initial linear parameters:
$a(1) = \begin{bmatrix} 0.5 & 0.5 & 0.5 & 0.5 \\ 0.5 & 0.5 & 0.5 & 0.5 \\ 0.5 & 0.5 & 0.5 & 0.5 \\ 0.5 & 0.5 & 0.5 & 0.5 \end{bmatrix}, \quad W(1) = \sum_{i=1}^{4} a_{ij} x_i(1) = \begin{bmatrix} 1.0723 & 1.0723 & 1.0723 & 1.0723 \end{bmatrix}$
(4) The neural network is trained with the corrected training sample data for s = 1000 epochs. If the number of epochs is chosen too small, not enough information is gathered; if it is chosen too large, over-fitting occurs.
(5) Parameter adjustment of the neural network: for epoch s = 1, the data are input starting from the first group; the inputs and outputs of every layer are computed, the objective function value is computed according to formula (19), and the gradient descent method is used to adjust the 4 sets of parameters: the linear parameters a of the parameter layer, the connection weights ω from the rule layer to the feedback layer, and the centers c and widths σ of the Gaussian functions of the membership function layer.
1. The first group of data x(1) = [0.1092 0.8867 0.7572 0.3915] is input.

Output of the input layer: $o^{(1)}(1) = x(1) = [0.1092\ 0.8867\ 0.7572\ 0.3915]$

Output of the membership function layer:
$o^{(2)}(1) = \exp\!\left(-\frac{\left(o^{(1)}(1) - c(1)\right)^2}{\left(\sigma(1)\right)^2}\right) = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0.0620 & 0.0622 & 0.0621 & 0.0646 \\ 0.1588 & 0.0047 & 0 & 0.5181 \\ 0.8946 & 0.0373 & 0 & 0.8412 \end{bmatrix}$

Outputs of the feedback layer and rule layer: since $o^{(3)}(0) = [0\ 0\ 0\ 0]$, we have $h = [0\ 0\ 0\ 0]$ and

$f = \frac{1}{1 + \exp(-h)} = [0.5\ 0.5\ 0.5\ 0.5]$

so $o^{(3)}(1) = [0.0044\ 0\ 0\ 0.0281]$

Output of the parameter layer: $o^{(4)}(1) = o^{(3)}(1)\, W(1) = [0.0047\ 0\ 0\ 0.0301]$

Output of the model: $y(1) = \frac{\sum_{j=1}^{4} o_j^{(4)}(1)}{\sum_{j=1}^{4} o_j^{(3)}(1)} = 1.0723$

2. With $y_d(1) = 0.5960$, the system error is $E(1) = \frac{1}{2}\left(y(1) - y_d(1)\right)^2 = 0.1134$

3. The parameters are adjusted by gradient descent: the adjustment of the linear parameters a of the parameter layer is computed according to formulas (20)-(22), giving the adjusted values; the adjustment of the connection weights ω from the rule layer to the feedback layer is computed according to formulas (23)-(25); and the adjustments of the Gaussian centers c and widths σ are computed according to formulas (26)-(30), giving the adjusted values.

4. The next group of data is input, the output of every layer of the model is computed, and each parameter value is adjusted with the gradient descent algorithm, until all training data have been input. The epoch counter is then incremented by 1.
(6) Training of the neural network continues by repeating step (5) until the number of epochs reaches 1000, which completes the network training.
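Steps (4)-(6) together amount to the following training loop, sketched here in Python under the same assumptions as the earlier snippets (forward() and update_step() are the illustrative helpers defined above; the data arrays are assumed to be already corrected and normalized).

```python
# Overall training and prediction loop corresponding to steps (4)-(7).
import numpy as np

def train(X_train, y_train, c, sigma, omega, a, epochs=1000, eta=0.15):
    m = c.shape[1]
    for epoch in range(epochs):
        o3_prev = np.zeros(m)                    # o^(3)(0) = 0
        for x, y_d in zip(X_train, y_train):
            update_step(x, y_d, c, sigma, omega, a, o3_prev, eta)
            _, o3_prev = forward(x, c, sigma, omega, a, o3_prev)
    return c, sigma, omega, a

def predict(X_test, c, sigma, omega, a):
    m = c.shape[1]
    o3_prev, preds = np.zeros(m), []
    for x in X_test:
        y, o3_prev = forward(x, c, sigma, omega, a, o3_prev)
        preds.append(y)
    return np.array(preds)
```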
(7) Prediction on the test samples: the test sample data are used as input to the trained neural network, and the network output is the sludge volume index SVI.

Claims (1)

1. An SVI soft measurement method, characterized in that it comprises the following steps:
(1) Data preprocessing and auxiliary variable selection;
The sample set data are normalized with the zero-mean standardization method, and the auxiliary variables are selected by principal component analysis (PCA); the mixed liquor suspended solids concentration MLSS, pH, aeration tank water temperature T and aeration tank ammonia nitrogen NH4 are finally determined as the input variables of the model;
(2) Establish the recurrent fuzzy neural network model for SVI soft measurement; the inputs are MLSS, pH, T and NH4, and the model output is SVI; the topology of the recurrent fuzzy neural network is: the input layer is the first layer, the membership function layer the second layer, the rule layer the third layer, the parameter layer the fourth layer and the output layer the fifth layer, with a feedback layer attached to the rule layer;
Neural network structure: the input layer has 4 input nodes; each input node connects to m membership function nodes; the rule layer has m nodes, where m is the number of fuzzy rules and is determined by the number of fuzzy rules generated during structure training; the numbers of parameter layer nodes and feedback layer nodes equal the number of rule layer nodes; the output layer has 1 node; x = [x_1, x_2, x_3, x_4] denotes the network input and y_d the desired network output; let the k-th group of sample data be x(k) = [x_1(k), x_2(k), x_3(k), x_4(k)]; when the k-th group of sample data is input:
The output of the i-th node of the input layer is expressed as:

$o_i^{(1)}(k) = x_i(k), \quad i = 1,2,3,4 \qquad (1)$

where $o_i^{(1)}(k)$ denotes the output of the i-th input layer node when the k-th group of samples is input;

The membership function layer has 4m nodes in total; each input layer node connects to m membership function nodes, and the output of the membership function layer is:

$o_{ij}^{(2)}(k) = \exp\!\left(-\frac{\left(o_i^{(1)}(k) - c_{ij}(k)\right)^2}{\left(\sigma_{ij}(k)\right)^2}\right), \quad j = 1,2,\dots,m \qquad (2)$

where the membership functions are Gaussian; $c_{ij}$ and $\sigma_{ij}$ denote the center and width of the Gaussian function of the j-th membership function node corresponding to the i-th input node; each Gaussian function is one fuzzy set, and the Gaussian functions are initialized during the structure adjustment stage; $o_{ij}^{(2)}(k)$ denotes the output of the j-th membership function node corresponding to the i-th input node when the k-th group of samples is input;
The rule layer has m nodes; the j-th membership function node corresponding to each input node is connected to the j-th rule layer node; feedback connections are introduced into the rule layer, and internal variables are added at the feedback layer; the feedback layer contains two types of nodes: receiving nodes, which compute the internal variables as weighted sums of the rule layer outputs, and feedback nodes, which use the sigmoid function as membership function and compute the feedback layer outputs; every rule layer node is connected to every receiving node of the feedback layer; receiving nodes and feedback nodes correspond one to one and are equal in number; feedback nodes also correspond one to one with rule layer nodes, their number equals the number of rule layer nodes, changes with the number of fuzzy rules, and always equals the number of fuzzy rules; the output of the j-th rule layer node is:

$h_q = \sum_{j=1}^{m} o_j^{(3)}(k-1)\,\omega_{jq}, \quad j = 1,2,\dots,m;\; q = 1,2,\dots,m \qquad (3)$

$f_q = \frac{1}{1 + \exp(-h_q)} \qquad (4)$

$o_j^{(3)}(k) = f_q \prod_{i=1}^{4} o_{ij}^{(2)}(k) \qquad (5)$

where $\omega_{jq}$ is the connection weight between the j-th rule layer node and the q-th receiving node of the feedback layer, initialized with random numbers between 0 and 1; $o_j^{(3)}(k-1)$ denotes the output of the j-th rule layer node for the (k-1)-th group of samples; $h_q$ denotes the internal variable of the q-th receiving node of the feedback layer; $f_q$ denotes the output of the q-th feedback node of the feedback layer; the rule layer output $o_j^{(3)}(k)$ is the activation strength of the fuzzy rule, in which $\prod_{i=1}^{4} o_{ij}^{(2)}(k)$ represents the spatial activation strength and $f_q$ the temporal activation strength;
The rule layer nodes and parameter layer nodes correspond one to one; the parameter layer has m nodes, and its output is expressed as:

$W_j(k) = \sum_{i=1}^{4} a_{ij} x_i(k) \qquad (6)$

$o_j^{(4)}(k) = o_j^{(3)}(k)\, W_j(k) \qquad (7)$

where $a_{ij}$ are the linear parameters, initialized with random numbers between 0 and 1, $a_j = [a_{1j}, a_{2j}, a_{3j}, a_{4j}]$; $W_j(k)$ denotes the weighted sum of the linear parameters of the j-th parameter layer node when the k-th group of samples is input; $o_j^{(4)}(k)$ denotes the output of the j-th parameter layer node for the k-th group of samples;

The network model is multi-input single-output; the output layer has 1 node, and all parameter layer nodes are connected to the output node; the network output is expressed as:

$y(k) = \frac{\sum_{j=1}^{m} o_j^{(4)}(k)}{\sum_{j=1}^{m} o_j^{(3)}(k)} \qquad (8)$

where y(k) denotes the network output for the k-th sample;
(3) The fuzzy neural network first determines its structure by self-organizing structure adjustment; let x(k) = [x_1(k), x_2(k), x_3(k), x_4(k)] denote the k-th group of samples currently input to the model; starting from the first group of sample data until all data have been input, the following steps are applied to each input group to decide whether a new fuzzy rule should be added and, when a fuzzy rule is added, whether a new fuzzy set should be added;
1. In the initial network structure the number of fuzzy rules is 0; when the first group of data is input, the number of fuzzy rules becomes 1 and a new fuzzy set is added; the fuzzy set is initialized by setting the Gaussian center and width to:

$c(1) = x(1) = [x_1(1), x_2(1), x_3(1), x_4(1)] \qquad (9)$

$\sigma(1) = [\sigma_{11}, \sigma_{21}, \sigma_{31}, \sigma_{41}] = [0.5, 0.5, 0.5, 0.5] \qquad (10)$

where c(1) denotes the center of the first generated fuzzy set, i.e. the center of the membership function, and σ(1) denotes the width of the first generated fuzzy set, i.e. the width of the membership function;
2. The data groups are input in turn; for each input group, whether a new fuzzy rule should be added is judged by:

$\phi_j(k) = \prod_{i=1}^{4} o_{ij}^{(2)}(k), \quad j = 1,2,\dots,N \qquad (11)$

$J = \arg\max_{1 \le j \le N} \phi_j(k) \qquad (12)$

$\phi_J(k) < \phi_{th} \qquad (13)$

where $\phi_j(k)$ denotes the spatial activation strength of the j-th rule layer node, J is the value of j at which $\phi_j(k)$ attains its maximum, N is the current number of fuzzy rules, and $\phi_{th}$ is a predefined threshold set to 0.24;

If formula (13) is satisfied, a new fuzzy rule is added, N' = N + 1, and step 3 is executed;

If formula (13) is not satisfied, step 2 is repeated with the next group of data;
3. On the basis of having added a new fuzzy rule, whether a new fuzzy set should be added is judged by:

$I = \arg\max_{1 \le j \le h} \left( o_{ij}^{(2)}(k) \right) \qquad (14)$

$o_{iI}^{(2)}(k) > I_{th} \qquad (15)$

where I is the value of j at which $o_{ij}^{(2)}(k)$ attains its maximum, h is the number of fuzzy sets in the current model, h = N, and $I_{th}$ is a predefined threshold set to 0.92;

If formula (15) is satisfied, a new fuzzy set is added, h' = h + 1, h' = N'; the new fuzzy set is initialized, i.e. the center and width of the Gaussian function newly added to the membership function layer are initialized as:

$c_{N+1} = x(k) \qquad (16)$

$c^{+} = \arg\min_{1 \le p \le N} \lVert x(k) - c_p \rVert \qquad (17)$

$\sigma_{i,N+1} = r\,\lvert x_i(k) - c_i^{+} \rvert, \quad i = 1,2,3,4 \qquad (18)$

where N is the current number of fuzzy rules in the model, p = 1, 2, ..., N; $c_{N+1}$ is the initial center of the new membership function; r is the overlap coefficient, set to 0.6; $c_p$ is the center of the p-th Gaussian function in the current model; $c^{+}$ is the value of $c_p$ whose spatial distance to x(k) is smallest; $\sigma_{i,N+1}$, the initial width of the new membership function, is the product of the overlap coefficient and the absolute difference between the input $x_i(k)$ and $c_i^{+}$; when a new fuzzy set is added, one new node is added to the membership function layer corresponding to each input node, and the rule layer, feedback layer and parameter layer each add the corresponding node;

If formula (15) is not satisfied, no new fuzzy set is added, h' = h, and N'' = N' - 1, i.e. the newly added fuzzy rule is deleted;

4. Continue adjusting the network structure: the input data are presented in turn and steps 2 and 3 are repeated; when all input data have been presented, the structure adjustment of the neural network is finished;
(4) With the network structure determined, the network parameters are adjusted; the neural network is trained with the corrected data: there are 150 groups of sample data in total, 90 groups of training samples and 60 groups of test samples, and the number of training epochs is 1000; in every epoch all 90 training groups are input, and after each group the network parameters are adjusted with the gradient descent algorithm; the adjusted parameters are: the linear parameters a of the parameter layer, the connection weights ω from the rule layer to the feedback layer, and the centers c and widths σ of the Gaussian functions of the membership function layer;

The training objective function, i.e. the system error, is defined as:

$E(k) = \frac{1}{2}\left( y(k) - y_d(k) \right)^2 \qquad (19)$

where y(k) is the actual network output, $y_d(k)$ is the desired network output, and E(k) is the system error;

Gradient descent algorithm: the partial derivative of the objective function with respect to the output of each layer is computed, giving the error propagation term of that layer; the partial derivatives of the objective function with respect to the parameter values are then obtained by the chain rule, and each partial derivative gives the adjustment of the corresponding parameter;
The error propagation term $\delta^{(4)}$ of the parameter layer is the partial derivative of the objective function with respect to the parameter layer output:

$\delta_j^{(4)}(k) = -\frac{\partial E(k)}{\partial y(k)}\,\frac{\partial y(k)}{\partial o_j^{(4)}(k)} \qquad (20)$

The adjustment of the linear parameters $a_j$ is:

$\frac{\partial E(k)}{\partial a_j(k)} = \frac{\partial E(k)}{\partial o_j^{(4)}(k)}\,\frac{\partial o_j^{(4)}(k)}{\partial a_j(k)} = -\delta_j^{(4)}(k)\,\frac{o_j^{(3)}(k)}{\sum_{j=1}^{m} o_j^{(3)}(k)} \qquad (21)$

$a_j(k+1) = a_j(k) - \eta\,\frac{\partial E(k)}{\partial a_j(k)} \qquad (22)$

The error propagation term $\delta_j^{(3)}$ of the rule layer is:

$\delta_j^{(3)}(k) = -\frac{\partial E(k)}{\partial o_j^{(3)}(k)} = \delta^{(4)}(k)\,\frac{\partial o^{(4)}(k)}{\partial o_j^{(3)}(k)} \qquad (23)$

The update rule for the connection weights $\omega_{jq}$ from the rule layer to the feedback (recurrent) layer is:

$\frac{\partial E(k)}{\partial \omega_{jq}(k)} = \frac{\partial E(k)}{\partial o_j^{(3)}(k)}\,\frac{\partial o_j^{(3)}(k)}{\partial \omega_{jq}(k)} = -\delta_j^{(3)}(k)\,\frac{\partial o_j^{(3)}(k)}{\partial \omega_{jq}(k)} \qquad (24)$

$\omega_{jq}(k+1) = \omega_{jq}(k) - \eta\,\frac{\partial E}{\partial \omega_{jq}(k)} \qquad (25)$

The error propagation term $\delta_{ij}^{(2)}$ of the membership function layer is:

$\delta_{ij}^{(2)}(k) = -\frac{\partial E(k)}{\partial o_{ij}^{(2)}(k)} = \delta_j^{(3)}(k)\,\frac{\partial o_j^{(3)}(k)}{\partial o_{ij}^{(2)}(k)} \qquad (26)$

The update rule for the Gaussian centers $c_{ij}$ of the membership function layer is:

$\frac{\partial E(k)}{\partial c_{ij}(k)} = \frac{\partial E(k)}{\partial o_{ij}^{(2)}(k)}\,\frac{\partial o_{ij}^{(2)}(k)}{\partial c_{ij}(k)} = -\delta_{ij}^{(2)}(k)\,\frac{\partial o_{ij}^{(2)}(k)}{\partial c_{ij}(k)} \qquad (27)$

$c_{ij}(k+1) = c_{ij}(k) - \eta\,\frac{\partial E(k)}{\partial c_{ij}(k)} \qquad (28)$

The update rule for the Gaussian widths $\sigma_{ij}$ of the membership function layer is:

$\frac{\partial E(k)}{\partial \sigma_{ij}(k)} = \frac{\partial E(k)}{\partial o_j^{(3)}(k)}\,\frac{\partial o_j^{(3)}(k)}{\partial \sigma_{ij}(k)} = -\delta_j^{(3)}(k)\,\frac{\partial o_j^{(3)}(k)}{\partial \sigma_{ij}(k)} \qquad (29)$

$\sigma_{ij}(k+1) = \sigma_{ij}(k) - \eta\,\frac{\partial E(k)}{\partial \sigma_{ij}(k)} \qquad (30)$

where η is the learning rate, set to 0.15;

(5) Prediction on the test samples: the test sample data are used as input to the trained neural network, and the network output is the predicted SVI.
CN201310558054.5A 2013-11-12 2013-11-12 Sludge volume index (SVI) soft measurement method based on a self-organizing T-S fuzzy neural network Active CN103606006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310558054.5A CN103606006B (en) 2013-11-12 2013-11-12 Sludge volume index (SVI) soft measurement method based on a self-organizing T-S fuzzy neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310558054.5A CN103606006B (en) 2013-11-12 2013-11-12 Sludge volume index (SVI) soft measurement method based on a self-organizing T-S fuzzy neural network

Publications (2)

Publication Number Publication Date
CN103606006A true CN103606006A (en) 2014-02-26
CN103606006B CN103606006B (en) 2017-05-17

Family

ID=50124226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310558054.5A Active CN103606006B (en) 2013-11-12 2013-11-12 Sludge volume index (SVI) soft measurement method based on a self-organizing T-S fuzzy neural network

Country Status (1)

Country Link
CN (1) CN103606006B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886369A (en) * 2014-03-27 2014-06-25 北京工业大学 Method for predicting effluent TP based on fuzzy neural network
CN104634265A (en) * 2015-02-15 2015-05-20 中南大学 Soft measurement method for thickness of mineral floating foam layer based on multivariate image feature fusion
CN105574326A (en) * 2015-12-12 2016-05-11 北京工业大学 Self-organizing fuzzy neural network-based soft measurement method for effluent ammonia-nitrogen concentration
CN105676649A (en) * 2016-04-09 2016-06-15 北京工业大学 Control method for sewage treatment process based on self-organizing neural network
CN106371321A (en) * 2016-12-06 2017-02-01 杭州电子科技大学 PID control method for fuzzy network optimization of coking-furnace hearth pressure system
CN108563118A (en) * 2018-03-22 2018-09-21 北京工业大学 A kind of dissolved oxygen model predictive control method based on Adaptive Fuzzy Neural-network
CN108628164A (en) * 2018-03-30 2018-10-09 浙江大学 A kind of semi-supervised flexible measurement method of industrial process based on Recognition with Recurrent Neural Network model
CN110928187A (en) * 2019-12-03 2020-03-27 北京工业大学 Sewage treatment process fault monitoring method based on fuzzy width self-adaptive learning model
CN110942208A (en) * 2019-12-10 2020-03-31 萍乡市恒升特种材料有限公司 Method for determining optimal production conditions of silicon carbide foam ceramic
CN111222529A (en) * 2019-09-29 2020-06-02 上海上实龙创智慧能源科技股份有限公司 GoogLeNet-SVM-based sewage aeration tank foam identification method
CN111479982A (en) * 2017-11-15 2020-07-31 吉奥奎斯特系统公司 In-situ operating system with filter
CN112435683A (en) * 2020-07-30 2021-03-02 珠海市杰理科技股份有限公司 Adaptive noise estimation and voice noise reduction method based on T-S fuzzy neural network
CN114911159A (en) * 2022-04-26 2022-08-16 西北工业大学 Simulated bat aircraft depth control method based on T-S fuzzy neural network

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711707B (en) * 2018-12-21 2021-05-04 中国船舶工业系统工程研究院 Comprehensive state evaluation method for ship power device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06328091A (en) * 1993-05-25 1994-11-29 Meidensha Corp Sludge capacity index estimating method in control system for biological treatment device
CN102494979A (en) * 2011-10-19 2012-06-13 北京工业大学 Soft measurement method for SVI (sludge volume index)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06328091A (en) * 1993-05-25 1994-11-29 Meidensha Corp Sludge capacity index estimating method in control system for biological treatment device
CN102494979A (en) * 2011-10-19 2012-06-13 北京工业大学 Soft measurement method for SVI (sludge volume index)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
G. Civelekoglu et al.: "Modelling of COD removal in a biological wastewater treatment plant using adaptive neuro-fuzzy inference system and artificial neural network", Water Science and Technology *
余颖 et al.: "Research on modeling of the wastewater treatment process based on neural networks", Proceedings of the Fifth World Congress on Intelligent Control and Automation *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886369A (en) * 2014-03-27 2014-06-25 北京工业大学 Method for predicting effluent TP based on fuzzy neural network
CN103886369B (en) * 2014-03-27 2016-10-26 北京工业大学 A kind of water outlet total phosphorus TP Forecasting Methodology based on fuzzy neural network
CN104634265A (en) * 2015-02-15 2015-05-20 中南大学 Soft measurement method for thickness of mineral floating foam layer based on multivariate image feature fusion
CN104634265B (en) * 2015-02-15 2017-06-20 中南大学 A kind of mineral floating froth bed soft measurement method of thickness based on multiplex images Fusion Features
CN105574326A (en) * 2015-12-12 2016-05-11 北京工业大学 Self-organizing fuzzy neural network-based soft measurement method for effluent ammonia-nitrogen concentration
CN105676649A (en) * 2016-04-09 2016-06-15 北京工业大学 Control method for sewage treatment process based on self-organizing neural network
CN106371321A (en) * 2016-12-06 2017-02-01 杭州电子科技大学 PID control method for fuzzy network optimization of coking-furnace hearth pressure system
CN111479982A (en) * 2017-11-15 2020-07-31 吉奥奎斯特系统公司 In-situ operating system with filter
US11591894B2 (en) 2017-11-15 2023-02-28 Schlumberger Technology Corporation Field operations system with particle filter
US11674375B2 (en) 2017-11-15 2023-06-13 Schlumberger Technology Corporation Field operations system with filter
US11603749B2 (en) * 2017-11-15 2023-03-14 Schlumberger Technology Corporation Field operations system
CN108563118A (en) * 2018-03-22 2018-09-21 北京工业大学 A kind of dissolved oxygen model predictive control method based on Adaptive Fuzzy Neural-network
CN108563118B (en) * 2018-03-22 2020-10-16 北京工业大学 Dissolved oxygen model prediction control method based on self-adaptive fuzzy neural network
CN108628164A (en) * 2018-03-30 2018-10-09 浙江大学 A kind of semi-supervised flexible measurement method of industrial process based on Recognition with Recurrent Neural Network model
CN111222529A (en) * 2019-09-29 2020-06-02 上海上实龙创智慧能源科技股份有限公司 GoogLeNet-SVM-based sewage aeration tank foam identification method
CN110928187A (en) * 2019-12-03 2020-03-27 北京工业大学 Sewage treatment process fault monitoring method based on fuzzy width self-adaptive learning model
CN110928187B (en) * 2019-12-03 2021-02-26 北京工业大学 Sewage treatment process fault monitoring method based on fuzzy width self-adaptive learning model
CN110942208A (en) * 2019-12-10 2020-03-31 萍乡市恒升特种材料有限公司 Method for determining optimal production conditions of silicon carbide foam ceramic
CN110942208B (en) * 2019-12-10 2023-07-07 萍乡市恒升特种材料有限公司 Method for determining optimal production conditions of silicon carbide foam ceramic
CN112435683A (en) * 2020-07-30 2021-03-02 珠海市杰理科技股份有限公司 Adaptive noise estimation and voice noise reduction method based on T-S fuzzy neural network
CN112435683B (en) * 2020-07-30 2023-12-01 珠海市杰理科技股份有限公司 Adaptive noise estimation and voice noise reduction method based on T-S fuzzy neural network
CN114911159A (en) * 2022-04-26 2022-08-16 西北工业大学 Simulated bat aircraft depth control method based on T-S fuzzy neural network

Also Published As

Publication number Publication date
CN103606006B (en) 2017-05-17

Similar Documents

Publication Publication Date Title
CN103606006A (en) Sludge volume index (SVI) soft measuring method based on self-organized T-S fuzzy nerve network
CN105510546B (en) A kind of biochemical oxygen demand (BOD) BOD intelligent detecting methods based on self-organizing Recurrent RBF Neural Networks
CN104376380B (en) A kind of ammonia nitrogen concentration Forecasting Methodology based on recurrence self organizing neural network
CN108898215B (en) Intelligent sludge bulking identification method based on two-type fuzzy neural network
CN104965971B (en) A kind of ammonia nitrogen concentration flexible measurement method based on fuzzy neural network
CN108469507B (en) Effluent BOD soft measurement method based on self-organizing RBF neural network
CN104182794B (en) Method for soft measurement of effluent total phosphorus in sewage disposal process based on neural network
CN111291937A (en) Method for predicting quality of treated sewage based on combination of support vector classification and GRU neural network
CN102313796B (en) Soft measuring method of biochemical oxygen demand in sewage treatment
CN103197544B (en) Sewage disposal process multi-purpose control method based on nonlinear model prediction
CN102854296A (en) Sewage-disposal soft measurement method on basis of integrated neural network
CN103226741B (en) Public supply mains tube explosion prediction method
CN102662040B (en) Ammonian online soft measuring method for dynamic modularized nerve network
CN105574326A (en) Self-organizing fuzzy neural network-based soft measurement method for effluent ammonia-nitrogen concentration
CN106096730B (en) A kind of intelligent detecting method of the MBR film permeability rates based on Recurrent RBF Neural Networks
CN107247888B (en) Method for soft measurement of total phosphorus TP (thermal transfer profile) in sewage treatment effluent based on storage pool network
Li et al. Sensitivity analysis of groundwater level in Jinci Spring Basin (China) based on artificial neural network modeling
CN114037163A (en) Sewage treatment effluent quality early warning method based on dynamic weight PSO (particle swarm optimization) optimization BP (Back propagation) neural network
CN103714382A (en) Multi-index comprehensive evaluation method for reliability of urban rail train security detection sensor network
CN111125907B (en) Sewage treatment ammonia nitrogen soft measurement method based on hybrid intelligent model
CN112819087B (en) Method for detecting abnormality of BOD sensor of outlet water based on modularized neural network
CN109408896B (en) Multi-element intelligent real-time monitoring method for anaerobic sewage treatment gas production
CN113343601A (en) Dynamic simulation method for water level and pollutant migration of complex water system lake
CN106802983B (en) Optimized BP neural network-based biogas yield modeling calculation method and device
Zhang et al. Effluent Quality Prediction of Wastewater Treatment System Based on Small-world ANN.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant