CN113780639A - Urban solid waste incineration nitrogen oxide NOx emission prediction method based on multitask learning framework
Abstract
A method for predicting NOx (nitrogen oxide) emissions from municipal solid waste incineration based on a multitask learning framework, in the field of artificial intelligence. The method realizes two-step-ahead prediction of NOx concentration with a NOx emission prediction model built on a multitask learning framework. First, input variables relevant to NOx concentration prediction are determined from the mechanism of municipal solid waste incineration; then, the overall structure of the multitask learning model is established following the basic idea of multitask learning; next, the submodules of the multitask learning model are constructed using self-organizing RBF neural networks; finally, the established prediction model is tested, realizing two-step-ahead prediction of NOx emissions from municipal solid waste incineration. The method shows good performance in predicting the NOx concentration generated by municipal solid waste incineration.
Description
Technical Field
The invention relates to the field of artificial intelligence, is applied directly to municipal solid waste incineration, and in particular relates to a NOx concentration prediction method based on a multitask learning framework.
Background
NOx generated during municipal solid waste incineration is a major atmospheric pollutant, and ultra-low NOx emission is a consistent demand of the international community. With growing national emphasis on environmental protection, increasingly strict laws and regulations restrict NOx emissions from incineration plants. To meet emission standards, incineration plants use a selective non-catalytic reduction system for in-furnace denitration. The denitration system ensures that NOx is fully reduced by injecting excess urea, but excessive urea not only wastes raw material and raises cost, it also causes ammonia slip, leading to secondary pollution of the environment. In addition, the large amount of ammonia produced can undergo side reactions with other substances to form cohesive deposits that easily block the flue gas pipeline. To control the amount of injected urea accurately, the urea dosage must be adjusted according to the NOx concentration, so NOx concentration prediction is of great significance for the optimization and control of the denitration system.
Because the municipal solid waste incineration process involves numerous reactions, has a complex mechanism, and is strongly nonlinear, accurately predicting NOx concentration is difficult, and most widely used NOx prediction models can only predict the NOx concentration at a single moment. Adjusting the amount of injected reducing agent based on the NOx concentration at a single moment is not scientific or reasonable, so the NOx emission trend over a future period needs to be predicted. Existing multi-step time series prediction methods include the direct strategy and the recursive strategy. The direct strategy builds a separate prediction model for each moment, so modeling time is usually too long; the recursive strategy takes the predicted value at the previous moment as the input for the next moment, so prediction accuracy gradually deteriorates with increasing prediction horizon due to error accumulation.
To address these shortcomings of existing methods, the invention provides a NOx concentration prediction method based on a multitask learning framework that realizes two-step-ahead prediction of NOx concentration.
Disclosure of Invention
1. Problems the invention aims to solve:
The invention provides a method for predicting the concentration of nitrogen oxides (NOx) generated by municipal solid waste incineration based on a multitask learning framework. Through mechanism analysis of the generation and removal of NOx in municipal solid waste incineration, input variables related to NOx concentration prediction are selected; multitask learning submodules are designed based on RBF neural networks; and the construction of a prediction model based on the multitask learning framework is completed, realizing two-step-ahead prediction of NOx from municipal solid waste incineration. The aim is to provide a fast and high-accuracy multi-step prediction method.
2. The invention adopts the following technical scheme and implementation steps:
the invention provides a method for predicting the concentration of nitric oxide (NOx) generated by burning urban solid waste based on a multitask learning framework. The method is characterized by comprising the following steps:
(1) preprocessing data;
Through mechanism analysis of the generation and removal of NOx in municipal solid waste incineration, 6 input variables relevant to NOx prediction are determined: the NOx concentration at time t, the right-side temperature of the primary combustion chamber, the furnace primary air volume, the furnace secondary air volume, the cumulative amount of urea solution, and the urea solution supply flow. The input variables are normalized to [0, 1] according to formula (1); the output variables, the NOx concentrations at times t+1 and t+2, are normalized to [0, 1] according to formula (2):

x_i = (I_i - min(I_i)) / (max(I_i) - min(I_i))  (1)

y_m = (O_m - min(O_m)) / (max(O_m) - min(O_m))  (2)

wherein I_i denotes the i-th input variable, O_m denotes the m-th output variable, x_i and y_m respectively denote the i-th input variable and the m-th output variable after normalization; min(I_i) and max(I_i) respectively denote the minimum and maximum values of the i-th input variable; min(O_m) and max(O_m) respectively denote the minimum and maximum values of the m-th output variable;
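A minimal sketch of the min-max normalization of formulas (1)-(2); the sample values are hypothetical:

```python
import numpy as np

def min_max_normalize(col):
    """Scale one variable (input I_i or output O_m) to [0, 1] per formulas (1)-(2)."""
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo)

# Hypothetical readings of one input variable, e.g. NOx concentration at time t
nox = np.array([120.0, 150.0, 180.0, 135.0])
x = min_max_normalize(nox)
```

Each variable is scaled independently, so the minima and maxima must be stored to denormalize the model outputs later.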
(2) constructing a prediction model of nitrogen oxide (NOx) generated by burning urban solid waste based on a multitask learning framework based on training samples;
A NOx prediction model is established using a multitask learning framework based on the RBF neural network to realize two-step-ahead prediction of NOx concentration. The multitask learning model consists of two submodules; the submodules predict the NOx concentration at different times and share knowledge with each other. The first submodule acts as a base module that is migrated to the second submodule as shared knowledge. The second submodule is constructed by adding a task-specific module to the base module.
The two submodules are established based on RBF neural networks comprising an input layer, a hidden layer, and an output layer. Initially, the topology of the first submodule is 6-K-1: the input layer has 6 neurons, corresponding to the 6 input variables normalized in step 1; the hidden layer has K neurons; and the output layer has 1 neuron, corresponding to the NOx concentration at time t+1. The topology of the second submodule is 6-J-1: the input layer has 6 neurons, corresponding to the 6 input variables normalized in step 1; the hidden layer has J neurons; and the output layer has 1 neuron, corresponding to the NOx concentration at time t+2. The second submodule is obtained by adding hidden layer neurons to the first submodule as a task-specific module, thereby realizing knowledge sharing between the modules.
Assuming a total of S training samples, the two submodules use the same input vector x = [x_1, x_2, ..., x_6]^T, where x_1, x_2, x_3, x_4, x_5, x_6 correspond to the normalized input variables: the NOx concentration at time t, the right-side temperature of the primary combustion chamber, the furnace primary air volume, the furnace secondary air volume, the cumulative amount of urea solution, and the urea solution supply flow; the outputs y_1 and y_2 are the NOx concentrations at times t+1 and t+2.
In the first sub-module, the NOx concentration at time t +1 is calculated as follows:
① Input layer of the first submodule: this layer consists of 6 neurons, the output of each input neuron being:

u_i = x_i  (3)

wherein u_i is the output of the i-th input neuron, x_i is the i-th element of the input vector, and i = 1, 2, ..., 6;
② Hidden layer of the first submodule: the hidden layer consists of K neurons, the output of each neuron being:

φ_k(x_s) = exp(-||x_s - c_k||^2 / (2b^2))  (4)

wherein φ_k(x_s) denotes the output of the k-th hidden neuron when the s-th input vector x_s enters the first submodule, c_k is the center of the k-th hidden layer neuron, and b is the width of the k-th hidden layer neuron;
③ Output layer of the first submodule: the output of the first submodule is:

y_{1,s} = Σ_{k=1}^{K} w_{k,1} φ_k(x_s)  (5)

wherein y_{1,s} is the predicted value at time t+1 for the s-th input vector x_s, w_{k,1} is the connection weight from the k-th hidden layer neuron of the first submodule to the output layer, and φ_k(x_s) is the output of the k-th hidden layer neuron.
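As a sketch of equations (3)-(5), the forward pass of the first submodule can be written as follows; K = 3 and all parameter values are hypothetical:

```python
import numpy as np

def rbf_forward(x, centers, width, weights):
    """First-submodule forward pass, equations (3)-(5).
    x: (6,) normalized input vector; centers: (K, 6); width: scalar b; weights: (K,)."""
    # Hidden layer: Gaussian radial basis outputs phi_k(x), equation (4)
    dists_sq = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-dists_sq / (2.0 * width ** 2))
    # Output layer: weighted sum of hidden outputs, equation (5)
    return float(weights @ phi), phi

# Toy example with a hypothetical K = 3 hidden neurons
rng = np.random.default_rng(0)
centers = rng.random((3, 6))
weights = rng.random(3)
y1, phi = rbf_forward(rng.random(6), centers, width=1.0, weights=weights)
```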
The NOx concentration at time t +2 in the second sub-module is calculated as follows:
① Input layer of the second submodule: this layer consists of 6 neurons, the output of each input neuron being:

v_i = x_i  (6)

wherein v_i is the output of the i-th input neuron, x_i is the i-th element of the input vector, and i = 1, 2, ..., 6;
② Hidden layer of the second submodule: the hidden layer consists of J neurons, the output of each hidden layer neuron of the second submodule being:

φ_j(x_s) = exp(-||x_s - c_j||^2 / (2b^2))  (7)

wherein φ_j(x_s) denotes the output of the j-th hidden neuron when the s-th input vector x_s enters the second submodule, c_j is the center of the j-th hidden layer neuron, and b is the width of the j-th hidden layer neuron;
③ Output layer of the second submodule: the output of the second submodule is:

y_{2,s} = Σ_{j=1}^{J} w_{j,1} φ_j(x_s)  (8)

wherein y_{2,s} is the predicted value at time t+2 for the s-th input vector x_s, w_{j,1} is the connection weight from the j-th hidden layer neuron of the second submodule to the output layer, and φ_j(x_s) is the output of the j-th hidden layer neuron of the second submodule.
Since the second submodule is realized by migrating the first submodule and adding a task-specific module, the first K neurons of the second submodule are identical to those of the first submodule. The output of the second submodule can therefore also be written as:

y_{2,s} = y_{1,s} + Σ_{n=K+1}^{J} w_{n,1} φ_n(x_s)  (9)

wherein w_{n,1} is the connection weight from the n-th hidden layer neuron of the task-specific module to the output layer and φ_n(x_s) is the output of the n-th hidden layer neuron; w_{k,1} is the connection weight from the k-th hidden layer neuron of the first submodule to the output layer and φ_k(x_s) is the output of the k-th hidden layer neuron; y_{1,s} is the predicted value at time t+1 for the s-th input vector x_s entering the first submodule.
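A minimal sketch of equation (9): the second submodule reuses the migrated first submodule unchanged and adds only a task-specific term. Function and variable names here are my own:

```python
import numpy as np

def rbf_hidden(x, centers, width):
    """Gaussian hidden-layer outputs for a set of centers."""
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * width ** 2))

def second_submodule(x, shared_c, shared_w, task_c, task_w, width):
    """y2 = y1 (shared, migrated first submodule) + task-specific part, eq. (9)."""
    y1 = float(shared_w @ rbf_hidden(x, shared_c, width))   # K shared neurons
    y2 = y1 + float(task_w @ rbf_hidden(x, task_c, width))  # N extra task-specific neurons
    return y1, y2

rng = np.random.default_rng(1)
x = rng.random(6)
sc, sw = rng.random((2, 6)), rng.random(2)   # hypothetical K = 2 shared neurons
tc = rng.random((1, 6))                      # hypothetical N = 1 task-specific neuron
y1a, y2a = second_submodule(x, sc, sw, tc, np.zeros(1), 1.0)
y1b, y2b = second_submodule(x, sc, sw, tc, np.ones(1), 1.0)
```

With zero task-specific weights the two outputs coincide, which is the knowledge-sharing property the patent relies on.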
(3) Designing an RBF neural network based on the training samples to realize the construction of a multi-task learning frame submodule;
Hidden layer neurons of the RBF neural networks of the two submodules are added in a self-organizing manner. After each neuron is added, the connection weights between the current hidden layer neurons and the output neuron are calculated, and the error under the current network structure is computed according to formulas (10) and (11). If the error has not reached the expected error, neurons continue to be added until the error falls below the expected error or the maximum number of neurons is reached. In the invention, the maximum number of hidden layer neurons of the first submodule is K, and the maximum number of hidden layer neurons of the second submodule is J.
The mean square errors of the two prediction tasks are defined as:

E_1 = (1/S) Σ_{s=1}^{S} (ŷ_{1,s} - y_{1,s})^2  (10)

E_2 = (1/S) Σ_{s=1}^{S} (ŷ_{2,s} - y_{2,s})^2  (11)

wherein ŷ_{1,s} and ŷ_{2,s} are the expected values at times t+1 and t+2, y_{1,s} and y_{2,s} are the predicted values at times t+1 and t+2, and S is the number of training samples.
1) Determining the network structure and parameters of a first submodule;
① Determining the centers and widths of the hidden layer neurons of the first submodule;
The S input samples are taken as the set of candidate hidden layer neuron centers, and centers are selected from these samples. Principle for adding a neuron: among the candidate centers, find the one with the maximum comprehensive error and use it as the center of the neuron to be added. The position of the maximum comprehensive error is:
d=argmax[e1,1,e1,2,...,e1,i,...,e1,S] (12)
wherein e_{1,i} is the comprehensive error over all samples when the i-th sample is taken as a hidden layer neuron center for the prediction at time t+1; the larger the comprehensive error, the more that sample is needed as a hidden layer neuron center to compensate the error. The comprehensive error is calculated as:

e_{1,i} = (Σ_{s=1}^{S} p_{s,i} ŷ_{1,s})^2 / ((Σ_{s=1}^{S} p_{s,i}^2)(Σ_{s=1}^{S} ŷ_{1,s}^2))  (13)

wherein p_{s,i} is the hidden layer output of the s-th input sample when the i-th sample is selected as the center, and ŷ_{1,s} is the expected value at time t+1 for the s-th sample.
Initially, the hidden layer output of the s-th input sample when passing through the i-th candidate neuron is:
ps,i(0)=φi(xs) (14)
After each center is added, it is deleted from the candidate set, ensuring that the same center is not used more than once. Thus, after k neurons have been determined, the number of candidate centers becomes S-k.
When a hidden layer neuron is added, the radial basis outputs of the remaining candidates are updated to ensure that the output response range of the radial basis functions remains large enough, improving the fitting capability of the model; the update is:

p_{s,i}(k) = p_{s,i}(k-1) - ((Σ_{s=1}^{S} p_{s,i}(k-1) p_{s,k-1}) / (Σ_{s=1}^{S} p_{s,k-1}^2)) p_{s,k-1}  (15)

wherein p_{s,i}(k) and p_{s,i}(k-1) are the radial basis outputs of the i-th candidate center at the current and previous iterations, and p_{s,k-1} is the radial basis output of the s-th sample for the most recently selected center.
The number of iterations equals the number of added neurons. The same width b is used for all hidden layer neurons and remains unchanged during the iteration. Each time a neuron is added, the optimal connection weights from the hidden layer to the output layer under the current structure are calculated.
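The self-organizing center-selection procedure of formulas (12)-(15) can be sketched as follows. This is a greedy, orthogonal-least-squares-style reading of the description; the exact scoring of formula (13) is a reconstruction, and all names are illustrative:

```python
import numpy as np

def select_centers(P, target, max_neurons, tol):
    """Greedy center selection sketch (my reading of formulas (12)-(15)).
    P: (S, S) matrix with P[s, i] = phi_i(x_s) for candidate center i.
    target: (S,) desired outputs. Returns indices of chosen centers."""
    P = P.astype(float).copy()
    residual = target.astype(float).copy()
    chosen, candidates = [], list(range(P.shape[1]))
    for _ in range(max_neurons):
        # Score each remaining candidate by its (normalized) correlation
        # with the current residual, then pick the maximizer, formula (12)
        scores = [abs(P[:, i] @ residual) / (np.linalg.norm(P[:, i]) + 1e-12)
                  for i in candidates]
        d = candidates[int(np.argmax(scores))]
        chosen.append(d)
        candidates.remove(d)                     # never reuse a center
        p_d = P[:, d]
        residual = residual - ((p_d @ residual) / (p_d @ p_d)) * p_d
        # Update remaining candidate outputs against the chosen column,
        # in the spirit of formula (15)
        for i in candidates:
            P[:, i] -= ((P[:, i] @ p_d) / (p_d @ p_d)) * p_d
        if np.mean(residual ** 2) < tol:         # stop when error is small enough
            break
    return chosen

chosen = select_centers(np.eye(4), np.array([0.0, 3.0, 1.0, 0.0]), 2, 1e-9)
```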
② Determining the connection weights between the hidden layer neurons and the output neuron.
After each neuron is added, the centers and width are fixed during training, and only the optimal connection weights from the hidden layer to the output layer need to be calculated. Because the network output is linear in the hidden layer outputs, the output weights can be calculated by least squares:

W = (Φ^T Φ)^{-1} Φ^T Ŷ  (16)

wherein W^T = [w_{1,1}, w_{2,1}, ..., w_{k,1}], w_{k,1} is the connection weight between the k-th hidden layer neuron and the output neuron, Φ is the hidden layer output matrix, and Ŷ is the desired output.
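Formula (16) is ordinary least squares on the hidden-layer outputs; a sketch, using a least-squares solver rather than the explicit matrix inverse, which is numerically safer:

```python
import numpy as np

def output_weights(Phi, y_d):
    """Least-squares output weights, formula (16).
    Phi: (S, K) hidden-layer output matrix; y_d: (S,) desired outputs."""
    W, *_ = np.linalg.lstsq(Phi, y_d, rcond=None)
    return W

# Sanity check with fabricated data: recover known weights exactly
rng = np.random.default_rng(2)
Phi = rng.random((20, 3))
w_true = np.array([1.0, 2.0, 3.0])
W = output_weights(Phi, Phi @ w_true)
```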
After each neuron is added, the output weights are calculated, and under the current network structure and parameters the prediction error at time t+1 is computed according to formulas (5) and (10). If the error has not reached the expected error, neurons continue to be added until the error falls below the expected error or the maximum number of neurons K is reached, at which point neuron addition stops. After training, the centers, width, and output weights of the first submodule are determined.
2) Determining the network structure and parameters of the second submodule;
Since the same data are used as input, the prediction at time t+2 needs more neurons to learn more knowledge from the input and guarantee prediction accuracy. In the invention, the number of hidden layer neurons of the second submodule is J, where J = K + N, K is the number of hidden layer neurons of the first submodule, and N is the number of neurons of the task-specific module of the second submodule.
Owing to the information-sharing characteristic of multitask learning, the first submodule is migrated to serve as part of the second submodule, and the second submodule is constructed merely by adding neurons as the task-specific module. That is, the centers, width, and output weights of the first K hidden layer neurons of the second submodule are the same as those of the first submodule, and the parameters of the shared module are kept unchanged while neurons are added to the task-specific module. The task-specific module adds neurons by the same mechanism as the first submodule.
To ensure that only the output weights of the task-specific module are updated when neurons are added, the output weights of the task-specific module are calculated, combining formula (9), as:

W_N = (Φ_N^T Φ_N)^{-1} Φ_N^T (Ŷ_2 - Y_1)  (17)

wherein W_N^T = [w_{1,K+1}, ..., w_{1,K+n}], w_{1,K+n} is the connection weight between the n-th added neuron of the task-specific module and the output neuron, Φ_N is the matrix of task-specific hidden layer outputs φ_{K+n}(x_s) for the s-th sample, Ŷ_2 is the vector of expected values ŷ_{2,s} at time t+2, and Y_1 is the vector of predicted values y_{1,s} at time t+1.
Each time the task-specific module adds a neuron, its output weights under the current structure are calculated and combined with the parameters of the first submodule to obtain all parameters of the second submodule. Under the current structure and parameters, the prediction error at time t+2 is computed according to formulas (8) and (11); if it has not reached the expected error, neurons continue to be added until the error falls below the expected error or the maximum number of neurons J is reached. After training, the centers, width, and output weights of the second submodule are determined.
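Under the scheme above, only the task-specific weights are fitted, against the residual between the desired t+2 output and the frozen first-submodule prediction. A sketch of formula (17); names are illustrative:

```python
import numpy as np

def fit_task_specific(Phi_task, y2_desired, y1_pred):
    """Formula (17) sketch: fit only the task-specific weights to the residual
    y2_desired - y1_pred, leaving the migrated shared weights frozen."""
    residual = y2_desired - y1_pred
    w_task, *_ = np.linalg.lstsq(Phi_task, residual, rcond=None)
    return w_task

# Fabricated check: if y2 really is y1 plus a task-specific part, we recover it
rng = np.random.default_rng(3)
Phi_task = rng.random((30, 2))        # hypothetical N = 2 task-specific neurons
w_true = np.array([0.5, -1.0])
y1 = rng.random(30)
w = fit_task_specific(Phi_task, y1 + Phi_task @ w_true, y1)
```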
(4) NOx concentration prediction is performed, and performance of the prediction model is evaluated
Given C test samples, the test sample data are fed as input to the trained multitask learning prediction model to obtain the outputs of the two submodules, and the outputs are denormalized to obtain the predicted NOx concentrations at the two moments. The accuracy of the prediction model is evaluated by calculating the root mean square error (RMSE), the mean absolute percentage error (MAPE), and the regression coefficient R^2. The smaller the RMSE and MAPE, and the larger the R^2, the higher the prediction accuracy.
For the prediction at time t+1, the three evaluation indices are calculated as:

RMSE_1 = sqrt((1/C) Σ_{c=1}^{C} (ŷ_{1,c} - y_{1,c})^2)  (18)

MAPE_1 = (100/C) Σ_{c=1}^{C} |ŷ_{1,c} - y_{1,c}| / ŷ_{1,c}  (19)

R^2_1 = 1 - Σ_{c=1}^{C} (ŷ_{1,c} - y_{1,c})^2 / Σ_{c=1}^{C} (y_{1,c} - ȳ_1)^2  (20)

wherein ŷ_{1,c}, y_{1,c}, and ȳ_1 are respectively the actual value, the predicted value, and the mean predicted value of the NOx concentration at time t+1.
For the prediction at time t+2, the three evaluation indices are calculated as:

RMSE_2 = sqrt((1/C) Σ_{c=1}^{C} (ŷ_{2,c} - y_{2,c})^2)  (21)

MAPE_2 = (100/C) Σ_{c=1}^{C} |ŷ_{2,c} - y_{2,c}| / ŷ_{2,c}  (22)

R^2_2 = 1 - Σ_{c=1}^{C} (ŷ_{2,c} - y_{2,c})^2 / Σ_{c=1}^{C} (y_{2,c} - ȳ_2)^2  (23)

wherein ŷ_{2,c}, y_{2,c}, and ȳ_2 are respectively the actual value, the predicted value, and the mean predicted value of the NOx concentration at time t+2.
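The three evaluation indices can be computed as follows. Note that the R^2 here uses one common definition (mean of the actual values in the denominator) and may differ in detail from the patent's exact variant:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """RMSE, MAPE (%), and R^2 for one prediction horizon."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / y_true))      # assumes y_true has no zeros
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return rmse, mape, r2

y = np.array([1.0, 2.0, 3.0, 4.0])
rmse, mape, r2 = evaluate(y, y.copy())   # perfect prediction
rmse2, _, _ = evaluate(y, y + 1.0)       # constant offset of 1
```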
3. Compared with the prior art, the invention has the following obvious advantages and beneficial effects:
Addressing the shortcomings of existing multi-step NOx prediction methods, the invention analyzes the mechanism of NOx generation and removal in municipal solid waste incineration, selects input variables related to NOx concentration prediction, and provides a method for predicting the NOx concentration of municipal solid waste incineration based on a multitask learning framework.
Particular note: the two-step-ahead prediction of NOx concentration in municipal solid waste incineration is adopted only for convenience of description; the method is equally applicable to multi-step-ahead prediction of the concentrations of other atmospheric pollutants from municipal solid waste incineration, and any improvement or optimization that adopts the principle of the invention falls within the scope of the invention.
Drawings
FIG. 1 is a diagram of a prediction model architecture based on a multi-task learning framework
FIG. 2 is a diagram showing the result of predicting NOx concentration at time t +1 in the multi-task learning model
FIG. 3 is a diagram showing the result of predicting NOx concentration at time t +2 in the multi-task learning model
Detailed Description
The method for predicting the concentration of nitrogen oxides (NOx) generated by municipal solid waste incineration based on a multitask learning framework realizes two-step-ahead prediction of NOx concentration from data collected during municipal solid waste incineration, addressing the problem of low multi-step prediction accuracy for NOx generated by municipal solid waste incineration. It can thus guide an incineration plant to reasonably adjust the amount of reducing agent injected by the denitration system, protecting the environment while substantially reducing cost.
The experimental data were 2200 groups of samples from a waste incineration plant in Beijing. Each group contains 6 input variables (the NOx concentration at time t, the right-side temperature of the primary combustion chamber, the furnace primary air volume, the furnace secondary air volume, the cumulative amount of urea solution, and the urea solution supply flow) and 2 output variables (the NOx concentration at time t+1 and the NOx concentration at time t+2). The 2200 groups were divided into two parts: 1540 groups were used as training samples, and the remaining 660 groups as test samples;
step 1: preprocessing data;
The input variables related to NOx prediction, namely the NOx concentration at time t, the right-side temperature of the primary combustion chamber, the furnace primary air volume, the furnace secondary air volume, the cumulative amount of urea solution, and the urea solution supply flow, are normalized to [0, 1] according to formula (1); the output variables, the NOx concentrations at times t+1 and t+2, are normalized to [0, 1] according to formula (2):

x_i = (I_i - min(I_i)) / (max(I_i) - min(I_i))  (1)

y_m = (O_m - min(O_m)) / (max(O_m) - min(O_m))  (2)

wherein I_i denotes the i-th input variable, O_m denotes the m-th output variable, x_i and y_m respectively denote the i-th input variable and the m-th output variable after normalization; min(I_i) and max(I_i) respectively denote the minimum and maximum values of the i-th input variable; min(O_m) and max(O_m) respectively denote the minimum and maximum values of the m-th output variable;
step 2: constructing a prediction model of nitrogen oxide (NOx) generated by burning urban solid waste based on a multitask learning framework based on training samples;
A NOx prediction model is established using a multitask learning framework based on the RBF neural network to realize two-step-ahead prediction of NOx concentration. The multitask learning model consists of two submodules; the submodules predict the NOx concentration at different times and share knowledge with each other. The first submodule acts as a base module that is migrated to the second submodule as shared knowledge. The second submodule is constructed by adding a task-specific module to the base module.
The two submodules are established based on RBF neural networks comprising an input layer, a hidden layer, and an output layer. Initially, the topology of the first submodule is 6-K-1: the input layer has 6 neurons, corresponding to the 6 input variables normalized in step 1; the hidden layer has K neurons; and the output layer has 1 neuron, corresponding to the NOx concentration at time t+1. The topology of the second submodule is 6-J-1: the input layer has 6 neurons, corresponding to the 6 input variables normalized in step 1; the hidden layer has J neurons; and the output layer has 1 neuron, corresponding to the NOx concentration at time t+2. The second submodule is obtained by adding N hidden layer neurons to the first submodule as a task-specific module, thereby realizing knowledge sharing between the modules. In this example, K = 15, J = 17, and N = 2.
Assuming a total of S training samples, the two submodules use the same input vector x = [x_1, x_2, ..., x_6]^T, where x_1, ..., x_6 correspond to the normalized input variables: the NOx concentration, the temperature on the right side of the primary combustion chamber, the primary air volume of the furnace, the secondary air volume of the furnace, the accumulated amount of urea solution, and the urea solution supply flow, all at time t. The outputs y_1 and y_2 are the NOx concentrations at times t+1 and t+2. In this example, S = 1540.
In the first sub-module, the NOx concentration at time t +1 is calculated as follows:
① Input layer of the first submodule: this layer consists of 6 neurons, and the output of each input neuron is:

u_i = x_i  (3)

where u_i is the output of the i-th input neuron and x_i is the i-th element of the input vector, i = 1, 2, ..., 6;
② Hidden layer of the first submodule: the hidden layer consists of K neurons, and the output of each neuron is:

φ_k(x_s) = exp(−||x_s − c_k||² / b²)  (4)

where φ_k(x_s) denotes the output of the k-th hidden neuron when the s-th input vector x_s enters the first submodule, c_k is the center of the k-th hidden-layer neuron, and b is the width of the k-th hidden-layer neuron;
③ Output layer of the first submodule: the output of the first submodule is:

y_{1,s} = Σ_{k=1..K} w_{k,1} · φ_k(x_s)  (5)

where y_{1,s} is the predicted value for time t+1 when the s-th input vector x_s enters the first submodule, w_{k,1} is the connection weight from the k-th hidden-layer neuron of the first submodule to the output layer, and φ_k(x_s) is the output of the k-th hidden-layer neuron.
The NOx concentration at time t +2 in the second sub-module is calculated as follows:
① Input layer of the second submodule: this layer consists of 6 neurons, and the output of each neuron is:

v_i = x_i  (6)

where v_i is the output of the i-th input neuron and x_i is the i-th element of the input vector, i = 1, 2, ..., 6;
② Hidden layer of the second submodule: the hidden layer consists of J neurons, and the output of each hidden-layer neuron of the second submodule is:

φ_j(x_s) = exp(−||x_s − c_j||² / b²)  (7)

where φ_j(x_s) denotes the output of the j-th hidden neuron when the s-th input vector x_s enters the second submodule, c_j is the center of the j-th hidden-layer neuron, and b is the width of the j-th hidden-layer neuron;
③ Output layer of the second submodule: the output of the second submodule is:

y_{2,s} = Σ_{j=1..J} w_{j,1} · φ_j(x_s)  (8)

where y_{2,s} is the predicted value for time t+2 when the s-th input vector x_s enters the second submodule, w_{j,1} is the connection weight from the j-th hidden-layer neuron of the second submodule to the output layer, and φ_j(x_s) is the output of the j-th hidden-layer neuron of the second submodule.
Since the second submodule is realized by migrating the first submodule and adding the task-specific module, its first K neurons are identical to those of the first submodule. The output of the second submodule can therefore also be written as:

y_{2,s} = y_{1,s} + Σ_{n=1..N} w_{n,1} · φ_n(x_s)  (9)

where w_{n,1} is the connection weight from the n-th hidden-layer neuron of the task-specific module to the output layer and φ_n(x_s) is the output of that neuron; w_{k,1} is the connection weight from the k-th hidden-layer neuron of the first submodule to the output layer and φ_k(x_s) is its output; y_{1,s} is the predicted value for time t+1 when the s-th input vector x_s enters the first submodule.
Step 3: design the RBF neural network based on the training samples to realize the construction of the multitask learning framework submodules;
Hidden-layer neurons of the RBF neural networks of the two submodules are added in a self-organizing manner. Each time a neuron is added, the connection weights between the current hidden-layer neurons and the output neuron are calculated, and the error under the current network structure is evaluated according to formulas (10) and (11); if the error has not reached the expected error, neurons continue to be added until the error falls below the expected error or the maximum number of neurons is reached. In the invention, the expected error of the prediction results at both times is set to 0.0005; the maximum number of hidden-layer neurons is 15 in the first submodule and 17 in the second submodule.
The errors of the two prediction tasks are defined as the mean square errors:

E_1 = (1/S) Σ_{s=1..S} (ŷ_{1,s} − y_{1,s})²  (10)

E_2 = (1/S) Σ_{s=1..S} (ŷ_{2,s} − y_{2,s})²  (11)

where ŷ_{1,s} and ŷ_{2,s} are the expected values at times t+1 and t+2, y_{1,s} and y_{2,s} are the predicted values at times t+1 and t+2, and S is the number of training samples.
3) Determining the network structure and parameters of a first submodule;
① Determining the centers and widths of the hidden-layer neurons of the first submodule:
All S input samples are taken as the set of candidate hidden-layer neuron centers, and centers are selected from these samples. Principle for adding a neuron: the candidate center that maximizes the integrated error is chosen as the center of the neuron to be added. The position of the maximum integrated error is:
d = argmax[e_{1,1}, e_{1,2}, ..., e_{1,i}, ..., e_{1,S}]  (12)
where e_{1,i} is the integrated error over all samples when the i-th sample is taken as a hidden-layer neuron center for the prediction at time t+1; a larger integrated error indicates that this sample should be taken as a hidden-layer neuron center to compensate for it. In the integrated-error calculation, p_{s,i} is the hidden-layer output of the s-th input sample when the i-th sample is selected as the center, and ŷ_{1,s} is the expected value of the s-th sample at time t+1.
Initially, the hidden layer output of the s-th input sample when passing through the i-th candidate neuron is:
p_{s,i}(0) = φ_i(x_s)  (14)
After a center is added, it is deleted from the candidate set to ensure that the same center is not used more than once. Hence, after k neurons have been determined, the number of candidate centers becomes S − k.
When a hidden-layer neuron is added, the radial basis outputs need to be updated to ensure that the output response range of the radial basis functions remains large enough, thereby improving the fitting ability of the model. In the update, p_{s,i}(k) and p_{s,i}(k−1) are the radial basis outputs of the current and previous iterations when the i-th sample is selected as the center, and p_{s,k−1} is the radial basis output of the s-th sample for the most recently selected center.
The number of iterations equals the number of added neurons. The same width b is used for all hidden-layer neurons and remains unchanged during the iterations. Each time a neuron is added, the optimal connection weights from the hidden layer to the output layer under the current structure are calculated.
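The candidate-center selection loop above can be sketched as follows. This is a simplified instantiation: the "integrated error" of formula (13) is approximated here by the correlation between each candidate's RBF response and the current residual, which follows the spirit but not necessarily the exact formula of the patent; `pick_center` is our name:

```python
import numpy as np

def pick_center(X, residual, width=1.0):
    """Return the index of the candidate sample whose Gaussian RBF
    response best explains the current residual (largest |phi_i^T r|)."""
    scores = []
    for i in range(len(X)):
        # Response of a candidate neuron centered at X[i] over all samples.
        phi = np.exp(-np.sum((X - X[i]) ** 2, axis=1) / width ** 2)
        scores.append(abs(float(phi @ residual)))
    return int(np.argmax(scores))

# Three well-separated candidate samples; the residual is concentrated on
# the third sample, so it should be promoted to the next hidden neuron.
X = np.array([[0.0] * 6, [10.0] * 6, [20.0] * 6])
residual = np.array([0.0, 0.0, 1.0])
d = pick_center(X, residual)
```

In the patent's scheme the chosen sample is then removed from the candidate set, so after k selections only S − k candidates remain.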
② Determining the connection weights between hidden-layer neurons and the output neuron.
After each neuron is added, the centers and widths are fixed during training, and only the optimal connection weights from the hidden layer to the output layer need to be calculated. Because the hidden-layer outputs are linearly related to the network output, the output weights can be obtained by the least squares method:

W = (Φ^T Φ)^{−1} Φ^T Ŷ  (16)

where W^T = [w_{1,1}, w_{2,1}, ..., w_{k,1}], w_{k,1} is the connection weight between the k-th hidden-layer neuron and the output neuron, Φ is the hidden-layer output matrix, and Ŷ is the desired output vector.
After each neuron is added, the output weights are calculated; under the current network structure and parameters, the prediction error at time t+1 is calculated according to formulas (5) and (10). If the expected error of 0.0005 is not reached, neurons continue to be added; neuron addition also stops once the maximum of 15 hidden-layer neurons for the first submodule is reached. After training is completed, the centers, widths and output weights of the first submodule are determined.
4) Determining the network structure and parameters of the second submodule
Since the same data are used as input, the prediction at t+2 requires more neurons in order to learn more knowledge from the input and guarantee prediction accuracy. In the invention, the maximum number of hidden-layer neurons of the second submodule is J = 17, where J = K + N, K = 15 is the number of hidden-layer neurons of the first submodule, and N = 2 is the number of neurons of the task-specific module of the second submodule.
Owing to the information-sharing characteristic of multitask learning, the first submodule is migrated as part of the second submodule, and the second submodule is completed simply by adding neurons as the task-specific module. That is, the centers, widths and output weights of the first 15 hidden-layer neurons of the second submodule are the same as those of the first submodule, and the parameters of the shared module remain unchanged while the task-specific module adds neurons. The mechanism by which the task-specific module adds neurons is the same as in the first submodule.
To ensure that only the output weights corresponding to the task-specific module are updated when neurons are added, the output weights of the task-specific module are calculated, in combination with formula (9), by least squares against the residual:

W_task = (Φ_N^T Φ_N)^{−1} Φ_N^T (Ŷ_2 − Y_1)  (17)

where w_{1,K+n} is the connection weight between the n-th neuron added by the task-specific module and the output neuron, φ_{K+n}(x_s) is the output of the s-th sample at the n-th hidden-layer neuron of the task-specific module, Φ_N is the matrix of these task-specific hidden outputs, ŷ_{2,s} (collected in Ŷ_2) is the expected output of the s-th sample at time t+2, and y_{1,s} (collected in Y_1) is the predicted value of the s-th sample at time t+1.
Each time the task-specific module adds a neuron, its output weights under the current structure are calculated and combined with the parameters of the first submodule to obtain all parameters of the second submodule. Under the current structure and parameters, the prediction error at time t+2 is calculated according to formulas (8) and (11); if the expected error is not reached, neurons continue to be added, and neuron addition stops once the maximum of 17 neurons is reached. After training is completed, the centers, widths and output weights of the second submodule are determined.
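Fitting the task-specific output weights against the residual between the t+2 target and the frozen t+1 prediction — the idea behind formula (17) — can be sketched as a second least-squares problem. All data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
S, N = 50, 2                              # samples, task-specific neurons
Phi_task = rng.random((S, N))             # task-specific hidden outputs
y1 = rng.random(S)                        # frozen shared-module predictions
w_task_true = np.array([0.8, -0.3])       # "ground truth" for illustration
d2 = y1 + Phi_task @ w_task_true          # synthetic t+2 targets

# Only the task-specific weights are fitted; the shared module is untouched.
w_task, *_ = np.linalg.lstsq(Phi_task, d2 - y1, rcond=None)
```

Freezing the shared weights keeps the t+1 predictor intact while the residual fit absorbs whatever the extra horizon adds.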
After the two submodules are trained, the construction of the prediction model based on the multitask learning framework is complete. In this embodiment, the prediction model based on the multitask learning framework is shown in Fig. 1.
Step 4: take the test sample data as input to the trained multitask learning prediction model to obtain the outputs of the two submodules, inverse-normalize the outputs to obtain the predicted NOx concentrations at the two times, and evaluate the performance of the proposed model.
In the present embodiment, the NOx concentration predictions at times t+1 and t+2 are shown in Figs. 2 and 3 (X axis: sample number; Y axis: NOx concentration in mg/m³; the green solid line is the actual NOx concentration and the blue dotted line is the predicted NOx concentration).
Three performance indices are defined to evaluate the precision of the prediction model: the root mean square error (RMSE), the mean absolute percentage error (MAPE) and the coefficient R².
For the prediction at time t+1, the three evaluation indices are calculated over the test set, where ŷ_{1,c}, y_{1,c} and ȳ_1 denote the actual value, the predicted value and the mean of the predicted values of the NOx concentration at time t+1, respectively.
For the prediction at time t+2, the three evaluation indices are calculated analogously, where ŷ_{2,c}, y_{2,c} and ȳ_2 denote the actual value, the predicted value and the mean of the predicted values of the NOx concentration at time t+2, respectively.
The performance is compared with a direct prediction method and an iterative prediction method; the comparison results, shown in Tables 1 and 2, demonstrate the effectiveness of the proposed multitask-learning-based model for predicting NOx concentration in municipal solid waste incineration.
TABLE 1 comparison of predictive Performance at time t +1 for multitask learning and other multi-step predictive methods
TABLE 2 comparison of predictive Performance at time t +2 for multitask learning and other multi-step prediction methods
Claims (4)
1. A prediction method for nitrogen oxide (NOx) emission in municipal solid waste incineration based on a multitask learning framework is characterized by comprising the following steps:
step 1: preprocessing data;
through mechanism analysis of NOx generation and removal in municipal solid waste incineration, 6 input variables relevant to NOx prediction are determined: the NOx concentration, the temperature on the right side of the primary combustion chamber, the primary air volume of the furnace, the secondary air volume of the furnace, the accumulated amount of urea solution, and the urea solution supply flow at time t, normalized to [0, 1] according to formula (1); the output variables, the NOx concentrations at times t+1 and t+2, are normalized to [0, 1] according to formula (2):
where I_i denotes the i-th input variable, O_m denotes the m-th output variable, and x_i and y_m denote the normalized i-th input variable and m-th output variable, respectively; min(I_i) and max(I_i) are the minimum and maximum values of the i-th input variable; min(O_m) and max(O_m) are the minimum and maximum values of the m-th output variable;
step 2: constructing a prediction model of nitrogen oxide NOx generated by burning urban solid waste based on a multitask learning framework based on training samples;
establishing a NOx prediction model using a multitask learning framework based on an RBF neural network to realize two-step-ahead prediction of NOx concentration; the multitask learning model consists of two submodules, different submodules predict the NOx concentration at different times, and knowledge is shared between the submodules; the first submodule serves as a base module, which is migrated as shared knowledge to the second submodule; the second submodule is constructed by adding a task-specific module on the basis of the base module; the two submodules are established based on an RBF neural network and each comprises an input layer, a hidden layer and an output layer; initially, the topology of the first submodule is 6-K-1, i.e., the input layer has 6 neurons corresponding to the 6 input variables normalized in step 1, the hidden layer has K neurons, and the output layer has 1 neuron corresponding to the NOx concentration at time t+1; the topology of the second submodule is 6-J-1, i.e., the input layer has 6 neurons corresponding to the 6 input variables normalized in step 1, the hidden layer has J neurons, and the output layer has 1 neuron corresponding to the NOx concentration at time t+2; the second submodule adds N hidden-layer neurons on the basis of the first submodule as a task-specific module, thereby realizing knowledge sharing between the submodules;
step 3, designing an RBF neural network based on the training samples, and realizing the construction of a multi-task learning frame submodule;
and 4, taking the test sample data as the input of the trained prediction model, predicting the NOx concentration at two moments in the future, and evaluating the performance of the prediction model.
2. The urban solid waste incineration nitrogen oxide NOx emission prediction method based on multitask learning as claimed in claim 1, characterized in that in step 2, the NOx concentration prediction at the time t +1 and t +2 is realized by two sub-modules, specifically:
assuming a total of S training samples, the two submodules use the same input vector x = [x_1, x_2, ..., x_6]^T, where x_1, ..., x_6 correspond to the normalized input variables: the NOx concentration, the temperature on the right side of the primary combustion chamber, the primary air volume of the furnace, the secondary air volume of the furnace, the accumulated amount of urea solution, and the urea solution supply flow at time t; the outputs y_1 and y_2 are the NOx concentrations at times t+1 and t+2;
in the first sub-module, the NOx concentration at time t +1 is calculated as follows:
input layer of the first submodule: this layer consists of 6 neurons, the output of each input neuron being:
u_i = x_i  (3)

where u_i is the output of the i-th input neuron and x_i is the i-th element of the input vector, i = 1, 2, ..., 6;
② hidden layer of the first submodule: the hidden layer consists of K neurons, the output of each neuron being:
where φ_k(x_s) denotes the output of the k-th hidden neuron when the s-th input vector x_s enters the first submodule, c_k is the center of the k-th hidden-layer neuron, and b is the width of the k-th hidden-layer neuron;
output layer of the first submodule: the output of the first submodule is:
where y_{1,s} is the predicted value for time t+1 when the s-th input vector x_s enters the first submodule, w_{k,1} is the connection weight from the k-th hidden-layer neuron of the first submodule to the output layer, and φ_k(x_s) is the output of the k-th hidden-layer neuron;
the NOx concentration at time t +2 in the second sub-module is calculated as follows:
input layer of the second sub-module: this layer consists of 6 neurons, each with an output of:
v_i = x_i  (6)

where v_i is the output of the i-th input neuron and x_i is the i-th element of the input vector, i = 1, 2, ..., 6;
② hidden layer of second submodule: the hidden layer consists of J neurons, and the output of each hidden layer neuron of the second submodule is as follows:
where φ_j(x_s) denotes the output of the j-th hidden neuron when the s-th input vector x_s enters the second submodule, c_j is the center of the j-th hidden-layer neuron, and b is the width of the j-th hidden-layer neuron;
output layer of the second submodule: the output of the second submodule is:
where y_{2,s} is the predicted value for time t+2 when the s-th input vector x_s enters the second submodule, w_{j,1} is the connection weight from the j-th hidden-layer neuron of the second submodule to the output layer, and φ_j(x_s) is the output of the j-th hidden-layer neuron of the second submodule;
since the second submodule is realized by migrating the first submodule and adding the task-specific module, the first K neurons of the second submodule are the same as those of the first submodule; the output of the second submodule is calculated as:

where w_{n,1} is the connection weight from the n-th hidden-layer neuron of the task-specific module to the output layer and φ_n(x_s) is the output of the n-th hidden-layer neuron; w_{k,1} is the connection weight from the k-th hidden-layer neuron of the first submodule to the output layer and φ_k(x_s) is its output; y_{1,s} is the predicted value for time t+1 when the s-th input vector x_s enters the first submodule.
3. The urban solid waste incineration nitrogen oxide NOx emission prediction method based on multitask learning as claimed in claim 1, wherein the construction of the multitask learning submodule is realized based on an RBF neural network, and the step 3 specifically comprises the following steps:
hidden-layer neurons of the RBF neural networks of the two submodules are added in a self-organizing manner; each time a neuron is added, the connection weights between the current hidden-layer neurons and the output neuron are calculated, and the error under the current network structure is evaluated according to formulas (10) and (11); if the error has not reached the expected error, neurons continue to be added until the error falls below the expected error or the maximum number of neurons is reached; the maximum number of hidden-layer neurons is K in the first submodule and J in the second submodule;
The mean square error of the two prediction tasks is defined as:
where ŷ_{1,s} and ŷ_{2,s} are the expected values at times t+1 and t+2, y_{1,s} and y_{2,s} are the predicted values at times t+1 and t+2, and S is the number of training samples;
(1) determining the network structure and parameters of a first submodule;
determining the center and width of hidden layer neuron of the first submodule;
taking the S input samples as the set of candidate hidden-layer neuron centers, and selecting centers from these samples; principle for adding a neuron: the candidate center that maximizes the integrated error is chosen as the center of the neuron to be added; the position of the maximum integrated error is:
d = argmax[e_{1,1}, e_{1,2}, ..., e_{1,i}, ..., e_{1,S}]  (12)
where e_{1,i} is the integrated error over all samples when the i-th sample is taken as a hidden-layer neuron center for the prediction at time t+1; a larger integrated error indicates that this sample should be taken as a hidden-layer neuron center to compensate for it; in the integrated-error calculation, p_{s,i} is the hidden-layer output of the s-th input sample when the i-th sample is selected as the center, and ŷ_{1,s} is the expected value of the s-th sample at time t+1;
initially, the hidden layer output of the s-th input sample when passing through the i-th candidate neuron is:
p_{s,i}(0) = φ_i(x_s)  (14)
after a center is added, it is deleted from the candidate set to ensure that the same center is not used more than once; hence, after k neurons have been determined, the number of candidate centers becomes S − k;
when a hidden layer neuron is added, the radial basis output needs to be updated to ensure that the output response range of the radial basis function is large enough, so that the fitting capability of the model is improved, and the following updating is performed:
where p_{s,i}(k) and p_{s,i}(k−1) are the radial basis outputs of the current and previous iterations when the i-th sample is selected as the center, and p_{s,k−1} is the radial basis output of the s-th sample for the most recently selected center;
the number of iterations equals the number of added neurons; the same width b is adopted for all hidden-layer neurons and remains unchanged during the iteration process; each time a neuron is added, the optimal connection weights from the hidden layer to the output layer under the current structure are calculated;
determining a connection weight between a hidden layer neuron and an output neuron;
after each neuron is added, the centers and widths are fixed during training, and only the optimal connection weights from the hidden layer to the output layer need to be calculated; because the hidden-layer outputs are linearly related to the network output, the output weights can be obtained by the least squares method:

where W^T = [w_{1,1}, w_{2,1}, ..., w_{k,1}], w_{k,1} is the connection weight between the k-th hidden-layer neuron and the output neuron, Φ is the hidden-layer output matrix, and Ŷ is the desired output vector;
after each neuron is added, the output weights are calculated; under the current network structure and parameters, the prediction error at time t+1 is calculated according to formulas (5) and (10); if the expected error is not reached, neurons continue to be added, and neuron addition stops once the maximum number K of neurons is reached; after training is completed, the centers, widths and output weights of the first submodule are determined;
(2) determination of the configuration and parameters of a second network of submodules
because the same data are used as input, the prediction at t+2 requires more neurons in order to learn more knowledge from the input and guarantee prediction accuracy; in the invention, the number of hidden-layer neurons of the second submodule is J, where J = K + N, K is the number of hidden-layer neurons of the first submodule, and N is the number of neurons of the task-specific module of the second submodule;
owing to the information-sharing characteristic of multitask learning, the first submodule is migrated as part of the second submodule, and the second submodule is completed simply by adding neurons as the task-specific module; the centers, widths and output weights of the first K hidden-layer neurons of the second submodule are the same as those of the first submodule, and the parameters of the shared module remain unchanged while the task-specific module adds neurons; the mechanism by which the task-specific module adds neurons is the same as in the first submodule;
to ensure that only the output weights corresponding to the task-specific module are updated when neurons are added, the output weights of the task-specific module are calculated in combination with formula (9):

where w_{1,K+n} is the connection weight between the n-th neuron added by the task-specific module and the output neuron, φ_{K+n}(x_s) is the output of the s-th sample at the n-th hidden-layer neuron of the task-specific module, ŷ_{2,s} is the expected output of the s-th sample at time t+2, and y_{1,s} is the predicted value of the s-th sample at time t+1;
each time the task-specific module adds a neuron, its output weights under the current structure are calculated and combined with the parameters of the first submodule to obtain all parameters of the second submodule; under the current structure and parameters, the prediction error at time t+2 is calculated according to formulas (8) and (11); if the expected error is not reached, neurons continue to be added, and neuron addition stops once the maximum number J of neurons is reached; after training is completed, the centers, widths and output weights of the second submodule are determined.
4. The urban solid waste incineration NOx emission prediction method based on multitask learning as claimed in claim 1, wherein in step 4, the prediction model performance evaluation based on the multitask learning framework specifically comprises:
setting C test samples, taking the test sample data as input to the trained multitask learning prediction model to obtain the outputs of the two submodules, and inverse-normalizing the outputs to obtain the predicted NOx concentrations at the two times; the precision of the prediction model is evaluated by calculating the root mean square error RMSE, the mean absolute percentage error MAPE and the coefficient R²; the smaller the RMSE and MAPE and the larger the R², the higher the prediction accuracy;
for the prediction of the t +1 moment, the three evaluation indexes are respectively calculated as follows:
where ŷ_{1,c}, y_{1,c} and ȳ_1 denote the actual value, the predicted value and the mean of the predicted values of the NOx concentration at time t+1, respectively;
for the prediction of the time t +2, the three evaluation indexes are calculated as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110999702.5A CN113780639B (en) | 2021-08-29 | 2021-08-29 | Urban solid waste incineration nitrogen oxide NOx emission prediction method based on multitask learning framework |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113780639A true CN113780639A (en) | 2021-12-10 |
CN113780639B CN113780639B (en) | 2024-04-02 |
Family
ID=78839833
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110999702.5A Active CN113780639B (en) | 2021-08-29 | 2021-08-29 | Urban solid waste incineration nitrogen oxide NOx emission prediction method based on multitask learning framework |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113780639B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114611398A (en) * | 2022-03-17 | 2022-06-10 | 北京工业大学 | Brain-like modular neural network-based soft measurement method for nitrogen oxides in urban solid waste incineration process |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105574326A (en) * | 2015-12-12 | 2016-05-11 | 北京工业大学 | Self-organizing fuzzy neural network-based soft measurement method for effluent ammonia-nitrogen concentration |
CN106503380A (en) * | 2016-10-28 | 2017-03-15 | 中国科学院自动化研究所 | Coking nitrogen oxides in effluent concentration prediction method and forecasting system |
CN112733876A (en) * | 2020-10-28 | 2021-04-30 | 北京工业大学 | Soft measurement method for nitrogen oxides in urban solid waste incineration process based on modular neural network |
CN112990004A (en) * | 2021-03-12 | 2021-06-18 | 中国科学技术大学智慧城市研究院(芜湖) | Black smoke vehicle detection method based on optical flow method and deep learning convolutional neural network |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114611398A (en) * | 2022-03-17 | 2022-06-10 | 北京工业大学 | Brain-like modular neural network-based soft measurement method for nitrogen oxides in urban solid waste incineration process |
CN114611398B (en) * | 2022-03-17 | 2023-05-12 | 北京工业大学 | Soft measurement method for nitrogen oxides in urban solid waste incineration process based on brain-like modularized neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111804146B (en) | Intelligent ammonia injection control method and intelligent ammonia injection control device | |
CN111899510A (en) | Intelligent traffic system flow short-term prediction method and system based on divergent convolution and GAT | |
Zhu et al. | An on-line wastewater quality predication system based on a time-delay neural network | |
CN111814956B (en) | Multi-task learning air quality prediction method based on multi-dimensional secondary feature extraction | |
CN112488145B (en) | NOx online prediction method and system based on intelligent method | |
CN110716512A (en) | Environmental protection equipment performance prediction method based on coal-fired power plant operation data | |
CN112949894B (en) | Output water BOD prediction method based on simplified long-short-term memory neural network | |
CN114678080A (en) | Converter end point phosphorus content prediction model, construction method and phosphorus content prediction method | |
CN112733876A (en) | Soft measurement method for nitrogen oxides in urban solid waste incineration process based on modular neural network | |
CN113780639A (en) | Urban solid waste incineration nitrogen oxide NOx emission prediction method based on multitask learning framework | |
Hao et al. | Prediction of nitrogen oxide emission concentration in cement production process: a method of deep belief network with clustering and time series | |
CN112613237B (en) | CFB unit NOx emission concentration prediction method based on LSTM | |
CN111667189A (en) | Construction engineering project risk prediction method based on one-dimensional convolutional neural network | |
CN113112072A (en) | NOx emission content prediction method based on deep bidirectional LSTM | |
CN114330815A (en) | Ultra-short-term wind power prediction method and system based on improved GOA-optimized LSTM | |
Syberfeldt et al. | Multi-objective evolutionary simulation-optimisation of a real-world manufacturing problem | |
CN113192569A (en) | Harmful gas monitoring method based on improved particle swarm and error feedback neural network | |
Liu et al. | Relationship between lean tools and operational and environmental performance by integrated ISM–Bayesian network approach | |
JPH03134706A (en) | Knowledge acquiring method for supporting operation of sewage-treatment plant | |
CN115730456A (en) | Motor vehicle multielement tail gas prediction method and system based on double attention fusion network | |
CN116541774A (en) | Method for monitoring and early warning environmental factors in long tunnel asphalt pavement hole | |
CN115860270A (en) | Network supply load prediction system and method based on LSTM neural network | |
CN112348175B (en) | Method for performing feature engineering based on reinforcement learning | |
Sun et al. | A novel air quality index prediction model based on variational mode decomposition and SARIMA-GA-TCN | |
CN113408183A (en) | Vehicle base short-term composite prediction method based on prediction model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||