CN116861256A - Furnace temperature prediction method, system, equipment and medium for solid waste incineration process - Google Patents


Info

Publication number
CN116861256A
Authority
CN
China
Prior art keywords
furnace temperature; case; difference; network; data
Prior art date
Legal status
Pending
Application number
CN202311032380.2A
Other languages
Chinese (zh)
Inventor
严爱军
程子均
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202311032380.2A
Publication of CN116861256A


Classifications

    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 16/211 — Information retrieval; database structures; schema design and management
    • G06F 18/213 — Pattern recognition; feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06N 3/0464 — Neural networks; architecture; convolutional networks [CNN, ConvNet]
    • G06N 3/092 — Neural networks; learning methods; reinforcement learning


Abstract

The application relates to a furnace temperature prediction method, system, device and medium for the solid waste incineration process. The prediction method comprises: acquiring historical data consisting of multiple groups of characteristic variables affecting the furnace temperature, the corresponding current furnace temperature values and the corresponding furnace temperature values at the next moment, forming case descriptions, and constructing a difference database D in case-description order; preprocessing the data in the difference database D to obtain a training set; constructing a case difference prediction model from the training set based on a deep Q-network algorithm; and obtaining case difference data of the furnace temperature from the characteristic variables of the current furnace temperature, using it as input to the case difference prediction model, and deriving the predicted furnace temperature at the next moment from the model output. The method can accurately predict the furnace temperature trend in the municipal solid waste incineration process, supports optimal control of the incineration process, and improves working efficiency.

Description

Furnace temperature prediction method, system, equipment and medium for solid waste incineration process
Technical Field
The application relates to the technical field of solid waste incineration, and in particular to a furnace temperature prediction method, system, device and medium for the solid waste incineration process.
Background
Solid waste incineration for power generation not only reduces the volume of garbage and renders it harmless, but also uses the heat released by incineration to generate electricity, thereby recycling the waste. During incineration, the furnace temperature is one of the key parameters for ensuring that municipal solid waste burns completely and for suppressing pollution. However, because municipal solid waste is not sorted before incineration, the temperature change during incineration is difficult to predict accurately. Moreover, owing to the lag inherent in the incineration process, operators can hardly adjust operations in time as the furnace temperature changes, so the temperature easily drifts into an abnormal range. Establishing an accurate furnace temperature prediction model is therefore the basis for keeping the solid waste incineration process running stably and efficiently.
At present, furnace temperature prediction methods for the municipal solid waste incineration process fall mainly into mechanism-based modeling and data-driven prediction. Mechanism-based prediction starts from the reaction mechanism of the industrial process: a mechanistic model of the controlled object is established from the laws of conservation of energy or conservation of mass, and parameter predictions are computed from physical and chemical equations, which gives good interpretability. However, the incineration process is complex, strongly nonlinear and strongly coupled, which makes an accurate mechanistic model difficult to build. With improvements in sensing and related technologies, large volumes of historical production data have been retained; these data implicitly contain the operating laws and variation of the process parameters, which underpins data-driven modeling. Data-driven furnace temperature prediction has therefore attracted researchers' attention.
Data-driven prediction is mainly represented by machine learning methods such as support vector regression, BP (back-propagation) neural networks and stochastic configuration networks. However, because many characteristic variables influence the furnace temperature during incineration and the variation process is complex, these methods obtain the prediction only by learning the relation between the characteristic variables and the furnace temperature; they do not consider the reference information that historical data provide about furnace temperature change in the current incineration state. As a result, prediction accuracy is limited, furnace temperature changes cannot be judged in time during the solid waste incineration process, optimal control of the operation is affected, and working efficiency drops.
Disclosure of Invention
The application provides a furnace temperature prediction method, system, device and medium for the solid waste incineration process, which improve the prediction accuracy of the furnace temperature in the municipal solid waste incineration process as far as possible, support optimal control of the incineration process, let on-site operators grasp the incineration condition of the solid waste in the furnace in time, and improve working efficiency.
In a first aspect, the application provides a furnace temperature prediction method for the solid waste incineration process, the method comprising: acquiring historical data consisting of multiple groups of characteristic variables affecting the furnace temperature, corresponding current furnace temperature values and corresponding furnace temperature values at the next moment, forming case descriptions, and constructing a difference database D in case-description order; preprocessing the data in the difference database D to obtain a training set; constructing a case difference prediction model from the training set based on a deep Q-network algorithm; and obtaining case difference data of the furnace temperature from the characteristic variables of the current furnace temperature, using it as input to the case difference prediction model, and deriving the predicted furnace temperature at the next moment from the model output.
Optionally, acquiring multiple groups of characteristic variables affecting the furnace temperature, corresponding current furnace temperature values and corresponding historical furnace temperature values at the next moment to form case descriptions, and constructing the difference database D in case-description order, comprises: acquiring multiple groups of characteristic variables including, but not limited to, grate speed, grate temperature, primary air flow, secondary air flow and fan pressure, which together with the corresponding current furnace temperature value form the problem description feature X of a case c; taking the furnace temperature value y at the next moment as the solution description; and taking N such case descriptions together to construct a furnace temperature case library C, shown by formula (1):

C = {c_1, c_2, …, c_N},  c_i = (X_i, y_i)   (1);

selecting pairs of cases c_i, c_j from the furnace temperature case library C in case-description order to obtain case difference descriptions, shown by formula (2):

e = Δ(c_i, c_j) = (ΔX_ij, Δy_ij)   (2);

and constructing the case difference database D from the obtained case difference descriptions, the number of case differences being N², described by formula (3):

D = {e_1, e_2, …, e_{N²}}   (3).
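The pairwise construction of formulas (2) and (3) can be sketched in Python. This is a minimal illustration assuming each case is a (feature-vector, next-moment temperature) tuple; the two toy cases with two features each are hypothetical (the patent uses M = 66 features).

```python
import itertools

def build_difference_database(cases):
    """Form every ordered case pair (c_i, c_j) and record their
    feature difference dX and solution difference dy, as in
    formulas (2)-(3): N cases yield N^2 difference records."""
    database = []
    for (X_i, y_i), (X_j, y_j) in itertools.product(cases, repeat=2):
        dX = [a - b for a, b in zip(X_i, X_j)]
        dy = y_i - y_j
        database.append((dX, dy))
    return database

# two toy cases with 2 features each (the patent uses M = 66)
cases = [([1.0, 2.0], 850.0), ([1.5, 2.5], 860.0)]
D = build_difference_database(cases)  # N^2 = 4 difference cases
```

Note that the database grows quadratically in the number of cases, which is why the patent limits the training set to a fixed number of sampled groups.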
Optionally, preprocessing the data in the difference database D to obtain a training set comprises: applying min-max normalization to the problem features of the difference cases in the case difference database D, described by formula (4):

ΔX′_{k,m} = (ΔX_{k,m} − min_k ΔX_{k,m}) / (max_k ΔX_{k,m} − min_k ΔX_{k,m})   (4),

wherein ΔX_{k,m} denotes the m-th feature variable of the k-th difference case, k = 1, 2, …, N², N² is the number of case differences, and m = 1, 2, …, M, M being the number of difference case features, here M = 66.
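As a sketch of the min-max step of formula (4), the following snippet scales each difference-case feature column to [0, 1]; the three-row, two-feature input is a hypothetical toy example.

```python
def min_max_normalize(rows):
    """Column-wise min-max scaling of difference-case features,
    as in formula (4): each of the M feature columns is mapped
    to [0, 1] over the N^2 difference cases."""
    mins = [min(col) for col in zip(*rows)]
    maxs = [max(col) for col in zip(*rows)]
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0
             for v, lo, hi in zip(row, mins, maxs)]
            for row in rows]

rows = [[2.0, 10.0], [4.0, 30.0], [3.0, 20.0]]  # 3 difference cases, M = 2
scaled = min_max_normalize(rows)
```

A constant column is mapped to 0.0 here to avoid division by zero; the patent does not specify this edge case.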
Optionally, constructing the case difference prediction model from the training set based on the deep Q-network algorithm comprises: setting the number of hidden layers L, the number of hidden-layer nodes n, the learning rate lr and the discount factor γ of the deep Q-network; defining the associated state space S, action space A and reward function r; and initializing the parameters. The state space S is the series of state descriptions given by the problem features of the furnace temperature difference cases, expressed as S = [ΔX_1, …, ΔX_{N²}]. The action space is a series of action values at interval l representing the magnitude of the difference-case solution, namely A = [Δy_min, Δy_min + l, …, Δy_max]. The reward function is described by formula (5):

r = −|Δy − a|   (5),

where Δy denotes the solution of the difference case and a denotes an action in the action space A.
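A minimal sketch of the discretized action space A and the reward of formula (5); the interval and bounds below are hypothetical values, not the plant's.

```python
def make_action_space(dy_min, dy_max, step):
    """Discretize the difference solution into
    A = [dy_min, dy_min + l, ..., dy_max] with interval l = step."""
    n = int(round((dy_max - dy_min) / step)) + 1
    return [dy_min + k * step for k in range(n)]

def reward(dy, action):
    """Formula (5): r = -|dy - a|; the reward peaks at zero when
    the chosen action equals the true difference solution dy."""
    return -abs(dy - action)

A = make_action_space(-10.0, 10.0, 5.0)
best = max(A, key=lambda a: reward(3.0, a))  # action closest to dy = 3.0
```

Because the reward is the negative distance to the true difference solution, maximizing reward over A selects the grid point nearest Δy, which is what turns the regression problem into an action-selection problem.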
Optionally, constructing the case difference prediction model from the training set based on the deep Q-network algorithm further comprises: constructing a Q network and a target Q network, and correspondingly defining the Q function Q(s, a|θ) of the Q network, described by formula (6):

wherein θ denotes the network weights, s a state in the state space S, and a an action in the action space A; the Q function of the target Q network is denoted Q′(s, a|θ′), with θ′ the weights of the target network. Action a is selected according to an ε-greedy strategy, described by formula (7):

a = argmax_{a∈A} Q(s, a|θ) with probability 1 − ε; a random action from A with probability ε   (7).

The training set is taken as input to the Q(s, a|θ) network, and a loss function L(θ) over the reward values is set from the rewards obtained during training of the Q network and the target Q network; the Q-network parameters θ are then updated continuously according to the loss function L(θ) until it meets the set condition, yielding the case difference prediction model of the furnace temperature.
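The ε-greedy selection of formula (7) can be sketched as follows; the scalar Q function used for the demonstration is hypothetical.

```python
import random

def epsilon_greedy(q, s, actions, eps, rng=random):
    """Formula (7): with probability 1 - eps take the greedy action
    argmax_a Q(s, a | theta); otherwise explore a random action."""
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=lambda a: q(s, a))

# hypothetical Q function peaking where the action matches the state
q = lambda s, a: -(s - a) ** 2
greedy_action = epsilon_greedy(q, 2.0, [0.0, 1.0, 2.0, 3.0], eps=0.0)
```

With eps = 0.0 the call is fully greedy; in training, ε is usually kept positive (often decayed over time) so the agent keeps exploring the action space.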
Optionally, taking the training set as input to the Q(s, a|θ) network and setting the loss function L(θ) from the rewards obtained during training of the Q network and the target Q network comprises: letting the current state be s_t; taking action a_t, receiving reward r_t and moving to the next state s_{t+1}; storing (s_t, a_t, r_t, s_{t+1}) in the experience pool B; sampling (s_i, a_i, r_i, s_{i+1}) from the experience pool B; and computing the target value t_i from the target network's Q′(s, a|θ′), described by formula (8):

t_i = r_i + γ max_{a′} Q′(s_{i+1}, a′|θ′)   (8),

wherein the experience pool B stores the current state s_t, action a_t, next state s_{t+1} and reward r_t, and a′ denotes an action value selected by the target network from the action space A; the loss function L(θ) is then obtained by continuous learning and training on the training set with the constructed Q network and target Q network.
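The target value of formula (8) reduces to one line of Python; the scalar target network Q′ below is a hypothetical stand-in for the trained network.

```python
def td_target(r_i, s_next, q_target, actions, gamma):
    """Formula (8): t_i = r_i + gamma * max_a' Q'(s_{i+1}, a' | theta'),
    computed from a transition sampled from the experience pool B."""
    return r_i + gamma * max(q_target(s_next, a) for a in actions)

# hypothetical target network on scalar states
q_prime = lambda s, a: -(s - a) ** 2
t_i = td_target(1.0, 1.0, q_prime, [0.0, 1.0, 2.0], gamma=0.9)
```

Using a separate, periodically synchronized target network to compute t_i is the standard DQN device for stabilizing training, since the bootstrap target would otherwise chase the network being updated.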
Optionally, continuously updating the Q-network parameters θ according to the loss function L(θ) until the loss function meets a set condition comprises: updating the Q network according to the Bellman equation to obtain the loss function L(θ), described by formula (9):

L_i(θ_i) = E_{(s,a,r,s′)}[(t_i − Q(s_i, a_i|θ_i))²]
         = E_{(s,a,r,s′)}[(r_i + γ max_{a′} Q′(s_{i+1}, a′|θ′) − Q(s_i, a_i|θ_i))²]   (9).
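Formula (9) is a mean squared TD error; a sketch over a sampled batch, with hypothetical constant networks standing in for the Q and target networks.

```python
def dqn_loss(batch, q, q_target, actions, gamma):
    """Formula (9): squared difference between the target value t_i
    and the Q network's estimate Q(s_i, a_i | theta), averaged over
    a batch of (s, a, r, s') transitions from the experience pool."""
    total = 0.0
    for s, a, r, s_next in batch:
        t = r + gamma * max(q_target(s_next, ap) for ap in actions)
        total += (t - q(s, a)) ** 2
    return total / len(batch)

# hypothetical networks that always output zero
zero_net = lambda s, a: 0.0
loss = dqn_loss([(0.0, 0.0, 1.0, 0.0)], zero_net, zero_net,
                [0.0, 1.0], gamma=0.9)
```

In practice this loss is minimized by gradient descent on θ with learning rate lr, while θ′ is copied from θ only every fixed number of steps.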
Optionally, obtaining case difference data of the furnace temperature from the characteristic variables of the current furnace temperature, using it as input to the case difference prediction model, and deriving the predicted furnace temperature at the next moment from the model output, comprises: performing a similarity calculation between the characteristic variables X_t of the current furnace temperature and the historical data in the furnace temperature case library C, described by formula (10):

wherein X_j denotes the problem features of a case in the case library and M is the number of characteristic variables; retrieving by similarity the most similar case c_sim = (X_sim, y_sim) and forming the furnace temperature case difference data ΔX_{t,sim}; taking ΔX_{t,sim} as input to the constructed case difference prediction model to obtain the output action value a = Q(ΔX_{t,sim}) and selecting the corresponding action value from the action space as the case difference solution Δy_{t,sim} = A[a]; and computing the furnace temperature prediction y from the obtained case difference solution Δy_{t,sim}, described by formula (11):

y = y_sim + Δy_{t,sim}   (11).
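The retrieve-then-correct prediction of this step can be sketched as below. The patent's exact similarity measure (formula (10)) is not reproduced in the text, so a Euclidean-distance retrieval is assumed here for illustration, and the difference model is a hypothetical stand-in for the trained DQN.

```python
import math

def most_similar(case_library, X_t):
    """Retrieve the case whose problem features lie closest to the
    current features X_t (Euclidean distance assumed in place of
    the patent's formula (10))."""
    return min(case_library, key=lambda c: math.dist(X_t, c[0]))

def predict_temperature(case_library, X_t, difference_model):
    """Formula (11): y = y_sim + dy_{t,sim}, where the difference
    model maps dX = X_t - X_sim to a difference solution."""
    X_sim, y_sim = most_similar(case_library, X_t)
    dX = [a - b for a, b in zip(X_t, X_sim)]
    return y_sim + difference_model(dX)

library = [([1.0, 1.0], 850.0), ([5.0, 5.0], 900.0)]
model = lambda dX: 10.0 * dX[0]  # hypothetical difference model
y_hat = predict_temperature(library, [1.2, 1.0], model)
```

The design point is that the model never predicts an absolute temperature: it predicts only the correction to the retrieved similar case, so the historical case carries most of the signal.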
In a second aspect, the application provides a furnace temperature prediction system for the solid waste incineration process, the system comprising: an acquisition module for acquiring multiple groups of characteristic variables affecting the furnace temperature, corresponding current furnace temperature values and corresponding historical furnace temperature values at the next moment to form case descriptions, and constructing the difference database D in case-description order; a data preprocessing module for preprocessing the data in the difference database D to obtain a training set; a prediction model building module for building a case difference prediction model based on the deep Q-network algorithm; a training module for training the case difference prediction model with the training set to obtain the case difference prediction model of the furnace temperature; and a result prediction module for obtaining case difference data of the furnace temperature from the characteristic variables of the current furnace temperature, using it as input to the case difference prediction model, and deriving the predicted furnace temperature at the next moment from the model output.
In a third aspect, the present application also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the above method are implemented when the processor executes the computer program.
In a fourth aspect, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method described above.
The application has at least the following advantages:
According to the technical content provided by the embodiments of the application, case descriptions are formed from the characteristic variables collected during incineration at a solid waste incineration power plant, the corresponding current furnace temperature values and the corresponding historical furnace temperature values at the next moment, and the difference database D is constructed in case-description order to obtain the training set. The adaptation step of a case-based reasoning prediction model is improved with the deep Q-network algorithm: with the training set as input, the prediction model is obtained by continuous learning and training. Studying furnace temperature change through case difference data captures the influence of each parameter on the furnace temperature while also exploiting the reference information that historical data provide about its change. The resulting furnace temperature prediction model can accurately predict the furnace temperature trend of the municipal solid waste incineration process, supports optimal control of the incineration process, lays a foundation for judging furnace temperature changes in time and optimally controlling the operation, and improves working efficiency.
Drawings
FIG. 1 is a diagram of an application environment of a furnace temperature prediction method for the solid waste incineration process in one embodiment;
FIG. 2 is a schematic flow chart of a furnace temperature prediction method for the solid waste incineration process in one embodiment;
FIG. 3 is a schematic structural diagram of a furnace temperature prediction model for the solid waste incineration process in one embodiment;
FIG. 4 is a flow diagram illustrating the construction of a variance database in one embodiment;
FIG. 5 is a flow diagram illustrating the construction of a case-difference prediction model in one embodiment;
FIG. 6 is a flow diagram illustrating a set-up loss function in one embodiment;
FIG. 7 is a flow chart showing a process for obtaining a furnace temperature prediction value based on a case difference prediction model in one embodiment;
FIG. 8 is a line chart comparing furnace temperature prediction results for the solid waste incineration process in one embodiment;
FIG. 9 is a block diagram of a furnace temperature prediction system showing a solid waste incineration process in one embodiment;
fig. 10 is a schematic structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments of the application. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise; it should further be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of features, steps, operations, devices, components and/or combinations thereof.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the application clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand that, although numerous specific details are set forth in the various embodiments to provide a thorough understanding of the application, the claimed technical solution can be realized even without these details and with various changes and modifications based on the following embodiments. The embodiments are divided for convenience of description only, should not be construed as limiting the specific implementation of the application, and can be combined with and cite one another where no contradiction arises.
For ease of understanding, a system to which the application is applicable is first described. The furnace temperature prediction method for the solid waste incineration process provided by the application can be applied to the system architecture shown in fig. 1. The system comprises a server 103 and a terminal device 101, the terminal device 101 communicating with the server 103 via a network. The terminal device 101 may be, but is not limited to, a personal computer, notebook computer, smartphone or tablet computer, and the server 103 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
Fig. 2 is a schematic flow chart of the furnace temperature prediction method for the solid waste incineration process according to an embodiment of the application, and fig. 3 is a schematic structural diagram of the furnace temperature prediction model according to an embodiment of the application; the method may be executed by the server in the system shown in fig. 1. As shown in fig. 2 and 3, the method may include the following steps:
S201, acquiring a plurality of groups of characteristic variables affecting the furnace temperature, corresponding current furnace temperature values and corresponding historical data of the furnace temperature values at the next moment to form case description, and constructing a difference database D according to the case description sequence;
s202, preprocessing data based on a difference database D to obtain a training set;
S203, constructing a case difference prediction model from the training set based on the deep Q-network algorithm;
S204, obtaining case difference data of the furnace temperature from the characteristic variables of the current furnace temperature, using it as input to the case difference prediction model, and deriving the predicted furnace temperature at the next moment from the model output.
Each step is specifically described in detail below:
s201, acquiring a plurality of groups of characteristic variables affecting the furnace temperature, corresponding current furnace temperature values and corresponding historical data of the furnace temperature values at the next moment to form case description, and constructing a difference database D according to the case description sequence;
In this embodiment, it should be noted that case descriptions are formed by acquiring multiple groups of characteristic variables affecting the furnace temperature together with the corresponding current furnace temperature values and the corresponding historical furnace temperature values at the next moment; pairs of case descriptions are then compared in the order in which they were formed to obtain case difference descriptions, yielding the difference database D. This provides the data basis for the prediction model: by collecting the characteristic variables and corresponding furnace temperature values, the difference law between historical and current furnace temperature data can be learned, facilitating predictive analysis from the collected historical data. Specifically, the training set contains 1000 groups of samples and the test set 200 groups, with a 10 s interval between groups; the training set is used to train the constructed prediction model, and the test set is used to check whether the final prediction model predicts accurately, i.e. to measure its accuracy.
S202, preprocessing data based on a difference database D to obtain a training set;
In this embodiment, it should be noted that the constructed difference database D is preprocessed so that all historical data are brought to a unified scale and can be analyzed directly afterwards.
S203, constructing a case difference prediction model according to a training set based on a depth Q network algorithm;
In this embodiment, it should be noted that the deep Q-network algorithm, DQN for short, is a Q-learning algorithm implemented with deep learning. Q-learning is a value-based reinforcement learning method that guides an agent's decisions by learning an action-value function Q(state, action). Unlike conventional Q-learning, a deep Q-network does not require the state-action value function to be defined in advance; instead, a neural network automatically learns a function approximator representing it, and during training the parameters of the action-value function are updated with the Q-learning algorithm, so more complex control problems can be handled. For the historical data generated during incineration, the DQN function approximator is trained in this way to an optimal result, giving the final case difference prediction model of the furnace temperature and enabling accurate furnace temperature prediction.
S204, obtaining case difference data of the furnace temperature according to the characteristic variable of the current furnace temperature, and obtaining a predicted value of the furnace temperature at the next moment according to the output data of the case difference prediction model as input data of the case difference prediction model;
In this embodiment, it should be noted that the difference data from the database D constructed from the collected historical data form the training set, which is fed to the prediction model as input; the model is trained to produce an output value. Based on the learned difference law between historical and current furnace temperature data, furnace temperature prediction for the municipal solid waste incineration process is realized by combining the output value with the similar case obtained by retrieval, so that the furnace temperature trend can be predicted accurately.
Referring to fig. 4, in some embodiments, in S201, a plurality of sets of characteristic variables affecting the furnace temperature and corresponding current furnace temperature values and corresponding history data of the furnace temperature values at the next moment are acquired to form a case description, and a difference database is constructed according to the order of the case description, including:
s2011, acquiring multiple sets of characteristic variables affecting the furnace temperature, such as grate speed, grate temperature, primary air flow, secondary air flow and fan pressure, together with the corresponding current furnace temperature value, as the problem description feature X of a case c; taking the furnace temperature value y at the next moment as the solution description; forming the case description from both; and constructing a furnace temperature case library C from N such cases, shown by formula (1):
S2012, sequentially selecting a pair of cases c_i, c_j from the case library in case-description order to obtain a case difference description, shown by formula (2):
e = Δ(c_i, c_j) = (ΔX_ij, Δy_ij) (2);
s2013, constructing a case difference database D based on the obtained case difference descriptions, the number of case differences being N², described by formula (3):
in this embodiment, it should also be noted that multiple sets of grate speed, grate temperature, primary air flow, secondary air flow, fan pressure, etc. are collected; specifically, as shown in table 1, there are 65 characteristic variables in total, together with the corresponding current furnace temperature values. The acquired characteristic variables and the current furnace temperature values from the historical data serve as the problem description feature X of case c, the furnace temperature value y at the next moment serves as the solution description, and together they form the case description; N such cases build the furnace temperature case library C, providing the data basis for obtaining the case difference descriptions.
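As a minimal sketch of steps S2011-S2013 under toy data (the patent uses 65 process variables; the sizes, random values and variable names below are illustrative assumptions only), the case library of formula (1) and the pairwise difference database of formulas (2) and (3) might be built as follows:

```python
# Toy construction of the furnace temperature case library C and the
# difference database D. Each case pairs a feature vector X (process
# variables plus current furnace temperature) with the next-moment
# furnace temperature y; all ordered pairs of cases give N^2 differences.
import numpy as np

rng = np.random.default_rng(0)
N, n_features = 5, 3                           # toy sizes (patent: 65 features)
X = rng.normal(size=(N, n_features))           # problem description features
y = rng.normal(loc=950.0, scale=10.0, size=N)  # next-moment temperatures (toy)

# Case library C, formula (1): each case c_i = (X_i, y_i)
cases = list(zip(X, y))

# Difference database D, formulas (2)-(3):
# e = Delta(c_i, c_j) = (X_i - X_j, y_i - y_j) over all ordered pairs
D = [(X[i] - X[j], y[i] - y[j]) for i in range(N) for j in range(N)]
```

Note that taking all ordered pairs (including i = j) yields exactly N² difference cases, matching the count stated for formula (3).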
Table 1 is a data table for collecting historical data features;
in this embodiment, it should be further noted that the above 65 characteristic variables are all process variables of the solid waste incineration process and, together with the corresponding current furnace temperature values, are necessary for predicting the furnace temperature. The case difference descriptions are obtained by collecting the historical data of these 65 characteristic variables and the corresponding current furnace temperature values, which facilitates training of the prediction model.
Referring to fig. 2, in some embodiments, in S202, data preprocessing is performed based on the difference database D to obtain a training set, including:
the difference-case problem features in the case difference database D are min-max normalized, described by formula (4):
wherein ΔX_{k,m} denotes the m-th characteristic variable of the k-th difference case, k = 1, 2, …, N², with N² the number of case differences, and m = 1, 2, …, M, with M the number of difference-case features; here M = 66.
In this embodiment, it should be noted that the normalization removes the influence of differing dimensions between variables; all characteristic variables in the training set D are normalized.
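A hedged sketch of the min-max normalization of formula (4), on made-up data rather than the patent's 66-feature training set, could look like this:

```python
# Min-max normalization: each feature column m is rescaled to [0, 1] via
# (x - min_m) / (max_m - min_m), removing dimensional differences between
# variables. A constant column is left at 0 to avoid division by zero.
import numpy as np

def min_max_normalize(D):
    D = np.asarray(D, dtype=float)
    col_min = D.min(axis=0)
    col_max = D.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    return (D - col_min) / span

D = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [2.0, 20.0]])
D_norm = min_max_normalize(D)
```

After normalization every column spans [0, 1], so no single variable dominates the similarity or training computations purely because of its units.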
Referring to fig. 2 and 5, in some embodiments, in S203, constructing a case difference prediction model according to a training set based on a depth Q network algorithm includes:
s2031, based on the deep Q network, setting the number of hidden layers L, the number of hidden-layer nodes n, the learning rate lr and the discount factor γ; defining the associated state space S, action space A and reward function r; and initializing the parameters; wherein,
the state space S is a series of state descriptions, here the problem features of the furnace temperature difference cases, expressed as S = [ΔX_1, …, ΔX_{N²}]; the action space is a series of action values with spacing l, representing the solution magnitudes of the furnace temperature difference cases, namely A = [Δy_min, Δy_min + l, …, Δy_max];
The reward function is described by formula (5):
r = -|Δy - a| (5),
where Δy denotes the solution of the difference case and a denotes an action in the action space A.
In this embodiment, it should be noted that the deep Q network (DQN) algorithm is a value-function-based deep reinforcement learning algorithm. Constructing the prediction model on DQN combines deep learning with reinforcement learning and introduces three core techniques. First, a convolutional neural network with fully connected layers serves as the approximator of the action-value function, achieving end-to-end learning; in the original DQN the input is a video frame and the output is a finite set of action values. Second, a separate target network handles the TD error, keeping the target values relatively stable; the TD (Temporal Difference) error is an error computed from time differences and is commonly used in reinforcement learning. Third, an experience replay mechanism effectively resolves the correlation and non-stationarity among the data, so that the information input to the network approximately satisfies the independent-and-identically-distributed condition.
Reinforcement learning has four elements: state, action, policy and reward. To construct the prediction model on the DQN structure, the parameters are first configured: the number of hidden layers of the deep Q network is set to L = 3, the number of hidden-layer nodes to n = 64, the learning rate to lr = 0.001 and the discount factor to γ = 0.95; the associated state space S and action space A are defined, and the reward function r is given by formula (5), r = -|Δy - a|. The parameters are then initialized, configuring the settings of the algorithms used in the prediction model for subsequent computation. The state space S is a series of state descriptions, here the problem features of the furnace temperature difference cases, S = [ΔX_1, …, ΔX_{N²}]; the action space is a series of action values with spacing l = 0.5, representing the solution magnitudes of the furnace temperature difference cases, A = [Δy_min, Δy_min + 0.5, …, Δy_max]; Δy denotes the solution of a difference case and a an action in A. The prediction model is constructed from the DQN structure in combination with the specific furnace temperature prediction environment.
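The discretized action space and the reward of formula (5) above can be sketched as follows; the bounds Δy_min = -5 and Δy_max = 5 are assumed purely for illustration, only the spacing l = 0.5 comes from the embodiment:

```python
# Action space A = [dy_min, dy_min + l, ..., dy_max] with spacing l = 0.5,
# and reward r = -|dy - a| (formula (5)), which penalizes the gap between
# the chosen action and the true difference solution.
import numpy as np

l = 0.5                      # action spacing from the embodiment
dy_min, dy_max = -5.0, 5.0   # assumed bounds of the difference solution
A = np.arange(dy_min, dy_max + l, l)

def reward(dy_true, a):
    """Formula (5): r = -|dy - a|."""
    return -abs(dy_true - a)

# The reward is maximal (closest to 0) at the grid value nearest dy_true
best = A[np.argmax([reward(1.2, a) for a in A])]
```

Since the reward is the negative absolute error, maximizing it drives the agent toward the grid action closest to the true temperature difference; here the nearest grid point to 1.2 is 1.0.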
Referring to fig. 2 and 5, in some embodiments, in S203, based on the depth Q network algorithm, a case difference prediction model is constructed according to the training set, and further including:
S2032, constructing a Q network and a target Q network, and correspondingly defining the Q function Q(s, a|θ) of the Q network, described by formula (6):
wherein θ denotes the network weights, s a state in the state space S and a an action in the action space A; the Q function of the target Q network is denoted Q′(s, a|θ′), with θ′ the target network weights;
s2033, selecting the action a according to the epsilon-greedy strategy, and describing by a formula (7):
s2034, taking the training set as input data of the Q(s, a|θ) network, and setting a loss function L(θ) on the reward values obtained from training the Q network against the target Q network;
and S2035, continuously updating the Q network parameters θ according to the loss function L(θ) until the loss function satisfies the set condition, obtaining the case difference prediction model of the furnace temperature.
In the present embodiment, the deep Q network combines the neural network of deep learning with the Q-learning algorithm, which in reinforcement learning solves for the optimal action-value function. A Q network and a target Q network are constructed, and the Q function Q(s, a|θ) of the Q network is defined accordingly; Q(s, a|θ) is a nonlinear function approximation model with θ its network parameters. Initializing the deep Q network means building the neural network framework, i.e. the Q(s, a|θ) framework, and initializing the θ parameters. The training set is used as input data of the Q(s, a|θ) network; the reward values obtained from training against the target Q network define a loss function L(θ), and the parameters θ are updated continuously until the loss function satisfies the set condition, yielding the prediction model. The loss function is used to optimize the network: as the Q network is continuously updated, the loss is driven downward until a preset condition is reached and learning stops. The preset condition may be a preset number of training iterations, such as 1000, or a preset training error, such as 0.001.
In addition, the selection of the action a follows an ε-greedy strategy. The ε-greedy method is mainly used to balance exploration and exploitation: either spend effort exploring to estimate the returns more accurately, or choose the action with the highest expected return given the information currently available. In the ε-greedy algorithm, at each decision point the agent selects a random action with probability ε and the action currently considered best with probability 1 - ε, where ε is a number between 0 and 1 called the exploration rate. Concretely, a random number r between 0 and 1 is generated at each decision point; if r < ε, a random action is selected, otherwise (r >= ε) the action currently considered best is selected. Moreover, the exploration rate ε may decrease gradually over time so that the agent makes more use of its accumulated experience and knowledge. Here, the selection of action a uses an ε-greedy policy with the initial ε set to 0.5, after which the action selection of the Q network is updated to obtain the case difference solution Δy.
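A minimal ε-greedy selector matching the description above might be written as follows; the Q-values are a stand-in list, not the patent's trained network:

```python
# epsilon-greedy action selection: with probability epsilon pick a random
# action index (exploration), otherwise pick the argmax of the Q-values
# (exploitation). epsilon can be decayed over time by the caller.
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))          # explore
    return max(range(len(q_values)), key=lambda i: q_values[i])  # exploit

q = [0.1, 0.9, 0.3]                 # stand-in Q-values for three actions
greedy_action = epsilon_greedy(q, epsilon=0.0)   # epsilon = 0: always greedy
```

With ε = 0 the call is purely greedy and returns the index of the largest Q-value; with ε = 1 it is purely random, which is how the initial ε = 0.5 interpolates between the two regimes.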
In some embodiments, referring to fig. 3, 5 and 6, in S2034, taking the training set as input data of the Q(s, a|θ) network and setting a loss function L(θ) on the reward values obtained from training the Q network against the target Q network includes:
s20341, setting the current state to s_t; according to action a_t, the network receives reward r_t and transitions to the next state s_{t+1}; and storing (s_t, a_t, r_t, s_{t+1}) in the experience pool B;
s20342, sampling (s_i, a_i, r_i, s_{i+1}) from the experience pool B and computing the target value t_i from the target network's Q′(s, a|θ′), described by formula (8):
t_i = r_i + γ max Q′(s_{t+1}, a′|θ′) (8),
wherein the experience pool B stores the current state s_t, action a_t, next state s_{t+1} and reward r_t, and a′ denotes an action selected by the target network from the action space A;
s20343, continuously learning and training on the training set with the constructed Q network and target Q network to obtain the loss function L(θ).
In this embodiment, it should be noted that the obtained case difference data are used as the training set and input to the Q network; the TD error is handled separately by the set target network Q′(s, a|θ′) so that the target values remain relatively stable, and the training process is optimized with techniques such as experience replay and the target network. The TD (Temporal Difference) error is an error computed from time differences and is commonly used in reinforcement learning to measure the gap between predicted and target values. It is computed as TD error = (immediate reward + discount factor × estimated value of the next state) - estimated value of the current state, where the discount factor is a value between 0 and 1 that weighs the importance of future rewards. The TD error helps evaluate the model's predictive ability: computing it reveals the magnitude of the model's error, which is then used to optimize the model.
Specifically, the current state s_t is acquired to obtain Δy; according to action a_t the network receives reward r_t, i.e. r = -|Δy - a|, and transitions to the next state s_{t+1}; and (s_t, a_t, r_t, s_{t+1}) is stored in the experience pool B. A transition (s_i, a_i, r_i, s_{i+1}) is sampled from B and the target value t_i is computed with the target network's Q′(s, a|θ′), i.e. t_i = r_i + γ max Q′(s_{t+1}, a′|θ′). The experience pool B is the element of DQN reinforcement learning that implements the experience replay technique; it stores each continuously updated tuple, i.e. current state, action, next state, reward, etc., and a′ denotes an action selected by the target network from the action space. With the training set as input data of the Q(s, a|θ) network and the experience replay mechanism storing the data (s_t, a_t, r_t, s_{t+1}) in B, the target value t_i is obtained from the reward values of continuous training of the Q network against the target Q network, the loss function L(θ) is derived, and the parameters of the Q function and target Q function are continually updated to maximize the overall return.
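The replay and target-value steps S20341-S20342 can be sketched as below; the target network Q′ is a toy placeholder function, and the stored transitions are synthetic, not the patent's incineration data:

```python
# Experience replay sketch: transitions (s_t, a_t, r_t, s_{t+1}) are stored
# in the pool B, a minibatch is sampled, and formula (8) gives the target
# t_i = r_i + gamma * max_a' Q'(s_{i+1}, a').
import random
from collections import deque

gamma = 0.95
B = deque(maxlen=10000)        # experience pool with bounded capacity

def q_target(state):
    """Placeholder target network: Q'-values for two actions (toy)."""
    return [0.5 * state, 1.0 * state]

# store a few synthetic transitions (s_t, a_t, r_t, s_{t+1})
for t in range(5):
    B.append((float(t), 0, -1.0, float(t + 1)))

batch = random.sample(list(B), 3)
targets = [r + gamma * max(q_target(s_next))
           for (_, _, r, s_next) in batch]
```

Sampling uniformly from the pool breaks the temporal correlation between consecutive transitions, which is exactly the independent-and-identically-distributed property the experience replay mechanism is meant to approximate.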
In some embodiments, referring to fig. 3 and 5, in S2035, continuously updating the Q network parameters θ according to the loss function L(θ) until the loss function satisfies the set condition includes: updating the Q network according to the Bellman equation to obtain the loss function L(θ), described by formula (9):
L_i(θ_i) = E_{(s,a,r,s′)}[(t_i - Q(s_i, a_i|θ_i))²]
= E_{(s,a,r,s′)}[(r_i + γ max Q′(s_{t+1}, a′|θ′) - Q(s_i, a_i|θ_i))²] (9).
In the present embodiment, it should be noted that the Bellman equation is a functional equation for an objective function; constructing a system of functional equations using the optimality principle and the embedding principle is called the functional-equation method, and in practical applications a particular solution is sought for the specific problem. The significance of the Bellman equation is that the value function of the current state can be computed from the value function of the next state, and the state-action value functions satisfy an analogous relation. The purpose of computing the state value functions is to construct a learning algorithm that obtains optimal policies from data: each policy corresponds to a state value function, and the optimal policy corresponds to the optimal state value function that maximizes the overall return.
The Q network is updated with the Bellman equation so that the current furnace temperature change value is determined from the furnace temperature change value of the next state; the constructed Q network is then continuously updated, each policy corresponding to a state value function, until the loss function L(θ) satisfies the preset condition. When the preset condition is that the number of training iterations reaches 1000, the optimal policy under that condition corresponds to the optimal state value function.
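A single-sample toy illustration of the squared-TD-error loss in formula (9) follows; a linear function of the state stands in for the neural Q network, and all numbers are made up:

```python
# One-sample version of L(theta) = E[(t_i - Q(s_i, a_i | theta))^2]:
# the target t_i uses the (frozen) target network, here the same linear
# stand-in, and the loss is the squared gap to the current Q estimate.
gamma = 0.95
theta = 0.5                      # stand-in for the network weights

def q(state, theta):
    """Toy Q(s, a|theta): linear in the state, action-independent."""
    return theta * state

s, r, s_next = 2.0, -1.0, 3.0    # one synthetic transition
target = r + gamma * q(s_next, theta)   # t_i = r_i + gamma * max Q'(s', a')
loss = (target - q(s, theta)) ** 2      # squared TD error for this sample
```

Gradient descent on this quantity with respect to θ (holding the target network's copy of θ fixed) is what "continuously updating the Q network parameters θ according to the loss function" amounts to in practice.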
In some embodiments, referring to fig. 2 and 7, in S204, case difference data of a furnace temperature is obtained according to a feature variable of a current furnace temperature, and is used as input data of a case difference prediction model, and a predicted value of the furnace temperature at a next moment is obtained according to output data of the case difference prediction model, including:
S2041, performing a similarity measure calculation between the characteristic variables X_t of the current furnace temperature and the historical data in the furnace temperature case library C, described by formula (10):
wherein X_j denotes the problem features of a case in the case library and M is the number of characteristic variables;
s2042, retrieving the most similar case c_sim = (X_sim, y_sim) according to the similarity, forming the furnace temperature case difference data ΔX_{t,sim};
S2043, deltaX t,sim As input, an output operation value a=q (Δx) is obtained from the constructed predicted furnace temperature change model t,sim ) The value and the corresponding action value is selected as a case difference solution, namely delta y t,sim =A[a];
S2044, obtaining the predicted furnace temperature y from the obtained case difference solution Δy_{t,sim}, described by formula (11):
y = y_{t,sim} + Δy_{t,sim} (11).
in the present embodiment, it should be noted that a similarity measure is computed between the characteristic variables X_t of the current furnace temperature in the test set and the historical data in the furnace temperature case library C, specifically via the distance formula (10), to find the case c_sim = (X_sim, y_sim) in library C whose characteristic variables and corresponding furnace temperature value are closest, yielding the furnace temperature case difference data ΔX_{t,sim}. A distance measure is used here because the more similar two cases are, the smaller their distance; performing the similarity measure over all cases produces a sequence, the results d are sorted by similarity, and the case with the smallest d, i.e. the most similar case, is selected, giving the furnace temperature case difference data ΔX_{t,sim}.
The furnace temperature case difference data ΔX_{t,sim} are then used as input data of the furnace temperature prediction model to obtain the output action value a = Q(ΔX_{t,sim}); according to the action space, the corresponding action value is selected as the case difference solution Δy_{t,sim} = A[a]; and the predicted furnace temperature y is obtained from the case difference solution and the current furnace temperature value, i.e. the furnace temperature at the next moment is y = y_{t,sim} + Δy_{t,sim}, predicting the furnace temperature trend.
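The retrieve-then-adapt pipeline of S2041-S2044 can be sketched on toy data as below; the case library, query and the zero-valued stand-in for the trained difference model are all illustrative assumptions:

```python
# S2041: Euclidean distance between the current features x_t and every case
# in the library (formula (10)); S2042: the nearest case c_sim; S2043: a
# stand-in difference model maps dX to a difference solution; S2044:
# y = y_sim + dy (formula (11)).
import numpy as np

case_X = np.array([[1.0, 2.0],
                   [4.0, 4.0],
                   [0.0, 1.0]])            # toy case-library features
case_y = np.array([950.0, 960.0, 945.0])   # stored furnace temperatures

x_t = np.array([0.9, 2.1])                 # current furnace-temperature features
dists = np.linalg.norm(case_X - x_t, axis=1)
sim = int(np.argmin(dists))                # index of most similar case c_sim

dX = x_t - case_X[sim]                     # difference data dX_{t,sim}

def difference_model(dX):
    """Stand-in for dy = A[Q(dX)]; returns a fixed zero solution (toy)."""
    return 0.0

y_pred = case_y[sim] + difference_model(dX)   # formula (11)
```

In the patent's method the `difference_model` stand-in would be replaced by the trained DQN: its argmax action over the difference features indexes into the action space A to give Δy.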
In some embodiments, referring to fig. 2 and 7, obtaining the case difference data of the furnace temperature from the characteristic variables of the current furnace temperature, using them as input data of the case difference prediction model, and obtaining the predicted furnace temperature at the next moment from the output data of the case difference prediction model further includes: evaluating the model prediction accuracy of the furnace temperature change prediction model on the test set.
In this embodiment, it should be noted that steps S2041 to S2044 are repeated: a similarity measure is computed between the current furnace temperature characteristic variables of the test-set data and the historical data in the furnace temperature case library C to obtain the furnace temperature case difference data; these are used as input data, the output data, i.e. the case difference solution, is obtained from the prediction model, and the predicted furnace temperature y, i.e. the furnace temperature at the next moment, is obtained from the case difference solution and the current furnace temperature value. The obtained predicted values y are compared with the true values of the actual test set using the root mean square error (RMSE) and the mean absolute percentage error (MAPE) to evaluate the predictive performance of the overall model; the smaller the RMSE and MAPE, the higher the prediction accuracy.
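The two evaluation metrics named above can be sketched as follows, on made-up temperature values rather than the 200-group test set:

```python
# RMSE and MAPE as model-evaluation metrics: smaller values of either
# indicate higher prediction accuracy of the furnace temperature model.
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

y_true = [950.0, 960.0, 955.0]   # toy true furnace temperatures (deg C)
y_pred = [951.0, 958.0, 955.0]   # toy predicted values
err_rmse, err_mape = rmse(y_true, y_pred), mape(y_true, y_pred)
```

Note that MAPE is expressed as a percentage of the true values, so it stays small for furnace temperatures in the hundreds of degrees even when the absolute error is a degree or two, whereas RMSE is in the same units as the temperature itself.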
Specifically, in one example, as shown in fig. 8, 200 groups of collected incineration-process historical data are used as the test set, and the mean absolute error and root mean square error between the obtained predicted values and true values are computed as 1.791 (°C) and 2.325 (°C) respectively, which meets the target error. This shows that the method achieves accurate estimation of the furnace temperature in the municipal solid waste incineration process, and that the furnace temperature change prediction model, built on a case-based reasoning prediction method improved by the deep Q network, accurately predicts the furnace temperature trend, providing a guarantee for optimal control of the incineration process, enabling on-site operators to grasp the in-furnace solid waste incineration conditions in time and improving working efficiency.
The implementation principle of this embodiment is as follows. The above steps mainly collect the characteristic variables generated in the incineration process of a solid waste incineration power plant, the corresponding current furnace temperature values and the corresponding historical furnace temperature values at the next moment to form case descriptions, and construct the difference database D in case-description order to obtain the training set. The adaptation step of the case-based reasoning prediction model is improved with the deep Q network algorithm: the training set is used as input data and the prediction model is obtained by continuous learning and training. Studying the furnace temperature change from the case difference data captures the influence of each parameter on the furnace temperature while also exploiting the reference information of the historical data on furnace temperature changes. The resulting furnace temperature prediction model accurately predicts the furnace temperature trend of the municipal solid waste incineration process, providing a guarantee for optimal control of the incineration process, laying the foundation for timely judgment of furnace temperature changes and optimal control of the operation, and improving working efficiency.
Referring to fig. 9, an embodiment of the present application provides a system for furnace temperature prediction in the solid waste incineration process, which may include: an acquisition module, a data preprocessing module, a prediction model construction module and a result prediction module. The main functions of each module are as follows:
the acquisition module 301 is configured to acquire a plurality of sets of characteristic variables affecting the furnace temperature, corresponding current furnace temperature values and corresponding historical data of the furnace temperature values at the next moment to form a case description, and construct a difference database D according to the sequence of the case description;
the data preprocessing module 302 is configured to perform data preprocessing based on the difference database D to obtain a training set;
the prediction model constructing module 303 is configured to construct a case difference prediction model according to the training set based on the deep Q network algorithm;
the result prediction module 304 is configured to obtain case difference data of the furnace temperature according to the feature variable of the current furnace temperature, and obtain a predicted value of the furnace temperature at the next time according to the output data of the case difference prediction model as input data of the case difference prediction model.
According to an embodiment of the present application, the present application also provides a computer device, a computer-readable storage medium.
As shown in fig. 10, is a block diagram of a computer device according to an embodiment of the present application. Computer equipment is intended to represent various forms of digital computers or mobile devices. Wherein the digital computer may comprise a desktop computer, a portable computer, a workstation, a personal digital assistant, a server, a mainframe computer, and other suitable computers. The mobile device may include a tablet, a smart phone, a wearable device, etc.
As shown in fig. 10, the apparatus 600 includes a computing unit 601, a ROM 602, a RAM 603, a bus 604, and an input/output (I/O) interface 605, and the computing unit 601, the ROM 602, and the RAM 603 are connected to each other through the bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The computing unit 601 may perform various processes in the method embodiments of the present application according to computer instructions stored in a Read Only Memory (ROM) 602 or computer instructions loaded from a storage unit 608 into a Random Access Memory (RAM) 603. The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. The computing unit 601 may include, but is not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), as well as any suitable processor, controller, microcontroller, etc. In some embodiments, the methods provided by embodiments of the present application may be implemented as a computer software program tangibly embodied on a computer-readable storage medium, such as storage unit 608.
The RAM 603 may also store various programs and data required for operation of the device 600. Part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609.
An input unit 606, an output unit 607, a storage unit 608 and a communication unit 609 in the device 600 may be connected to the I/O interface 605. The input unit 606 may be, for example, a keyboard, a mouse, a touch screen, a microphone, etc.; the output unit 607 may be, for example, a display, a speaker, an indicator light, etc. The device 600 can exchange information, data, etc. with other devices through the communication unit 609.
It should be noted that the device may also include other components necessary to achieve proper operation. It is also possible to include only the components necessary to implement the inventive arrangements, and not necessarily all the components shown in the drawings.
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
Computer instructions for implementing the methods of the present application may be written in any combination of one or more programming languages. These computer instructions may be provided to a computing unit 601 such that the computer instructions, when executed by the computing unit 601, such as a processor, cause the steps involved in embodiments of the method of the present application to be performed.
The computer readable storage medium provided by the present application may be a tangible medium that may contain, or store, computer instructions for performing the steps involved in the method embodiments of the present application. The computer readable storage medium may include, but is not limited to, storage media in the form of electronic, magnetic, optical, electromagnetic, and the like.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (10)

1. The furnace temperature prediction method for the solid waste incineration process is characterized by comprising the following steps of: acquiring a plurality of groups of characteristic variables affecting the furnace temperature, corresponding current furnace temperature values and corresponding historical data of the furnace temperature values at the next moment to form case description, and constructing a difference database D according to the case description sequence;
Performing data preprocessing based on the difference database D to obtain a training set;
based on a depth Q network algorithm, constructing a case difference prediction model according to the training set;
obtaining case difference data of the furnace temperature according to characteristic variables of the current furnace temperature, using the case difference data as input data of the case difference prediction model, and obtaining a predicted value of the furnace temperature at the next moment according to output data of the case difference prediction model.
2. The method for predicting the furnace temperature in the solid waste incineration process according to claim 1, wherein the obtaining the plurality of sets of characteristic variables affecting the furnace temperature and the corresponding current furnace temperature values and the corresponding historical data of the furnace temperature values at the next time form case descriptions, and constructing a difference database D according to the case description sequence, comprises:
acquiring multiple sets of characteristic variables including, but not limited to: grate speed, grate temperature, primary air flow, secondary air flow and fan pressure, and taking these together with the corresponding current furnace temperature value as the problem description feature X of a case c; taking the furnace temperature value y at the next moment as the solution description; forming the case description from both; and constructing a furnace temperature case library C from N such cases, shown by formula (1):
sequentially selecting a pair of cases c_i, c_j from the furnace temperature case library C in case-description order to obtain a case difference description, shown by formula (2):
e = Δ(c_i, c_j) = (ΔX_ij, Δy_ij) (2);
constructing a case difference database D based on the obtained case difference descriptions, the number of case differences being N², described by formula (3):
3. the furnace temperature prediction method for solid waste incineration process according to claim 2, wherein the data preprocessing is performed based on the difference database D to obtain a training set, comprising:
subjecting the difference case problem features in the case difference database D to min-max normalization, described by formula (4):
ΔX′_k,m = (ΔX_k,m − min_k ΔX_k,m) / (max_k ΔX_k,m − min_k ΔX_k,m) (4),
wherein ΔX_k,m represents the m-th feature variable of the k-th difference case, k = 1, 2, …, N², N² representing the number of case differences, and m = 1, 2, …, M, M representing the number of difference case features, where M = 66.
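A minimal sketch of the min-max normalization of formula (4), assuming NumPy; the guard against constant feature columns is an implementation choice, not stated in the claim:

```python
import numpy as np

def min_max_normalize(dX):
    """Scale each difference-case feature column into [0, 1]."""
    mn = dX.min(axis=0)
    mx = dX.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)  # avoid division by zero on constant columns
    return (dX - mn) / span

# toy batch: second feature column is constant
d = np.array([[1.0, 5.0], [3.0, 5.0], [2.0, 5.0]])
print(min_max_normalize(d))
```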
4. The furnace temperature prediction method for the solid waste incineration process according to claim 1, wherein constructing the case difference prediction model from the training set based on the deep Q network algorithm comprises:
setting the number of hidden layers L, the number of hidden-layer nodes n, the learning rate lr and the discount factor γ of the deep Q network; defining the associated state space S, action space A and reward function r; and initializing the parameters; wherein
the state space S is a series of state descriptions given by the problem features of the furnace temperature difference cases, expressed as S = [ΔX_1, …, ΔX_N²]; the action space is a series of action values at interval l representing the magnitude of the solution of a furnace temperature difference case, namely A = [Δy_min, Δy_min + l, …, Δy_max];
the reward function is described by formula (5):
r = −|Δy − a| (5),
wherein Δy represents the solution of the difference case and a represents an action in the action space A.
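The discretized action space A and the reward r = −|Δy − a| can be sketched as follows; the numeric bounds and step are placeholder values, not taken from the patent:

```python
import numpy as np

def make_action_space(dy_min, dy_max, step):
    # A = [Δy_min, Δy_min + l, ..., Δy_max]: a discrete grid of
    # candidate furnace-temperature-difference values
    return np.arange(dy_min, dy_max + step / 2, step)

def reward(dy_true, action_value):
    # r = -|Δy - a|: the closer the chosen action is to the true
    # difference, the larger (less negative) the reward
    return -abs(dy_true - action_value)

A = make_action_space(-50.0, 50.0, 5.0)  # placeholder bounds and step l
print(len(A))            # 21 action values
print(reward(3.0, 5.0))  # -2.0
```

The grid interval l trades off prediction resolution against the size of the Q network's output layer.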
5. The furnace temperature prediction method for the solid waste incineration process according to claim 4, wherein constructing the case difference prediction model from the training set based on the deep Q network algorithm further comprises:
constructing a Q network and a target Q network, and correspondingly defining the Q function Q(s, a|θ) of the Q network, described by formula (6):
wherein θ represents the network weights, s represents a state in the state space S, and a represents an action in the action space A;
the Q function corresponding to the target Q network is denoted Q′(s, a|θ′), wherein θ′ represents the weights of the target network;
the selection of an action a is realized according to the ε-greedy strategy, described by formula (7): with probability 1 − ε the greedy action argmax_a Q(s, a|θ) is taken, and with probability ε a random action is taken from A;
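A minimal sketch of the ε-greedy selection over the Q values of one state (the standard form of the strategy; the patent's formula (7) image is not reproduced in this text):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Explore with probability ε, otherwise exploit the greedy action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))  # random action index (exploration)
    # argmax over Q values (exploitation)
    return max(range(len(q_values)), key=lambda i: q_values[i])

print(epsilon_greedy([0.1, 0.9, 0.3], 0.0))  # 1 (pure exploitation)
```

In training, ε is typically annealed from near 1 toward a small value so that exploration dominates early and exploitation later.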
taking the training set as the input data of the Q(s, a|θ) network, and setting a loss function L(θ) related to the reward values obtained during the training and learning of the Q network and the target Q network;
continuously updating the Q network parameters θ according to the loss function L(θ) until the loss function satisfies the set condition, thereby obtaining the case difference prediction model of the furnace temperature.
6. The furnace temperature prediction method for the solid waste incineration process according to claim 5, wherein taking the training set as the input data of the Q(s, a|θ) network and setting the loss function L(θ) related to the reward values corresponding to the training and learning of the Q network and the target Q network comprises: setting the current state to s_t; according to the action a_t, the network receives the reward r_t and proceeds to the next state s_t+1; and storing (s_t, a_t, r_t, s_t+1) in the experience pool B;
sampling (s_i, a_i, r_i, s_i+1) from the experience pool B, and calculating the target value t_i based on Q′(s, a|θ′) of the target network, described by formula (8):
t_i = r_i + γ max_a′ Q′(s_i+1, a′|θ′) (8),
wherein the experience pool B is used for storing the current state s_t, the action a_t, the next state s_t+1 and the reward r_t, and a′ represents an action selected by the target network from the action space A;
continuously learning and training on the training set with the constructed Q network and target Q network to obtain the loss function L(θ).
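The experience pool B and the target value of formula (8) can be sketched as below; the class and function names are illustrative assumptions:

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience pool B storing transitions (s_t, a_t, r_t, s_{t+1})."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # oldest transitions are evicted first

    def push(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(self.buf, batch_size)

def td_target(r, s_next, target_q, gamma):
    # t_i = r_i + γ · max_{a'} Q'(s_{i+1}, a' | θ')  — formula (8)
    return r + gamma * max(target_q(s_next))

B = ReplayBuffer(capacity=1000)
B.push((0.1,), 2, -1.0, (0.2,))
# dummy target network returning Q values over two actions
t = td_target(-1.0, (0.2,), lambda s: [0.5, 2.0], gamma=0.9)
print(t)  # -1.0 + 0.9 * 2.0 = 0.8
```

Sampling uniformly from the pool breaks the temporal correlation of consecutive transitions, which stabilizes Q-network training.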
7. The furnace temperature prediction method for the solid waste incineration process according to claim 6, wherein continuously updating the Q network parameters θ according to the loss function L(θ) until the loss function satisfies the set condition comprises:
updating the Q network according to the Bellman equation to obtain the loss function L(θ), described by formula (9):
L(θ) = E[(t_i − Q(s_i, a_i|θ))²] (9).
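A minimal sketch of the mean-squared TD loss over a sampled batch, with t_i as defined in formula (8); the function name is an illustrative assumption:

```python
def dqn_loss(q_values, targets):
    """Mean squared TD error over a batch:
    L(θ) = E[(t_i − Q(s_i, a_i|θ))²]."""
    return sum((t - q) ** 2 for q, t in zip(q_values, targets)) / len(targets)

print(dqn_loss([1.0, 2.0], [2.0, 4.0]))  # ((2-1)² + (4-2)²) / 2 = 2.5
```

Gradient descent on this loss updates θ, while θ′ of the target network is refreshed only periodically from θ.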
8. The furnace temperature prediction method for the solid waste incineration process according to claim 7, wherein obtaining the case difference data of the furnace temperature according to the characteristic variables of the current furnace temperature, using the case difference data as the input data of the case difference prediction model, and obtaining the predicted value of the furnace temperature at the next moment from the output data of the case difference prediction model, comprises:
performing a similarity measure calculation between the characteristic variables X_t of the current furnace temperature and the historical data in the furnace temperature case library C, described by formula (10):
wherein X_j represents the problem features of a case in the case library, and M is the number of characteristic variables;
retrieving the most similar case c_sim = (X_sim, y_sim) according to the similarity, and forming the case difference data ΔX_t,sim of the furnace temperature;
taking ΔX_t,sim as input, obtaining the output action value a = Q(ΔX_t,sim) through the constructed case difference prediction model, and selecting the corresponding action value from the action space as the case difference solution Δy_t,sim = A[a];
obtaining the predicted value y of the furnace temperature from the obtained case difference solution Δy_t,sim, described by formula (11):
y = y_sim + Δy_t,sim (11).
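The retrieval-plus-correction inference of claim 8 can be sketched as follows; inverse Euclidean distance stands in for the similarity measure, an assumption since formula (10) is not reproduced here, and the Q network is passed in as a callable:

```python
import numpy as np

def predict_temperature(X_t, case_X, case_y, q_net, action_space):
    """Retrieve the most similar historical case, then correct its
    furnace temperature with the Q network's predicted case difference."""
    dist = np.linalg.norm(case_X - X_t, axis=1)  # smaller distance = more similar
    j = int(np.argmin(dist))                     # most similar case c_sim
    dX = X_t - case_X[j]                         # case difference ΔX_{t,sim}
    a = int(np.argmax(q_net(dX)))                # greedy action index
    return case_y[j] + action_space[a]           # y = y_sim + Δy_{t,sim} = y_sim + A[a]

# toy usage with a dummy Q network that always prefers the middle action
case_X = np.array([[0.0, 0.0], [10.0, 10.0]])
case_y = np.array([800.0, 900.0])
A = np.array([-5.0, 0.0, 5.0])
y_hat = predict_temperature(np.array([1.0, 1.0]), case_X, case_y,
                            lambda dX: [0.0, 1.0, 0.0], A)
print(y_hat)  # 800.0 + 0.0 = 800.0
```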
9. A system for furnace temperature prediction in the solid waste incineration process, the system comprising: an acquisition module, used for acquiring a plurality of sets of characteristic variables affecting the furnace temperature, together with the corresponding historical data of the current furnace temperature values and the furnace temperature values at the next moment, to form case descriptions, and for constructing the difference database D from the case descriptions in sequence;
The data preprocessing module is used for preprocessing data based on the difference database D to obtain a training set;
the prediction model building module is used for building the case difference prediction model from the training set based on the deep Q network algorithm;
the result prediction module is used for obtaining the case difference data of the furnace temperature according to the characteristic variables of the current furnace temperature, using the case difference data as the input data of the case difference prediction model, and obtaining the predicted value of the furnace temperature at the next moment from the output data of the case difference prediction model.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 8.
CN202311032380.2A 2023-08-16 2023-08-16 Furnace temperature prediction method, system, equipment and medium for solid waste incineration process Pending CN116861256A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311032380.2A CN116861256A (en) 2023-08-16 2023-08-16 Furnace temperature prediction method, system, equipment and medium for solid waste incineration process

Publications (1)

Publication Number Publication Date
CN116861256A true CN116861256A (en) 2023-10-10

Family

ID=88219302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311032380.2A Pending CN116861256A (en) 2023-08-16 2023-08-16 Furnace temperature prediction method, system, equipment and medium for solid waste incineration process

Country Status (1)

Country Link
CN (1) CN116861256A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117806169A (en) * 2024-01-17 2024-04-02 北京工业大学 Furnace temperature early warning optimization method, system, terminal and medium based on neural network
CN117806169B (en) * 2024-01-17 2024-06-04 北京工业大学 Furnace temperature early warning optimization method, system, terminal and medium based on neural network

Similar Documents

Publication Publication Date Title
CN109902801B (en) Flood collective forecasting method based on variational reasoning Bayesian neural network
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN111814956B (en) Multi-task learning air quality prediction method based on multi-dimensional secondary feature extraction
Ayodeji et al. Causal augmented ConvNet: A temporal memory dilated convolution model for long-sequence time series prediction
CN113205233B (en) Lithium battery life prediction method based on wolf algorithm and multi-core support vector regression
CN116448419A (en) Zero sample bearing fault diagnosis method based on depth model high-dimensional parameter multi-target efficient optimization
CN112381673B (en) Park electricity utilization information analysis method and device based on digital twin
CN103413038A (en) Vector quantization based long-term intuitionistic fuzzy time series prediction method
CN116861256A (en) Furnace temperature prediction method, system, equipment and medium for solid waste incineration process
CN113325721A (en) Model-free adaptive control method and system for industrial system
CN116663419A (en) Sensorless equipment fault prediction method based on optimized Elman neural network
Kosana et al. Hybrid wind speed prediction framework using data pre-processing strategy based autoencoder network
CN115879369A (en) Coal mill fault early warning method based on optimized LightGBM algorithm
Osman et al. Soft Sensor Modeling of Key Effluent Parameters in Wastewater Treatment Process Based on SAE‐NN
Anh et al. Effect of gradient descent optimizers and dropout technique on deep learning LSTM performance in rainfall-runoff modeling
Guo et al. Data mining and application of ship impact spectrum acceleration based on PNN neural network
CN116522594A (en) Time self-adaptive transient stability prediction method and device based on convolutional neural network
Liu et al. Application of least square support vector machine based on particle swarm optimization to chaotic time series prediction
CN115619563A (en) Stock price analysis method based on neural network
CN115062528A (en) Prediction method for industrial process time sequence data
CN115310355A (en) Multi-energy coupling-considered multi-load prediction method and system for comprehensive energy system
Doudkin et al. Ensembles of neural network for telemetry multivariate time series forecasting
CN108960406B (en) MEMS gyroscope random error prediction method based on BFO wavelet neural network
CN114741952A (en) Short-term load prediction method based on long-term and short-term memory network
Li et al. The research on electric load forecasting based on nonlinear gray bernoulli model optimized by cosine operator and particle swarm optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination