CN114707684A - Improved LSTM-based raw tobacco stack internal temperature prediction algorithm - Google Patents

Improved LSTM-based raw tobacco stack internal temperature prediction algorithm

Info

Publication number
CN114707684A
CN114707684A (application CN202111534264.1A)
Authority
CN
China
Prior art keywords
data
lstm
temperature
algorithm
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111534264.1A
Other languages
Chinese (zh)
Inventor
徐跃明
陈斌
周继来
许仁杰
方海英
王磊
曾嵘
郭绍坤
杨磊
黄纳临
周鹏
杨文静
杨荣春
杨延鹏
李莉
周萍
柯宁
莫峥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hongyun Honghe Tobacco Group Co Ltd
Original Assignee
Hongyun Honghe Tobacco Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hongyun Honghe Tobacco Group Co Ltd filed Critical Hongyun Honghe Tobacco Group Co Ltd
Priority to CN202111534264.1A
Publication of CN114707684A
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a raw tobacco stack internal temperature prediction algorithm based on an improved LSTM, which belongs to the field of raw tobacco maintenance. Experimental results prove that the algorithm accurately predicts the internal temperature of a raw tobacco stack.

Description

Improved LSTM-based raw tobacco stack internal temperature prediction algorithm
Technical Field
The invention belongs to the field of raw tobacco maintenance, and particularly relates to a raw tobacco stack internal temperature prediction algorithm based on an improved LSTM.
Background
Lamina tobacco is currently the main storage form of raw tobacco. The temperature and humidity inside the stacks in which the lamina is stored directly affect its alcoholization (aging), and the alcoholization quality in turn affects the quality of the finished cigarettes. Monitoring the temperature and humidity environment inside a lamina stack during natural alcoholization is therefore necessary: it allows the relevant personnel to take timely measures to prevent the cured tobacco leaves from mildewing and to reduce losses. It is consequently necessary to predict the temperature inside the lamina stack over a future period of time.
Many researchers at home and abroad have studied this kind of data prediction. Precipitation nowcasting has been formulated as a spatiotemporal sequence prediction problem in which both the input and the prediction target are spatiotemporal sequences, and LSTM has been used to predict approaching precipitation effectively. Deep neural network (DNN) techniques have also been studied for predicting road surface temperature in support of road salting management, using hourly solar radiation and air temperature data together with road surface temperatures collected by the Road Weather Information System (RWIS) around Ontario. A DNN model combining a convolutional neural network (CNN) with long short-term memory (LSTM) has been applied to pavement surface temperature prediction and compared with four baseline machine learning methods: LSTM, Convolutional-LSTM, Sequence-to-Sequence (Seq2Seq) and the wavelet neural network (Wavelet). A prediction model based on long short-term memory (LSTM) has likewise been proposed for accurate prediction, and statistical analysis of various network models has been carried out to obtain a network structure suitable for accurate prediction of solar power and temperature.
In addition, recurrent neural networks (RNN) have been used to predict temperature with very good results, and the long short-term memory (LSTM) network overcomes the RNN's drawbacks of gradient vanishing, gradient explosion and insufficient long-term memory. A combined model coupling an LSTM network with the gradient boosting algorithm LightGBM has also been proposed to predict the ambient temperature of passenger stations. The LSTM-LightGBM combination retains the periodic behaviour that the LSTM model captures in univariate prediction, while the environmental feature variables fed into the LightGBM model capture non-stationary changes in the temperature. The results show that the LSTM-LightGBM combined model follows the original waveform more closely and has a lower RMSE than LSTM alone. Others have tested a mixed-flow closed cooling tower with a controlled-variable method, screened the factors affecting the outlet water temperature by grey relational analysis, and taken the five most strongly correlated factors as input parameters to build a grey BP neural network model for predicting the outlet water temperature of the tower. The operating parameters comprise inlet water temperature, wet-bulb temperature, make-up water temperature, circulating water flow and air volume, and the output is the outlet water temperature.
For temperature prediction problems, RNNs (recurrent neural networks) are typically used on time-series data; their main role is to link previously processed information to the current task. However, as the distance between the location of the earlier information and the current prediction grows, an RNN loses the ability to learn such long-range dependencies. The LSTM network does not have this drawback.
Disclosure of Invention
The invention provides a raw tobacco stack internal temperature prediction algorithm based on an improved LSTM (Long Short-Term Memory) network. To overcome the RNN's drawbacks of gradient vanishing, gradient explosion and insufficient long-term memory, and to improve the training speed and accuracy of the LSTM, the SOM (Self-Organizing Map) algorithm is combined with the LSTM. First, the data are preprocessed and the time-series data are normalized. The normalized data are then divided into a training set and a test set and input to an SOM neural network, which clusters the data. Next, an LSTM neural network structure is constructed and trained on the temperature training data, and the indoor temperature over a near-future period is predicted. Finally, the prediction is output, the algorithm effect is measured against actual data, and the experimental results are evaluated with indexes such as the root mean square error (RMSE). Experimental results prove that the algorithm accurately predicts the internal temperature of the raw tobacco stack.
To achieve this purpose, the invention adopts the following technical scheme. A raw tobacco stack internal temperature prediction algorithm based on an improved LSTM is used to monitor and predict the temperature change inside a tobacco stack, and is characterized in that the temperature prediction algorithm is implemented with the following steps:
step 1, preprocessing data and normalizing time sequence data;
step 2, dividing the normalized data into a training set and a test set, inputting the training set and the test set into an SOM neural network, and clustering the data;
step 3, constructing an LSTM neural network structure, inputting temperature training data for training, and predicting the indoor temperature over a near-future time period;
step 4, outputting the prediction and measuring the algorithm effect against actual data.
Preferably, in step 1, in order to improve the training speed and prediction accuracy of the LSTM, the temperature data of different dates are normalized, and the normalization formula is:
X_norm = (X − X_mean) / (X_max − X_min)
where X_mean is the mean of the data used, X_max is the maximum of the data used, and X_min is the minimum of the data used.
Preferably, in step 2, the differences among the data are extracted: the normalized data are divided into a training set and a test set and input to an SOM neural network, data with the same clustering result are grouped into one class, and the different classes of data sets are provided to the subsequent LSTM neural network to improve the prediction accuracy.
Preferably, in step 3, the solver for SOM-LSTM training is set to Adam, the gradient threshold is set to 1, the initial learning rate is specified as 0.002, and the learning rate is multiplied by a factor of 0.2 after 125 training epochs.
Preferably, in step 4, the temperature data of the first half are used for prediction to obtain the predicted temperature of the second half, which is then compared with the known real temperature data to demonstrate the prediction reliability of the algorithm.
Preferably, the prediction is measured by a Root Mean Square Error (RMSE) indicator.
Beneficial effects:
the invention provides a raw tobacco stack internal temperature prediction algorithm based on improved LSTM, aiming at solving the defects of gradient disappearance, gradient explosion and insufficient long-term memory capacity of RNN and improving the training speed and calculation precision of LSTM, the SOM (Self-Organizing Maps, SOM) algorithm is combined with LSTM, firstly data are preprocessed, time sequence data are normalized, then the normalized data are divided into a training set and a testing set, an SOM neural network is input, the data are clustered, then an LSTM neural network structure is constructed, temperature training data are input for training, meanwhile, the indoor temperature in a near time period in the future is predicted, then the output is predicted, the algorithm effect is measured through actual data, and the experimental result is measured through indexes such as Root Mean Square Error (RMSE). Experimental results prove that the algorithm can realize accurate prediction of the internal temperature of the raw tobacco stack.
Drawings
FIG. 1 is a diagram of a standard LSTM cell structure;
FIG. 2 shows the temperature data after normalization;
FIG. 3 is a general algorithm flow diagram;
FIG. 4 is a wireless sensor layout (section);
FIG. 5 shows the temperature values predicted by the different algorithms;
FIG. 6 shows a partial temperature prediction based on SOM-LSTM;
FIG. 7 compares the partial SOM-LSTM temperature prediction with the true values;
FIG. 8 shows the temperature predicted by SOM-LSTM.
Detailed Description
To facilitate understanding and implementation of the present invention by those skilled in the art, the technical solutions of the invention are further described below with reference to the accompanying drawings and specific embodiments.
The LSTM algorithm is essentially a special form of recurrent neural network (RNN) designed to deal with the vanishing-gradient and exploding-gradient problems that arise during RNN training; it maintains separate pathways for the memory data and the result data. The key to the LSTM is the cell state, which undergoes only a few linear interactions along the whole chain, so the information flowing on it is not easily changed. As shown in FIG. 1, the LSTM uses three gates to protect and control the cell state: a forget gate, an input gate and an output gate.
The first step of the LSTM is to decide what information to discard from the cell state. This decision is made by the forget gate, which reads h_{t-1} and x_t and outputs a value between 0 and 1 for every number in the cell state C_{t-1}, where 1 means "retain completely" and 0 means "discard completely".
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)    (1)
where h_{t-1} is the output of the previous cell and x_t is the input of the current cell. σ denotes the sigmoid function; every element of its output is a real number between 0 and 1 representing the weight with which the corresponding information is allowed to pass: 0 means "let nothing through" and 1 means "let everything through".
The second step in the LSTM is to decide how much new information to add to the cell state. This process includes two steps:
(1) the σ (sigmoid) layer, i.e. the input gate layer, determines which information is to be updated;
(2) the tanh layer generates a new candidate vector C̃_t, i.e. the candidate content for the update.
Combining these two parts updates the cell state:
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)    (2)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)    (3)
The old cell state C_{t-1} is then updated to the new cell state C_t:
C_t = f_t · C_{t-1} + i_t · C̃_t    (4)
The last step of the LSTM is to determine what value to output. This output is based on the cell state, but is a filtered version of it:
(1) a σ gate layer decides which parts of the cell state will be output;
(2) the cell state is passed through tanh (giving values between −1 and 1) and multiplied by the output of the σ gate, yielding the output value h_t = o_t · tanh(C_t):
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)    (5)
h_t = o_t · tanh(C_t)    (6)
where W = [W_f, W_i, W_C, W_o] denotes the weight matrices to be trained, b = [b_f, b_i, b_C, b_o] denotes the bias terms, and C̃_t, C_{t-1} and C_t denote the cell state at different times.
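As an illustration of equations (1)-(6), the following Python/NumPy sketch implements one forward step of a standard LSTM cell. It is a minimal reference implementation for clarity, not the trained network used in the experiments; the dictionary-of-matrices weight layout and the sigmoid helper are assumptions introduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following equations (1)-(6).

    W = {'f','i','C','o'}: weight matrices of shape (hidden, hidden + input)
    b = {'f','i','C','o'}: bias vectors of shape (hidden,)
    """
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W['f'] @ z + b['f'])       # forget gate, eq. (1)
    i_t = sigmoid(W['i'] @ z + b['i'])       # input gate, eq. (2)
    c_tilde = np.tanh(W['C'] @ z + b['C'])   # candidate state, eq. (3)
    c_t = f_t * c_prev + i_t * c_tilde       # new cell state, eq. (4)
    o_t = sigmoid(W['o'] @ z + b['o'])       # output gate, eq. (5)
    h_t = o_t * np.tanh(c_t)                 # hidden output, eq. (6)
    return h_t, c_t
```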
The self-organizing map (SOM) algorithm is an unsupervised learning algorithm for clustering and high-dimensional visualization; it is an artificial neural network developed by mimicking the way the human brain processes signals. An SOM network consists of an input layer and a competition layer (output layer). The input layer has n neurons and the competition layer has m neurons, usually arranged as a one-dimensional or two-dimensional planar array; the network is fully connected, i.e. every input node is connected to every output node.
Input layer: receives external information and transmits the input pattern to the competition layer, playing the role of "observation";
Competition layer: responsible for analyzing and comparing the input patterns, finding their regularities and classifying them.
The competitive learning rule derives from the lateral inhibition phenomenon of neuron cells. The learning steps are as follows:
(1) vector normalization
The current input pattern vector X of the self-organizing network and the weight vector W_j (j = 1, 2, …, m) of each competition-layer neuron are all normalized, giving X̂ and Ŵ_j:
X̂ = X / ‖X‖    (7)
Ŵ_j = W_j / ‖W_j‖    (8)
(2) finding winning neurons
The normalized input X̂ is compared for similarity with the normalized weight vectors Ŵ_j (j = 1, 2, …, m) of all competition-layer neurons; the most similar neuron wins, and its weight vector is denoted Ŵ_{j*}.
(3) Network output and weight adjustment
According to the winner-take-all (WTA) learning rule, the output of the winning neuron is 1 and the outputs of all other neurons are 0, i.e.
y_j(t+1) = 1 if j = j*, and y_j(t+1) = 0 otherwise    (9)
Only the winning neuron has the right to adjust its weight vector Ŵ_{j*}, which is updated as
W_{j*}(t+1) = Ŵ_{j*}(t) + α(X̂ − Ŵ_{j*}(t))    (10)
where 0 ≤ α ≤ 1 is the learning rate; α generally decreases as learning proceeds, i.e. the adjustment becomes smaller and smaller and the weight vector tends towards the cluster centre.
(4) Renormalization process
After the normalized weight vector has been adjusted, the resulting new vector is in general no longer a unit vector, so the adjusted vector must be normalized again. This cycle is repeated until the learning rate α decays to 0.
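The competitive-learning loop of steps (1)-(4) can be summarised in a short Python/NumPy sketch. This is a simplified winner-take-all version (no neighbourhood function) under assumed data shapes, learning rate and iteration count; it is not the exact SOM configuration used in the patent.

```python
import numpy as np

def som_cluster(X, m, epochs=100, alpha0=0.5, seed=0):
    """Winner-take-all SOM clustering sketch.

    X: (n_samples, n_features) input patterns (assumed non-zero rows)
    m: number of competition-layer neurons (clusters)
    """
    rng = np.random.default_rng(seed)
    # normalize inputs and weights to unit length, eqs. (7)-(8)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    W = rng.normal(size=(m, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    for epoch in range(epochs):
        alpha = alpha0 * (1.0 - epoch / epochs)      # learning rate decays toward 0
        for x in Xn:
            j_star = np.argmax(W @ x)                # most similar (winning) neuron
            W[j_star] += alpha * (x - W[j_star])     # adjust winner's weights, eq. (10)
            W[j_star] /= np.linalg.norm(W[j_star])   # renormalize to unit length, step (4)
    labels = np.argmax(Xn @ W.T, axis=1)             # assign each sample to its winner
    return labels, W
```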
The temperature prediction algorithm provided by the invention comprises the following specific steps:
(1) data preprocessing: in order to improve the training speed and the prediction precision of the LSTM, the temperature data of different dates are normalized, and the normalization formula is as follows:
X_norm = (X − X_mean) / (X_max − X_min)    (11)
wherein: xmeanIs the average of the data used, XmaxIs the maximum value of the data used, XminIs the minimum value of the data used. The normalized temperature data is shown in fig. 2.
(2) SOM clustering: the differences among the data are extracted; the normalized data are divided into a training set and a test set and input to an SOM neural network; data with the same clustering result are grouped into one class, and the different classes of data sets are provided to the subsequent LSTM neural network to improve the prediction accuracy.
(3) An LSTM neural network structure is then constructed, temperature training data are input for training, and the indoor temperature over a near-future period is predicted.
(4) The prediction is output and the effect of the algorithm is measured against actual data.
The overall algorithm flow is shown in fig. 3.
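For illustration, formula (11) of step (1) can be written as a small Python/NumPy helper; the sample values below are invented for the example and are not measurement data from the patent.

```python
import numpy as np

def normalize(x):
    """Scale raw temperature readings per formula (11): (X - mean) / (max - min)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.max() - x.min())

# example: a few half-hourly temperature readings from one sensor (illustrative values)
temps = np.array([21.3, 21.5, 21.9, 22.4, 22.1])
print(normalize(temps))
```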
The experimental data of the invention are the temperature monitoring records, taken every 30 minutes, of a tobacco stack stored in the open air at the material storage department of the Hongyun Honghe logistics centre during the week from 25 April to 1 May 2021. The maximum number of sensors that can be deployed is n_max = 80; the sensors are arranged in the storage room and their placement is divided into 8 sections. Owing to space limitations, the invention only shows a schematic diagram of the wireless sensor placement for the first layer of the first section, as shown in FIG. 4. In the experiment the sensors monitor the temperature at 30 min intervals.
To ensure the scientific rigour of the experiment, evaluation indices commonly used in the prediction field are adopted: the mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE) and coefficient of determination (R²).
RMSE = sqrt( (1/n) · Σ_{i=1..n} (y_p(i) − y_t(i))² )
MAE = (1/n) · Σ_{i=1..n} |y_p(i) − y_t(i)|
R² = 1 − Σ_{i=1..n} (y_t(i) − y_p(i))² / Σ_{i=1..n} (y_t(i) − y_n)²
where y_p(i) is the predicted value of the time series, y_t(i) is the time-series sample to be predicted, y_n is the mean of the time-series samples to be predicted, and n is the number of samples. In general, the larger the R² value and the smaller the RMSE value, the more accurate the prediction.
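For reference, RMSE, MAE and R² can be computed in a few lines of Python/NumPy using their standard definitions; this is a generic sketch, not code from the patent.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return MSE, RMSE, MAE and R^2 for a predicted time series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_pred - y_true) ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(y_pred - y_true))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}
```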
To verify the feasibility of the SOM-LSTM method, its prediction performance is compared with that of the LSTM and BP neural network algorithms. The algorithm models are built and trained in a MATLAB programming environment, part of the data is selected for temperature prediction, and 135 real data points are taken as sampling points for the comparison of the different algorithms. The solver for SOM-LSTM training is set to Adam, the gradient threshold is set to 1, and an initial learning rate of 0.002 is specified, which is multiplied by a factor of 0.2 after 125 training epochs. FIG. 5 shows the temperature prediction results of the three methods.
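The experiments were run in MATLAB; purely as an illustration, the stated training settings (Adam solver, gradient threshold 1, initial learning rate 0.002, learning rate multiplied by 0.2 after 125 epochs) map roughly onto the following PyTorch sketch. The network size, single-step prediction head and data loader are assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

class TempLSTM(nn.Module):
    """Small LSTM regressor; the hidden size is an assumed value, not from the patent."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next temperature value

model = TempLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)   # Adam solver, lr = 0.002
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[125], gamma=0.2)
criterion = nn.MSELoss()

def train_epoch(loader):
    # loader is an assumed DataLoader yielding (window, target) pairs
    for x_batch, y_batch in loader:
        optimizer.zero_grad()
        loss = criterion(model(x_batch), y_batch)
        loss.backward()
        # gradient threshold of 1 (global-norm clipping, roughly matching the setting)
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
    scheduler.step()   # multiplies the learning rate by 0.2 once epoch 125 is reached
```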
As can be seen from FIG. 5, the temperature prediction error of the BP neural network is large at inflection points and its prediction results are unstable. Both LSTM and SOM-LSTM give stable predictions, but the predicted values of the SOM-LSTM algorithm are closer to the true values than those of LSTM, so the SOM-LSTM algorithm is the best of the three for predicting the temperature over a future period.
The future temperature is predicted with the SOM-LSTM algorithm. To check the accuracy of the algorithm, the first half of the temperature data is used for prediction (FIG. 6) and the result is compared with the subsequently known real temperature data (FIG. 7), demonstrating the prediction reliability of the algorithm. The data selected for this experiment are the real temperature values of the first 177 sampling points, and the temperatures measured at the 178th to 197th sampling points are predicted.
In FIG. 7, the yellow line is the actual temperature value, the yellow dotted line is the predicted temperature value, and the blue line is the temperature test value. Each sampling point on the abscissa corresponds to a temperature measurement taken every 30 min. The RMSE of the temperatures predicted for the 178th to 197th measurements is calculated to be 0.3474, and the predicted data essentially match the real data, showing that the prediction effect of the algorithm is good.
Since the SOM-LSTM algorithm is the best at predicting the temperature over a future period, the experiment is continued: a sensor at a certain position is selected at random and its temperature data over a future period are predicted. To eliminate the chance element of the experiment as far as possible, the LSTM network is trained and the temperature predicted multiple times, and the average of the results is taken as the final prediction result, as shown in FIG. 8.
It can be seen from FIG. 8 that the SOM-LSTM network model tracks the temperature effectively and the trend of the predicted curve is essentially consistent with the measurements, so the method can predict the temperature data over a certain future period. FIG. 9 is the error plot; the two blue horizontal lines in the middle are at 0.0250 and −0.0250, and the error is mainly concentrated within this interval.
Different numbers of iterations give different error results; Table 1 compares the algorithm proposed by the invention with the traditional LSTM.
TABLE 1 Prediction comparison of different algorithms
The error of the BP algorithm is the largest and that of SOM-LSTM the smallest, and the more iterations over the training data, the smaller the prediction error. Since the running time of the algorithm does not increase significantly as the number of iterations grows, the number of iterations should be set between 200 and 300. A smaller RMSE means the predicted curve is closer to the actual values, i.e. the prediction of future temperature data is more accurate.
During the natural alcoholization of tobacco lamina, monitoring the temperature and moisture inside the tobacco packages is an important part of their scientific maintenance. Because the temperature and humidity trend inside a tobacco stack can lead to mildew of the cured tobacco leaves, the invention proposes an improved LSTM prediction method that combines the SOM algorithm with the LSTM. The data are preprocessed and the time-series data normalized; the normalized data are divided into a training set and a test set and input to an SOM neural network for clustering; an LSTM neural network structure is constructed and trained on the temperature training data; the temperature prediction results of three different algorithms are compared, demonstrating the superiority of the SOM-LSTM algorithm; the indoor temperature over a near-future period is predicted; and the prediction is output, the algorithm effect is measured against actual data, and the experimental results are evaluated with indexes such as the root mean square error (RMSE). In the experiment the algorithm is verified with one week of temperature monitoring data from the open-air stack at the material storage department of the Hongyun Honghe logistics centre, and the calculated root mean square error is about 0.06. The experimental results prove that effective temperature prediction can be achieved with the SOM-LSTM model algorithm.
The foregoing shows and describes the general principles, essential features and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above; the above embodiments and description merely illustrate preferred embodiments of the invention and do not limit it. The scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. An improved LSTM based raw tobacco stack internal temperature prediction algorithm for monitoring and predicting temperature changes within a tobacco stack, the temperature prediction algorithm being implemented using the steps of:
step 1, preprocessing data and normalizing time sequence data;
step 2, dividing the normalized data into a training set and a test set, inputting the training set and the test set into an SOM neural network, and clustering the data;
step 3, constructing an LSTM neural network structure, inputting temperature training data for training, and predicting the indoor temperature over a near-future time period;
step 4, outputting the prediction and measuring the algorithm effect against actual data.
2. The improved LSTM-based raw tobacco stack internal temperature prediction algorithm of claim 1, wherein in step 1, in order to improve the training speed and prediction accuracy of the LSTM, the temperature data of different dates are normalized by the formula:
X_norm = (X − X_mean) / (X_max − X_min)
where X_mean is the mean of the data used, X_max is the maximum of the data used, and X_min is the minimum of the data used.
3. The improved LSTM-based raw tobacco stack internal temperature prediction algorithm of claim 1, wherein in step 2, the differences among the data are extracted, the normalized data are divided into a training set and a testing set, the training set and the testing set are input into an SOM neural network, the data with the same clustering result are classified into one type, and different types of data sets are provided for the next LSTM neural network to improve the prediction accuracy.
4. The improved LSTM based raw tobacco stack internal temperature prediction algorithm of claim 1, wherein in step 3, the SOM-LSTM trained solver is set to Adam, the gradient threshold is set to 1, an initial learning rate of 0.002 is specified, and the learning rate is reduced by multiplying by a factor of 0.2 after 125 rounds of training.
5. The improved LSTM-based raw tobacco stack internal temperature prediction algorithm of claim 1, wherein in step 4, the temperature data of the first half are used for prediction to obtain the predicted temperature of the second half, which is then compared with the known real temperature data to demonstrate the prediction reliability of the algorithm.
6. A modified LSTM based raw tobacco stack internal temperature prediction algorithm as claimed in claim 5 wherein the prediction is measured by Root Mean Square Error (RMSE) indicator.
CN202111534264.1A 2021-12-15 2021-12-15 Improved LSTM-based raw tobacco stack internal temperature prediction algorithm Pending CN114707684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111534264.1A CN114707684A (en) 2021-12-15 2021-12-15 Improved LSTM-based raw tobacco stack internal temperature prediction algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111534264.1A CN114707684A (en) 2021-12-15 2021-12-15 Improved LSTM-based raw tobacco stack internal temperature prediction algorithm

Publications (1)

Publication Number Publication Date
CN114707684A true CN114707684A (en) 2022-07-05

Family

ID=82166379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111534264.1A Pending CN114707684A (en) 2021-12-15 2021-12-15 Improved LSTM-based raw tobacco stack internal temperature prediction algorithm

Country Status (1)

Country Link
CN (1) CN114707684A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115344050A (en) * 2022-09-15 2022-11-15 安徽工程大学 Stacker path planning method based on improved clustering algorithm
CN115344050B (en) * 2022-09-15 2024-04-26 安徽工程大学 Improved clustering algorithm-based stacker path planning method

Similar Documents

Publication Publication Date Title
CN111860979B (en) Short-term load prediction method based on TCN and IPSO-LSSVM combined model
CN108022001B (en) Short-term load probability density prediction method based on PCA (principal component analysis) and quantile regression forest
CN110765700A (en) Ultrahigh voltage transmission line loss prediction method based on quantum ant colony optimization RBF network
CN113705877B (en) Real-time moon runoff forecasting method based on deep learning model
CN109948845A (en) A kind of distribution network load shot and long term Memory Neural Networks prediction technique
CN112288164B (en) Wind power combined prediction method considering spatial correlation and correcting numerical weather forecast
CN111639823B (en) Building cold and heat load prediction method constructed based on feature set
CN113554466B (en) Short-term electricity consumption prediction model construction method, prediction method and device
CN106778838A (en) A kind of method for predicting air quality
CN111525587B (en) Reactive load situation-based power grid reactive voltage control method and system
CN112116162A (en) Power transmission line icing thickness prediction method based on CEEMDAN-QFAOA-LSTM
CN112329990A (en) User power load prediction method based on LSTM-BP neural network
CN113762387B (en) Multi-element load prediction method for data center station based on hybrid model prediction
CN107798426A (en) Wind power interval Forecasting Methodology based on Atomic Decomposition and interactive fuzzy satisfying method
CN113344288B (en) Cascade hydropower station group water level prediction method and device and computer readable storage medium
CN111292124A (en) Water demand prediction method based on optimized combined neural network
CN115995810A (en) Wind power prediction method and system considering weather fluctuation self-adaptive matching
CN114117852B (en) Regional heat load rolling prediction method based on finite difference working domain division
CN116629428A (en) Building energy consumption prediction method based on feature selection and SSA-BiLSTM
Shang et al. Research on intelligent pest prediction of based on improved artificial neural network
CN115096357A (en) Indoor environment quality prediction method based on CEEMDAN-PCA-LSTM
CN114707684A (en) Improved LSTM-based raw tobacco stack internal temperature prediction algorithm
CN114357670A (en) Power distribution network power consumption data abnormity early warning method based on BLS and self-encoder
CN114429238A (en) Wind turbine generator fault early warning method based on space-time feature extraction
CN115481788B (en) Phase change energy storage system load prediction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination