CN112765894B - K-LSTM-based aluminum electrolysis cell state prediction method - Google Patents
- Publication number
- CN112765894B (application number CN202110111679.1A)
- Authority
- CN
- China
- Prior art keywords
- lstm
- information
- model
- gradient
- function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- C—CHEMISTRY; METALLURGY
- C25—ELECTROLYTIC OR ELECTROPHORETIC PROCESSES; APPARATUS THEREFOR
- C25C—PROCESSES FOR THE ELECTROLYTIC PRODUCTION, RECOVERY OR REFINING OF METALS; APPARATUS THEREFOR
- C25C3/00—Electrolytic production, recovery or refining of metals by electrolysis of melts
- C25C3/06—Electrolytic production, recovery or refining of metals by electrolysis of melts of aluminium
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a K-LSTM-based aluminum electrolysis cell state prediction method comprising the following steps. Step 1: normalize the data. Step 2: construct a training set and a test set according to the set sliding-window size m. Step 3: construct the improved LSTM model and initialize its parameters. Step 4: train the prediction model on the training set, updating the parameters by gradient descent and iterating until the accuracy requirement is met. Step 5: feed the test set into the trained model and predict the value at time t+1 from the historical data. The invention is based on an improved K-LSTM algorithm; aiming at the sample-imbalance problem in the LSTM forget-gate unit, it eliminates the imbalance by setting weights and can effectively predict the state of the aluminum electrolysis cell.
Description
Technical Field
The invention relates to the technical field of aluminum electrolysis industry, in particular to a K-LSTM-based aluminum electrolysis cell state prediction method.
Background
The production data of an aluminum electrolysis cell form a time series with high dimensionality. Various models exist for predicting time-series data, such as artificial neural networks, autoregressive moving averages and wavelet neural networks. The study of time-series prediction began with a regression equation used to predict the annual number of sunspots. The autoregressive moving average model (ARMA) and the autoregressive integrated moving average model (ARIMA) show that regression-based time-series prediction models have become increasingly popular.
These models are therefore among the simplest and most important models in time-series prediction. However, due to the complexity, irregularity, randomness and nonlinearity of real data, it is difficult to achieve high-precision prediction with them. Machine learning methods can instead establish a nonlinear prediction model from a large amount of historical data. In fact, through iterative training and successive approximation, a machine learning model can obtain more accurate predictions than traditional statistics-based models. Typical methods are support vector regression and kernel-based classification, artificial neural networks (ANN) with strong nonlinear function approximation, and tree-based ensemble learning methods such as gradient-boosted regression trees and gradient-boosted decision trees (GBRT, GBDT). However, these methods lack effective handling of the sequential dependencies between input variables, and their effectiveness in time-series prediction tasks is therefore limited.
With the continued development of deep learning, deep learning algorithms have been found well suited to predicting time-series data: they first analyze the input data step by step, then extract effective features and the implicit relations in the data sequence. So that the RNN can process time-series data more effectively, the concept of time is built into the RNN's neural-network architecture. An improved RNN algorithm, the long short-term memory (LSTM) neural network, alleviates the gradient explosion, gradient vanishing and long-sequence memory problems of the RNN structure and can effectively process long time-series information. The LSTM model is applied in many fields, such as speech recognition, stock-price prediction, rainfall prediction, traffic-flow prediction, and image and text recognition, with good results.
Because aluminum electrolysis is an industrial process system with large time lags, the cell state changes infrequently, so a sample-imbalance problem is encountered when training a neural network on the existing data. The first gate of the LSTM is the forget gate, which determines whether information is dropped from the memory cell; the forget gate's decision is handled by a sigmoid function. However, if the state of the input data changes infrequently, the forget gate stays in the 1 state for a long time, i.e. the previous state is reused without updating, which causes a sample-imbalance problem in the forget gate. That is, for the f_t model there is a sample-imbalance problem.
Sample-imbalance problems arise mainly in supervised machine learning tasks. When faced with imbalanced data in a classification task, models tend to focus on the classes with many samples, so predictions for classes with few samples are poor, and most common machine learning methods cannot work effectively on imbalanced data sets. There are generally two ways to address sample imbalance: undersampling and oversampling.
(1) Undersampling: undersampling reduces the total number of majority-class samples, and is usually chosen when the amount of data is sufficient to support it. All samples of the minority class are retained, the number of samples in the majority class is reduced to balance the classes, and modeling then proceeds.
(2) Oversampling: when the amount of data is insufficient to support method (1), oversampling is chosen. Rather than removing majority-class samples, it balances the data set by adding minority-class samples, using repetition, bootstrapping or a synthetic minority oversampling method.
Neither oversampling nor undersampling has an absolute advantage; which to apply depends on the data set at hand.
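The two sampling strategies above can be sketched in a few lines of NumPy. This is a minimal illustration of random undersampling and random oversampling on a binary-labeled array; the function names and the toy data are assumptions for illustration, not part of the invention.

```python
import numpy as np

def undersample(X, y, rng=None):
    # Randomly drop majority-class rows until every class has as many
    # samples as the smallest class.
    rng = rng or np.random.default_rng(0)
    minority = min(np.bincount(y))
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=minority, replace=False)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

def oversample(X, y, rng=None):
    # Randomly repeat minority-class rows until every class has as many
    # samples as the largest class.
    rng = rng or np.random.default_rng(0)
    majority = max(np.bincount(y))
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=majority, replace=True)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

# Toy imbalanced data: 8 samples of class 0, 2 of class 1.
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
Xu, yu = undersample(X, y)   # 4 samples, 2 per class
Xo, yo = oversample(X, y)    # 16 samples, 8 per class
```

Synthetic methods such as SMOTE interpolate new minority samples instead of repeating existing ones, but the balancing goal is the same.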
Disclosure of Invention
The invention aims to provide a K-LSTM-based aluminum electrolysis cell state prediction method that addresses the problems described in the background art and achieves the purposes of reducing cost and enhancing efficiency.
To achieve the above purpose, and in combination with the background art above, the invention solves the sample-imbalance problem in the forget gate by providing each judgment condition with a weight, so that the weight of the f_t model that stays in the same state for a long time is reduced and the samples are balanced.
The invention thus provides a K-LSTM-based aluminum electrolysis cell state prediction method comprising the following steps:
step 1: normalize the data;
step 2: construct a training set and a test set according to the set sliding-window size m;
step 3: construct the improved LSTM model and initialize its parameters;
step 4: train the prediction model on the training set, updating the parameters by gradient descent and iterating until the accuracy requirement is met;
step 5: feed the test set into the trained model and predict the value at time t+1 from the historical data.
The improved LSTM model is constructed using the K-LSTM algorithm, an improvement of the LSTM algorithm.
The K-LSTM algorithm is implemented as follows:
the three gate structures of LSTM include an input gate, an output gate and a forget gate; the calculation process is as follows:
(1) Forget gate: determines which previous memory information to discard. The output value h_{t-1} at time t-1 and the input value x_t at the current time t are linearly combined, and the result is compressed into the range [0, 1] by a sigmoid function. The closer the value is to 1, the more information f_t retains in the current cell state; the closer it is to 0, the more information f_t discards. The forget gate is computed as:
f_t = sigmoid(W_f · [h_{t-1}, x_t] + b_f)    (1)
(2) Input gate: processes the input value x_t at the current time t and the output h_{t-1} of the previous time as the total input information, again through a sigmoid function; x_t and h_{t-1} are then passed through a tanh layer to obtain the new candidate cell information C̃_t. The computation is:
i_t = sigmoid(W_i · [h_{t-1}, x_t] + b_i)    (2)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)    (3)
The old cell information C_{t-1} is then updated to the new cell information C_t: the old information selectively retained by the forget gate is added to the new information admitted by the input gate, and together they determine the updated cell information C_t:
C_t = f_t * C_{t-1} + i_t * C̃_t    (4)
(3) Output gate: determines the information h_t delivered to the next moment. The output information is first gated by a sigmoid function, and the cell state is then processed by a tanh function to obtain a vector of values in the range [-1, 1]; this vector is multiplied by the gating value obtained above to produce the output of the current moment. The computation is:
o_t = sigmoid(W_o · [h_{t-1}, x_t] + b_o)    (5)
h_t = o_t * tanh(C_t)    (6)
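One forward step of the standard LSTM recurrence described above can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: the weight shapes, random initialization and the choice of 13 input features (matching the embodiment's daily features) and 4 hidden units are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    # W and b hold the four gate parameter sets, keyed 'f', 'i', 'C', 'o';
    # each W[k] acts on the concatenation [h_{t-1}, x_t].
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W['f'] @ z + b['f'])        # forget gate, eq. (1)
    i_t = sigmoid(W['i'] @ z + b['i'])        # input gate, eq. (2)
    C_tilde = np.tanh(W['C'] @ z + b['C'])    # candidate state, eq. (3)
    C_t = f_t * C_prev + i_t * C_tilde        # cell update, eq. (4)
    o_t = sigmoid(W['o'] @ z + b['o'])        # output gate, eq. (5)
    h_t = o_t * np.tanh(C_t)                  # hidden output, eq. (6)
    return h_t, C_t

rng = np.random.default_rng(0)
n_in, n_hidden = 13, 4    # 13 daily features per the embodiment; 4 units assumed
W = {k: rng.standard_normal((n_hidden, n_hidden + n_in)) * 0.1 for k in 'fiCo'}
b = {k: np.zeros(n_hidden) for k in 'fiCo'}

h, C = np.zeros(n_hidden), np.zeros(n_hidden)
h, C = lstm_step(rng.standard_normal(n_in), h, C, W, b)
```

Because o_t lies in (0, 1) and tanh(C_t) in (-1, 1), every component of h_t stays strictly inside (-1, 1), as eq. (6) implies.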
When the network is trained, the loss function is differentiated with respect to each parameter, the parameters are updated, and the process iterates until the loss function converges. To avoid the sample-imbalance problem, the partial derivative of the loss function with respect to f_t is intervened on, on the basis of the original LSTM gating unit: according to whether the electrolysis cell state changes, a larger weight k is selected when the state changes, and a smaller weight when it does not:
f = f*k + tf.stop_gradient(f - f*k)    (7)
By the property of the tf.stop_gradient() function, it has no effect in the forward pass, so the +(f*k) and -(f*k) terms cancel and only f is passed forward. In the backward pass, tf.stop_gradient() forces the gradient of (f - f*k) to 0, so only the f*k term participates in backpropagation.
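The effect of identity (7) — forward value unchanged, gradient scaled by k — can be checked without TensorFlow by treating the stop_gradient term as a constant under differentiation. This is a hand-worked sketch of the trick's semantics, not the patent's TensorFlow code; the function name and the sample value of f are assumptions.

```python
def scaled_gradient(f, k):
    # Forward: f*k + (f - f*k) == f, so the value passed on is unchanged.
    # Backward: the stopped term is a constant under differentiation,
    # so d(out)/df == k (the stopped term contributes 0).
    stopped = f - f * k          # stands in for tf.stop_gradient(f - f*k)
    out = f * k + stopped
    grad = k                     # analytic d(f*k)/df; stopped term gives 0
    return out, grad

f = 0.93                                          # a forget-gate value stuck near 1
out_small, g_small = scaled_gradient(f, k=0.1)    # state unchanged: damp the update
out_big, g_big = scaled_gradient(f, k=1.0)        # state changed: full gradient
```

In both cases the forward value equals f; only the gradient magnitude differs, which is exactly how the weighting rebalances the forget gate's training signal.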
The improved K-LSTM algorithm comprises the following steps:
step (1): in the forward propagation of the K-LSTM, calculate the input and output values of each hidden-layer neuron;
step (2): calculate the output error with the cross-entropy function and propagate the error back to each layer of neurons with the backpropagation algorithm;
step (3): in the backward propagation through the forget gate, select a weight according to the judgment condition to modify the original propagation function;
step (4): update the parameters of each layer of neurons according to the gradient descent algorithm and the propagated error;
step (5): repeat steps (2), (3) and (4) for the set number of iterations until convergence, completing model training.
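The training loop of steps (1)–(5) — forward pass, cross-entropy loss, backpropagated gradient, gradient-descent update, repeated to convergence — can be sketched with a single sigmoid unit standing in for the network. This is a deliberately simplified stand-in (the real model uses the K-LSTM cell); the toy data, learning rate and epoch count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)   # toy binary cell state

w = np.zeros(3)
lr = 0.5
for epoch in range(200):                  # step (5): repeat for set iterations
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # step (1): forward pass
    loss = -np.mean(y * np.log(p + 1e-9)
                    + (1 - y) * np.log(1 - p + 1e-9))    # step (2): cross entropy
    grad = X.T @ (p - y) / len(y)        # steps (2)-(3): backpropagated error
    w -= lr * grad                       # step (4): gradient descent update

acc = np.mean((p > 0.5) == y)
```

In the K-LSTM, step (3) additionally rescales the forget-gate portion of `grad` by the weight k from eq. (7) before the update in step (4).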
Compared with the prior art, the beneficial effects of the method are as follows: the invention is based on an improved K-LSTM algorithm; aiming at the sample-imbalance problem in the LSTM forget-gate unit, it eliminates the imbalance by setting weights and can effectively predict the state of the aluminum electrolysis cell.
Drawings
FIG. 1 is a flow chart of the modified K-LSTM algorithm.
FIG. 2 is a flow chart of K-LSTM-based cell state prediction.
FIG. 3 is a schematic diagram of production data storage.
FIG. 4 is a graph of K-LSTM prediction results.
FIG. 5 is an LSTM prediction result graph.
FIG. 6 is a graph of EA-LSTM predictions.
Detailed Description
The technical scheme of the patent is further described in detail below with reference to the specific embodiments.
The K-LSTM algorithm is implemented as follows:
the three gate structures of LSTM include an input gate, an output gate and a forget gate; the calculation process is as follows:
(1) Forget gate: determines which previous memory information to discard. The output value h_{t-1} at time t-1 and the input value x_t at the current time t are linearly combined, and the result is compressed into the range [0, 1] by a sigmoid function. The closer the value is to 1, the more information f_t retains in the current cell state; the closer it is to 0, the more information f_t discards. The forget gate is computed as:
f_t = sigmoid(W_f · [h_{t-1}, x_t] + b_f)    (1)
(2) Input gate: processes the input value x_t at the current time t and the output h_{t-1} of the previous time as the total input information, again through a sigmoid function; x_t and h_{t-1} are then passed through a tanh layer to obtain the new candidate cell information C̃_t. The computation is:
i_t = sigmoid(W_i · [h_{t-1}, x_t] + b_i)    (2)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)    (3)
The old cell information C_{t-1} is then updated to the new cell information C_t: the old information selectively retained by the forget gate is added to the new information admitted by the input gate, and together they determine the updated cell information C_t:
C_t = f_t * C_{t-1} + i_t * C̃_t    (4)
(3) Output gate: determines the information h_t delivered to the next moment. The output information is first gated by a sigmoid function, and the cell state is then processed by a tanh function to obtain a vector of values in the range [-1, 1]; this vector is multiplied by the gating value obtained above to produce the output of the current moment. The computation is:
o_t = sigmoid(W_o · [h_{t-1}, x_t] + b_o)    (5)
h_t = o_t * tanh(C_t)    (6)
When the network is trained, the loss function is differentiated with respect to each parameter, the parameters are updated, and the process iterates until the loss function converges. To avoid the sample-imbalance problem, the partial derivative of the loss function with respect to f_t is intervened on, on the basis of the original LSTM gating unit: according to whether the electrolysis cell state changes, a larger weight k is selected when the state changes, and a smaller weight when it does not:
f = f*k + tf.stop_gradient(f - f*k)    (7)
By the property of the tf.stop_gradient() function, it has no effect in the forward pass, so the +(f*k) and -(f*k) terms cancel and only f is passed forward. In the backward pass, tf.stop_gradient() forces the gradient of (f - f*k) to 0, so only the f*k term participates in backpropagation.
The flow chart of the improved algorithm is shown in FIG. 1; the improved K-LSTM algorithm comprises the following steps:
step (1): in the forward propagation of the K-LSTM, calculate the input and output values of each hidden-layer neuron;
step (2): calculate the output error with the cross-entropy function and propagate the error back to each layer of neurons with the backpropagation algorithm;
step (3): in the backward propagation through the forget gate, select a weight according to the judgment condition to modify the original propagation function;
step (4): update the parameters of each layer of neurons according to the gradient descent algorithm and the propagated error;
step (5): repeat steps (2), (3) and (4) for the set number of iterations until convergence, completing model training.
The modified K-LSTM algorithm is shown in Table 1:
table 1 improved K-LSTM algorithm
In this embodiment, the Keras framework of TensorFlow is used to construct the LSTM cell-state prediction model; all programs are written in Python, and the prediction experiments are run on a computer with a 2.50 GHz CPU, 8 GB of memory and the Windows 7 operating system, using data with a clustering attribute.
Using the improved K-LSTM algorithm, a sliding window of size m is set and the training and test sets are constructed according to it; the model is trained with cross entropy as the loss function, reflecting the deviation between predicted and real data, and the trained model is finally used to predict the state of the aluminum electrolysis cell.
To further verify the effectiveness of the algorithm, the accuracy of the improved algorithm is compared with that of the conventional LSTM and the attention-based LSTM algorithms.
As shown in FIG. 2, the procedure for predicting the state of an aluminum electrolysis cell using the improved K-LSTM is as follows:
step 1: normalize the data;
step 2: construct a training set and a test set according to the set sliding-window size m;
step 3: construct the improved LSTM model and initialize its parameters;
step 4: train the prediction model on the training set, updating the parameters by gradient descent and iterating until the accuracy requirement is met;
step 5: feed the test set into the trained model and predict the value at time t+1 from the historical data.
In this example, the above data are real aluminum electrolysis cell production data (as shown in FIG. 3), collected once a day; each electrolysis cell contributes 13 features per day, such as Fe content, aluminum level, molecular ratio, Si content, alumina concentration, electrolyte level and electrolysis temperature. Before the hidden information in the data is analyzed and mined, the raw data must be preprocessed: because the data used in this embodiment contain null values, null-value and noise processing is performed first, and the data are then normalized in view of their high dimensionality.
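The null-value handling and normalization described above can be sketched as follows. This is a minimal illustration under assumptions the patent does not specify (mean imputation for nulls and min-max scaling to [0, 1]; the two-column toy data are invented for the example).

```python
import numpy as np

def preprocess(X):
    # Fill nulls (NaN) with the per-feature mean, then min-max normalize
    # each feature column to [0, 1].
    col_mean = np.nanmean(X, axis=0)
    X = np.where(np.isnan(X), col_mean, X)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

# Toy rows: e.g. electrolysis temperature and aluminum level, with nulls.
raw = np.array([[960.0, 4.5],
                [np.nan, 4.7],
                [955.0, np.nan]])
clean = preprocess(raw)
```

After preprocessing every feature lies in [0, 1] with no missing values, which is the form the sliding-window construction of step 2 expects.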
The K-LSTM-based aluminum electrolysis cell state prediction results are shown in FIG. 4. For better viewing, only the first 200 data points are shown. In FIG. 4 the abscissa is the time series and the ordinate is the cell state, which takes two values, 0 and 1; the two dotted lines are the real cell state and the cell state predicted by the model. The cross-entropy loss is 0.0418 and the accuracy reaches 99.6%.
FIG. 5 shows the prediction results of the conventional LSTM model. In its prediction of the cell state, the overall fit between predicted and true values is good while the cell state does not change, but when the cell state changes suddenly the model cannot make an accurate prediction in time, succeeding only some time after the state has changed.
FIG. 6 shows that although the prediction accuracy of the EA-LSTM model is better than that of the conventional LSTM model, when the cell state changes it fails to predict the change accurately and continues the cell state of the previous moment.
To verify the effectiveness of the algorithm, the proposed K-LSTM algorithm is compared with the conventional LSTM algorithm and the attention-based EA-LSTM algorithm, using the same number of iterations, sliding-window size and number of neurons.
From Table 2 it can be seen that both the conventional LSTM model and the EA-LSTM model are less accurate than the improved K-LSTM model, which improves the accuracy of cell-state prediction to a certain extent.
Table 2 model comparative analysis
The comparison shows that the improved K-LSTM of the invention is a significant improvement on this problem and can predict changes in the cell state faster. This helps operators discover abnormal cell states more quickly and make timely decisions to prevent further deterioration of the cell state.
The experiments of this embodiment verify the feasibility and effectiveness of the algorithm; comparison experiments with the conventional LSTM model and the attention-based EA-LSTM model show that the prediction performance of the improved K-LSTM is clearly superior to the other two models, improving the accuracy of cell-state prediction.
While the preferred embodiments of the present patent have been described in detail, the present patent is not limited to the above embodiments, and various changes may be made without departing from the spirit of the present patent within the knowledge of one of ordinary skill in the art.
Claims (2)
1. A K-LSTM-based aluminum electrolysis cell state prediction method, characterized by comprising the following steps:
step 1: normalize the data;
step 2: construct a training set and a test set according to the set sliding-window size m;
step 3: construct the improved LSTM model and initialize its parameters;
step 4: train the prediction model on the training set, updating the parameters by gradient descent and iterating until the accuracy requirement is met;
step 5: feed the test set into the trained model and predict the value at time t+1 from the historical data;
the improved LSTM model is constructed based on an improved K-LSTM algorithm of the LSTM algorithm;
the K-LSTM algorithm is implemented as follows:
the three gate structures of LSTM include an input gate, an output gate and a forget gate; the calculation process is as follows:
(1) Forget gate: determines which previous memory information to discard. The output value h_{t-1} at time t-1 and the input value x_t at the current time t are linearly combined, and the result is compressed into the range [0, 1] by a sigmoid function. The closer the value is to 1, the more information f_t retains in the current cell state; the closer it is to 0, the more information f_t discards. The forget gate is computed as:
f_t = sigmoid(W_f · [h_{t-1}, x_t] + b_f)    (1)
(2) Input gate: processes the input value x_t at the current time t and the output h_{t-1} of the previous time as the total input information, again through a sigmoid function; x_t and h_{t-1} are then passed through a tanh layer to obtain the new candidate cell information C̃_t. The computation is:
i_t = sigmoid(W_i · [h_{t-1}, x_t] + b_i)    (2)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)    (3)
The old cell information C_{t-1} is then updated to the new cell information C_t: the old information selectively retained by the forget gate is added to the new information admitted by the input gate, and together they determine the updated cell information C_t:
C_t = f_t * C_{t-1} + i_t * C̃_t    (4)
(3) Output gate: determines the information h_t delivered to the next moment. The output information is first gated by a sigmoid function, and the cell state is then processed by a tanh function to obtain a vector of values in the range [-1, 1]; this vector is multiplied by the gating value obtained above to produce the output of the current moment. The computation is:
o_t = sigmoid(W_o · [h_{t-1}, x_t] + b_o)    (5)
h_t = o_t * tanh(C_t)    (6)
When the network is trained, the loss function is differentiated with respect to each parameter, the parameters are updated, and the process iterates until the loss function converges. To avoid the sample-imbalance problem, the partial derivative of the loss function with respect to f_t is intervened on, on the basis of the original LSTM gating unit: according to whether the electrolysis cell state changes, a larger weight k is selected when the state changes, and a smaller weight when it does not:
f = f*k + tf.stop_gradient(f - f*k)    (7)
By the property of the tf.stop_gradient() function, it has no effect in the forward pass, so the +(f*k) and -(f*k) terms cancel and only f is passed forward. In the backward pass, tf.stop_gradient() forces the gradient of (f - f*k) to 0, so only the f*k term participates in backpropagation.
2. The K-LSTM-based aluminum electrolysis cell state prediction method according to claim 1, wherein the improved K-LSTM algorithm comprises the following steps:
step (1): in the forward propagation of the K-LSTM, calculate the input and output values of each hidden-layer neuron;
step (2): calculate the output error with the cross-entropy function and propagate the error back to each layer of neurons with the backpropagation algorithm;
step (3): in the backward propagation through the forget gate, select a weight according to the judgment condition to modify the original propagation function;
step (4): update the parameters of each layer of neurons according to the gradient descent algorithm and the propagated error;
step (5): repeat steps (2), (3) and (4) for the set number of iterations until convergence, completing model training.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011334630 | 2020-11-25 | ||
CN2020113346304 | 2020-11-25 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112765894A CN112765894A (en) | 2021-05-07 |
CN112765894B true CN112765894B (en) | 2023-05-05 |
Family
ID=75706135
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110111679.1A Active CN112765894B (en) | 2020-11-25 | 2021-01-27 | K-LSTM-based aluminum electrolysis cell state prediction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112765894B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114959797A (en) * | 2022-07-04 | 2022-08-30 | 广东技术师范大学 | Aluminum electrolysis cell condition diagnosis method based on data amplification and SSKELM |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2230882A1 (en) * | 1997-03-14 | 1998-09-14 | Dubai Aluminium Company Limited | Intelligent control of aluminium reduction cells using predictive and pattern recognition techniques |
CN1471627A (en) * | 2000-10-26 | 2004-01-28 | �Ʒ� | A fault tolerant liquid measurement system using multiple-model state estimators |
CN201334531Y (en) * | 2008-12-02 | 2009-10-28 | 北方工业大学 | Novel potline stop-start shunting device and system |
WO2017026010A1 (en) * | 2015-08-07 | 2017-02-16 | 三菱電機株式会社 | Device for predicting amount of photovoltaic power generation, and method for predicting amount of photovoltaic power generation |
CN109543699A (en) * | 2018-11-28 | 2019-03-29 | 北方工业大学 | Image abstract generation method based on target detection |
CN109614885A (en) * | 2018-11-21 | 2019-04-12 | 齐鲁工业大学 | A kind of EEG signals Fast Classification recognition methods based on LSTM |
CN110770760A (en) * | 2017-05-19 | 2020-02-07 | 渊慧科技有限公司 | Object-level prediction of future states of a physical system |
WO2020075767A1 (en) * | 2018-10-10 | 2020-04-16 | 旭化成株式会社 | Planning device, planning method, and planning program |
CN111563706A (en) * | 2020-03-05 | 2020-08-21 | 河海大学 | Multivariable logistics freight volume prediction method based on LSTM network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200348662A1 (en) * | 2016-05-09 | 2020-11-05 | Strong Force IoT Portfolio 2016, LLC | Platform for facilitating development of intelligence in an industrial internet of things system |
- 2021-01-27: Application CN202110111679.1A filed in China; granted as patent CN112765894B (status: Active)
Non-Patent Citations (2)
Title |
---|
Hou Jie et al. LSTM-based prediction of aluminum electrolysis cell condition. Light Metals. 2021, 33-37+62. * |
Kong Shuqi. Research on aluminum electrolysis cell state prediction algorithms. China Master's Theses Full-text Database (Electronic Journal). 2021, B023-244. * |
Also Published As
Publication number | Publication date |
---|---|
CN112765894A (en) | 2021-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106448151B (en) | Short-term traffic flow prediction method | |
CN109766950B (en) | Industrial user short-term load prediction method based on morphological clustering and LightGBM | |
CN110782658B (en) | Traffic prediction method based on LightGBM algorithm | |
CN112990556A (en) | User power consumption prediction method based on Prophet-LSTM model | |
CN111277434A (en) | Network flow multi-step prediction method based on VMD and LSTM | |
Hassan et al. | A hybrid of multiobjective Evolutionary Algorithm and HMM-Fuzzy model for time series prediction | |
CN112270355B (en) | Active safety prediction method based on big data technology and SAE-GRU | |
CN110708318A (en) | Network abnormal flow prediction method based on improved radial basis function neural network algorithm | |
CN113487855B (en) | Traffic flow prediction method based on EMD-GAN neural network structure | |
CN113642225A (en) | CNN-LSTM short-term wind power prediction method based on attention mechanism | |
CN113095550A (en) | Air quality prediction method based on variational recursive network and self-attention mechanism | |
CN112363896A (en) | Log anomaly detection system | |
CN111447217A (en) | Method and system for detecting flow data abnormity based on HTM under sparse coding | |
CN111898825A (en) | Photovoltaic power generation power short-term prediction method and device | |
CN112766603A (en) | Traffic flow prediction method, system, computer device and storage medium | |
CN113435124A (en) | Water quality space-time correlation prediction method based on long-time and short-time memory and radial basis function neural network | |
CN114118567A (en) | Power service bandwidth prediction method based on dual-channel fusion network | |
CN113095484A (en) | Stock price prediction method based on LSTM neural network | |
CN112765894B (en) | K-LSTM-based aluminum electrolysis cell state prediction method | |
CN113449919B (en) | Power consumption prediction method and system based on feature and trend perception | |
CN114219531A (en) | Waste mobile phone dynamic pricing method based on M-WU concept drift detection | |
CN114596726A (en) | Parking position prediction method based on interpretable space-time attention mechanism | |
CN113052373A (en) | Monthly runoff change trend prediction method based on improved ELM model | |
CN116542701A (en) | Carbon price prediction method and system based on CNN-LSTM combination model | |
CN111984514A (en) | Prophet-bLSTM-DTW-based log anomaly detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||