CN112990598A - Reservoir water level time sequence prediction method and system - Google Patents

Reservoir water level time sequence prediction method and system

Info

Publication number
CN112990598A
Authority
CN
China
Prior art keywords
short term
network model
data set
term memory
memory network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110351303.8A
Other languages
Chinese (zh)
Inventor
张仁贡
郑重
周国民
李锐
汪建宏
刘半藤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Yu Gong Mdt Infotech Ltd
Original Assignee
Zhejiang Yu Gong Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Yu Gong Mdt Infotech Ltd filed Critical Zhejiang Yu Gong Mdt Infotech Ltd
Priority to CN202110351303.8A priority Critical patent/CN112990598A/en
Publication of CN112990598A publication Critical patent/CN112990598A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Computational Linguistics (AREA)
  • Development Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Game Theory and Decision Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a reservoir water level time series prediction method and system. One long-short term memory network model is embedded into another long-short term memory network model and replaces that model's memory cells, forming a nested long-short term memory network model and thereby changing the way memory is kept inside the network. Because the memory cells of the nested model are accessed through the same gating scheme, the nested long-short term memory network model can guide memory forgetting and memory selection, making it better suited to processing multidimensional, multivariable reservoir water level time series data. This solves the problem of insufficient information-mining capacity in traditional reservoir water level time series prediction methods, as well as the weak robustness and reduced prediction accuracy of the prediction model caused by storing irrelevant memory.

Description

Reservoir water level time sequence prediction method and system
Technical Field
The application relates to the technical field of water conservancy and hydropower, in particular to a method and a system for predicting a reservoir water level time sequence.
Background
A rising reservoir water level is the manifestation of an approaching flood. Predicting the reservoir water level time series therefore provides flood information in time to remind relevant personnel to make corresponding preparations before the flood arrives, avoiding unnecessary casualties and property losses. However, the reservoir water level time series is influenced by factors such as historical rainfall, historical evaporation, historical reservoir flow and historical measured water level, and therefore exhibits non-stationary, nonlinear characteristics. Existing water level prediction methods have insufficient information-mining capability and cannot accurately predict a non-stationary, nonlinear reservoir water level time series influenced by multiple variables. Selecting a suitable artificial intelligence method is therefore the key to predicting the water level time series.
The Long Short-Term Memory network (LSTM) is a recurrent neural network with a feedback connection structure. It consists of four gated components and, by introducing carefully designed controllable self-loops, creates paths along which gradients can flow for long durations. It therefore has a strong ability to process multivariable long time series and can track information over longer periods, and it belongs to the class of deep learning models. However, the memory cells in the conventional LSTM gating system store memory unrelated to the current time step, which results in weak robustness and reduced prediction accuracy of the prediction model.
Disclosure of Invention
Based on this, when the long-short term memory network is applied in the conventional reservoir water level time series prediction method, the memory cells in the conventional long-short term memory network gating system store memory irrelevant to the current time step, so the robustness of the prediction model is weak and its prediction accuracy is reduced.
The application provides a reservoir water level time series prediction method, which comprises the following steps:
acquiring a plurality of historical factors influencing water level change and historical time sequence data corresponding to each historical factor, and generating a high-dimensional time sequence data set based on the historical time sequence data corresponding to each historical factor;
preprocessing a high-dimensional time series data set;
embedding a long-short term memory network model into another long-short term memory network model, replacing memory cells of the other long-short term memory network model to construct a nested long-short term memory network model, inputting a preprocessed high-dimensional time sequence data set into the nested long-short term memory network model, and outputting a future reservoir time sequence data set.
The present application further provides a reservoir water level time series prediction system, the system includes:
nesting long and short term memory network models;
and the server is in communication connection with the nested long-short term memory network model and is used for executing the reservoir water level time series prediction method.
According to the reservoir water level time series prediction method and system of the present application, one long-short term memory network model is embedded into another long-short term memory network model and replaces the memory cells of that model, forming a nested long-short term memory network model and thereby changing the way memory is kept inside the long-short term memory network model. Because the memory cells of the nested model are accessed through the same gating scheme, the nested long-short term memory network model can guide memory forgetting and memory selection, making it better suited to processing multidimensional, multivariable reservoir water level time series data. This solves the problems of insufficient information-mining capacity, the weak robustness of the prediction model caused by storing irrelevant memory, and the resulting reduction in prediction accuracy.
Drawings
Fig. 1 is a schematic flow chart illustrating a reservoir water level time series prediction method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a reservoir water level time series prediction system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a single long-short term memory network model according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a nested long-short term memory network model according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions and advantages of the present application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
The application provides a reservoir water level time series prediction method. It should be noted that the reservoir water level time series prediction method provided by the application can be applied to a reservoir water level time series prediction system. The reservoir water level time series prediction system can comprise a nested long-short term memory network model and a server which are in communication connection with each other.
In addition, the reservoir water level time series prediction method provided by the application is not limited to the execution subject. Optionally, the execution subject of the reservoir level time series prediction method provided by the present application may be a server in a reservoir level time series prediction system.
As shown in fig. 1, in an embodiment of the present application, the reservoir level time series prediction method includes the following steps S100 to S300:
s100, acquiring a plurality of historical factors influencing water level change and historical time series data corresponding to each historical factor. And generating a high-dimensional time series data set based on the historical time series data corresponding to each historical factor.
Specifically, the reservoir water level is influenced by a plurality of historical factors, such as the reservoir's historical rainfall, historical evaporation, and the like. In this step, the server stores the historical time series data corresponding to each historical factor at each historical time. Taking historical rainfall as an example, the server stores the reservoir's rainfall for each day within a 30-day window, forming the historical time series data corresponding to historical rainfall; such historical time series data is a one-dimensional data group extended transversely (in time).
In order to save storage space, the server classifies historical time-series data corresponding to each historical factor into a high-dimensional time-series data set, which is equivalent to aggregating the historical time-series data of each historical factor, wherein each historical factor is a dimension. Although the high-dimensional time-series data set is a data set containing scattered data, the high-dimensional time-series data set can be regarded as a data cluster as a whole, and is a data cluster which is not only spread in the horizontal direction (time), but also spread in the vertical direction (dimension, namely history factor), and therefore is also a high-dimensional data cluster.
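For illustration only, the following sketch (not part of the patent) shows one way such a high-dimensional time series data set could be assembled in Python with NumPy, with each historical factor occupying one row (dimension) and each day one column (time); the factor values below are synthetic placeholders.

```python
import numpy as np

days = 30
rng = np.random.default_rng(0)

# Hypothetical daily records for four historical factors over a 30-day window.
rainfall    = rng.gamma(2.0, 5.0, days)      # mm
evaporation = rng.normal(3.0, 0.5, days)     # mm
inflow      = rng.normal(120.0, 20.0, days)  # m^3/s
water_level = rng.normal(85.0, 1.0, days)    # m

# Stack the transversely (time) expanded series vertically, one dimension per
# historical factor, forming the high-dimensional data cluster.
dataset = np.vstack([rainfall, evaporation, inflow, water_level])
print(dataset.shape)  # (4, 30): N factors x T time steps
```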
And S200, preprocessing the high-dimensional time series data set.
Specifically, the purpose of the preprocessing is to simplify the data, remove noise in the high-dimensional time series data set, and facilitate the training of the subsequent high-dimensional time series data set.
S300, embedding one long-short term memory network model into the other long-short term memory network model, replacing memory cells of the other long-short term memory network model to construct a nested long-short term memory network model, inputting the preprocessed high-dimensional time sequence data set into the nested long-short term memory network model, and outputting a future reservoir time sequence data set.
Specifically, fig. 3 is a schematic structural diagram of a single long-short term memory network model, and fig. 4 is a schematic structural diagram of the nested long-short term memory network model. As can be seen, the nested long-short term memory network model is composed of two long-short term memory network models: one is embedded inside the other and replaces the other's memory cells.
The preprocessed high-dimensional time series dataset is training data. Specifically, after the model is constructed, the nested long-short term memory network model is trained by taking the preprocessed high-dimensional time series data set as training data. And finally, the trained nested long-short term memory network model can output a future reservoir time sequence data set.
In this embodiment, one long-short term memory network model is embedded into the other long-short term memory network model and replaces its memory cells, forming a nested long-short term memory network model and changing the way memory is kept inside the network. Because the memory cells of the nested model are accessed through the same gating scheme, the nested long-short term memory network model can guide memory forgetting and memory selection, making it better suited to processing multidimensional, multivariable reservoir water level time series data. This solves the problem of insufficient information-mining capacity in traditional reservoir water level time series prediction methods, as well as the weak robustness and reduced prediction accuracy of the prediction model caused by storing irrelevant memory.
In an embodiment of the present application, the historical factors include one or more of historical rainfall, historical evaporation, historical reservoir flow, and historical measured water level.
Specifically, the historical factors may not be limited to the above listed historical rainfall, historical evaporation, historical reservoir flow and historical measured water level, but may be any other historical factors that may affect the change of the water level in the reservoir.
In an embodiment of the present application, the S200 includes the following S210 to S230:
s210, the high-dimensional time series data set is subjected to standardization processing, and the high-dimensional time series data set after the standardization processing is generated.
And S220, performing dimension reduction on the high-dimensional time series data set subjected to the standardization processing to generate a time series data set subjected to the dimension reduction processing.
And S230, performing phase space reconstruction on the time series data set subjected to the dimension reduction processing to generate a reconstructed time series data set.
Specifically, the present embodiment introduces detailed steps of preprocessing of a high-dimensional time series dataset, which are normalization processing, dimension reduction processing, and phase space reconstruction, respectively.
In the embodiment, the training results of the subsequent nested long-short term memory network model are prevented from diverging through data standardization. Information loss is minimized while compressing data in a high-dimensional time series dataset by a dimension reduction process. And constructing a final time sequence data set through phase space reconstruction, so that accurate change information of different time points of the reservoir water level can be acquired.
In an embodiment of the present application, the S210 includes the following steps:
and S211, carrying out data standardization on the data in the high-dimensional time series data set based on the formula 1 to obtain the high-dimensional time series data set after the standardization processing.
X = {X_1, X_2, …, X_N},  X_i = (x_i - μ_i) / σ_i    (Equation 1)

Wherein, X is the high-dimensional time series data set after the normalization processing. N is the total number of historical factors affecting the water level change. X_i is the normalized time series data corresponding to each historical factor in the data set. x_i is the raw time series data corresponding to one historical factor. μ_i is the average of all time series data under that historical factor. σ_i is the standard deviation of all time series data under that historical factor.
Specifically, the purpose of normalization is to bring the data corresponding to different time nodes under each historical factor onto a common scale before gathering them into one set, which de-scatters and simplifies the data.
In this embodiment, the training results of the subsequent nested long-short term memory network model can be prevented from diverging through data standardization.
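A minimal sketch of the per-factor z-score normalization of Equation 1, assuming the (N factors × T steps) array layout used in the sketch above; the function name is this sketch's own.

```python
import numpy as np

def standardize(dataset: np.ndarray) -> np.ndarray:
    """Equation 1: X_i = (x_i - mean_i) / std_i, applied per factor (row)."""
    mean = dataset.mean(axis=1, keepdims=True)  # per-factor mean
    std = dataset.std(axis=1, keepdims=True)    # per-factor standard deviation
    return (dataset - mean) / std
```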
In an embodiment of the present application, the step S220 includes the following steps S221 to S227:
and S221, mapping the high-dimensional time series data set subjected to the standardization processing to a high-dimensional feature space F to generate a mapping data set. The normalized high-dimensional time series data set and the mapped data set are shown in equation 2.
φ: X = {X_1, X_2, …, X_N} → φ(X) = {φ(X_1), φ(X_2), …, φ(X_N)}    (Equation 2)

Wherein, X is the high-dimensional time series data set after the normalization processing. φ(X_i) is the mapping data set.
S222, constructing a covariance matrix of the mapping data set, and constructing a relation between the eigenvalues and eigenvectors of the covariance matrix of the mapping data set. The relation is shown in equation 3.
C = (1/N) Σ_{i=1}^{N} φ(X_i) φ(X_i)^T,  C v = λ v    (Equation 3)

Where C is the covariance matrix of the mapping data set. λ is an eigenvalue of the covariance matrix of the mapping data set. v is the corresponding eigenvector of the covariance matrix of the mapping data set. φ(X_i) is the mapping data set. i is the serial number of the historical factors affecting the water level change. N is the total number of historical factors affecting the water level change.
And S223, deriving formula 4 according to formula 3.
v = Σ_{i=1}^{N} α_i φ(X_i)    (Equation 4)

Wherein α_i are the coefficients of the equation. v is the eigenvector of the covariance matrix of the mapping data set. φ(X_i) is the mapping data set. i is the serial number of the historical factor influencing the water level change.
S224, defining an N×N matrix K with entries

K_{ij} = φ(X_i)^T φ(X_j)

S225, defining the Gaussian kernel

K_{ij} = K(X_i, X_j) = exp(-‖X_i - X_j‖² / (2σ²))    (Equation 5)

σ is the standard deviation of all data in the covariance matrix. Equation 6 is obtained by substituting Equation 4 and Equation 5 into Equation 3 and then simplifying.

N λ α = K α    (Equation 6)

Wherein α is the column vector with components α_i.
S226, normalizing the eigenvector v of the covariance matrix of the mapping data set to obtain the projection of the sample φ(X_i) onto v. The projection is shown in Equation 7.

T_i(X) = v^T φ(X_i) = Σ_{j=1}^{N} α_j K(X_j, X_i)    (Equation 7)

Wherein T_i(X) is the projection of the sample φ(X_i) onto v.

S227, taking T_i(X) as the time series data set after the dimension reduction processing.
Specifically, the present embodiment uses the Gaussian kernel function K(X_i, X_j) = exp(-‖X_i - X_j‖² / (2σ²)) to perform KPCA data dimension reduction. This removes unimportant historical factors, improves the prediction accuracy of the subsequent nested long-short term memory network model, and reduces the dimensionality of the normalized high-dimensional time series data set while minimizing information loss.
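A minimal KPCA sketch under the reading that each normalized factor series is one sample (Equations 2 to 7), using the Gaussian kernel of Equation 5. Kernel centering and the small-eigenvalue guard are standard KPCA practice added here; they are not steps stated in the patent.

```python
import numpy as np

def gaussian_kernel_matrix(X: np.ndarray, sigma: float) -> np.ndarray:
    """Equation 5: K_ij = exp(-||X_i - X_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kpca_reduce(X: np.ndarray, sigma: float, k: int) -> np.ndarray:
    """Solve N*lambda*alpha = K*alpha (Equation 6) and project the samples
    onto the top-k normalized eigenvectors (Equation 7)."""
    N = X.shape[0]
    K = gaussian_kernel_matrix(X, sigma)
    one = np.ones((N, N)) / N
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    eigval, eigvec = np.linalg.eigh(Kc)          # ascending eigenvalues
    idx = np.argsort(eigval)[::-1][:k]           # keep the top-k components
    alpha = eigvec[:, idx] / np.sqrt(np.maximum(eigval[idx], 1e-12))
    return Kc @ alpha                            # projections T_i(X)
```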
In an embodiment of the present application, the S230 includes the following steps:
S231, setting the reconstruction dimension and the time delay, and performing phase space reconstruction on the dimension-reduced time series data set T_i(X) to generate a reconstructed time series data set (X, Y). The reconstructed time series data set (X, Y) has the form shown in Equation 8.

X = [ T_1   T_{1+τ}   …   T_{1+(m-1)τ}
      T_2   T_{2+τ}   …   T_{2+(m-1)τ}
      ⋮
      T_n   T_{n+τ}   …   T_{n+(m-1)τ} ],
Y = [ T_{2+(m-1)τ}   T_{3+(m-1)τ}   …   T_{n+1+(m-1)τ} ]^T    (Equation 8)

Where m is the reconstruction dimension, τ is the time delay, and n is the number of rows of the matrix, i.e., the total number of reconstructed samples.
Specifically, this embodiment adopts a multidimensional data reconstruction method to obtain the change information of the reservoir water level at different time nodes, so as to obtain suitable training and test sets. For both the training set and the test set, training and testing are carried out during prediction by predicting Y from X.
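A minimal delay-embedding sketch of Equation 8 for a single reduced series; pairing each embedded window with the value one step past it is this sketch's reading of the (X, Y) split.

```python
import numpy as np

def phase_space_reconstruct(series: np.ndarray, m: int, tau: int):
    """Equation 8: each row of X holds m samples spaced tau apart;
    Y holds the next value after each embedded window."""
    n = len(series) - (m - 1) * tau - 1
    X = np.array([series[i : i + m * tau : tau] for i in range(n)])
    Y = series[(m - 1) * tau + 1 : (m - 1) * tau + 1 + n]
    return X, Y
```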
In an embodiment of the present application, the S300 includes the following S310 to S380:
s310, constructing a nested long-short term memory network model, wherein the nested long-short term memory network model is formed by nesting two long-short term memory network models.
S320, initializing model parameters of the nested long-short term memory network model, and setting an error threshold.
And S330, inputting the reconstructed time series data set (X, Y) into the nested long-short term memory network model.
And S340, controlling the nested long-short term memory network model to perform forward calculation.
And S350, defining a loss function of the nested long-short term memory network model.
And S360, adjusting the weight matrix of each structure in the nested long and short term memory network model according to the loss function, and generating the adjusted nested long and short term memory network model.
And S370, operating the adjusted nested long-short term memory network model, acquiring the future reservoir time series data set output by the output layer of the adjusted nested long-short term memory network model, calculating the error between the future reservoir time series data set output by the output layer and the actual data set, and judging whether the error is smaller than the error threshold.
And S380, if the error is smaller than the error threshold value, outputting a future reservoir time series data set output by the output layer.
Specifically, in S310, one long-short term memory network model is embedded inside another long-short term memory network model, and the memory cells of the other long-short term memory network model are replaced, so as to construct a nested long-short term memory network model.
S310 to S360 are the training process of the nested long-short term memory network model.
S370 is an error elimination process of the output result of the nested long-short term memory network model.
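A minimal sketch of the S320-to-S380 loop: initialize, run forward passes, and stop once the total error falls below the threshold. To stay self-contained it accepts any forward function and uses numerical (central-difference) gradients in place of the analytic updates the patent derives in Equations 19 to 24; the toy linear forward at the end is purely a stand-in for the nested model.

```python
import numpy as np

def train(forward, params, X, Y, alpha=1e-2, threshold=1e-4, max_epochs=500):
    """Loop until the total squared error (Equation 18) is below threshold."""
    def loss(p):
        return 0.5 * np.sum((forward(p, X) - Y) ** 2)  # Equation 18
    for _ in range(max_epochs):
        if loss(params) < threshold:                   # S370: error check
            break
        for W in params.values():                      # S360: adjust weights
            grad, eps = np.zeros_like(W), 1e-6
            for idx in np.ndindex(W.shape):
                old = W[idx]
                W[idx] = old + eps; up = loss(params)
                W[idx] = old - eps; down = loss(params)
                W[idx] = old
                grad[idx] = (up - down) / (2 * eps)    # central difference
            W -= alpha * grad                          # gradient-descent step
    return params

# Toy usage with a linear stand-in for the nested model's forward pass:
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Y = X @ np.array([1.0, -2.0, 0.5])
params = {"w": rng.normal(size=3)}
train(lambda p, x: x @ p["w"], params, X, Y)
```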
In an embodiment of the present application, the S340 includes the following S341 to S349:
and S341, calculating an expression of the forgetting gate in the nested long-short term memory network model according to the formula 9.
f_t = σ(W_{fx} x_t + W_{fh} h_{t-1} + b_f)    (Equation 9)

Wherein f_t is the forgetting gate. σ is the sigmoid function. W_{fx} is the first forgetting gate weight matrix. W_{fh} is the second forgetting gate weight matrix. b_f is the bias of the forgetting gate. x_t is the current input. h_{t-1} is the hidden state of the previous round.
And S342, calculating the expression of the input gate in the nested long-short term memory network model according to the formula 10.
i_t = σ(W_{ix} x_t + W_{ih} h_{t-1} + b_i)    (Equation 10)

Wherein i_t is the input gate. σ is the sigmoid function. W_{ix} is the first input gate weight matrix. W_{ih} is the second input gate weight matrix. b_i is the bias of the input gate. x_t is the current input. h_{t-1} is the hidden state of the previous round.
And S343, calculating the expression of the output gate in the nested long and short term memory network model according to the formula 11.
o_t = σ(W_{ox} x_t + W_{oh} h_{t-1} + b_o)    (Equation 11)

Wherein o_t is the output gate. σ is the sigmoid function. W_{ox} is the first output gate weight matrix. W_{oh} is the second output gate weight matrix. b_o is the bias of the output gate. x_t is the current input. h_{t-1} is the hidden state of the previous round.
S344, calculating the expression of the candidate memory cell in the nested long-short term memory network model according to the formula 12.
\tilde{c}_t = tanh(W_{cx} x_t + W_{ch} h_{t-1} + b_c)    (Equation 12)

Wherein \tilde{c}_t is the candidate memory cell. b_c is the bias of the candidate memory cell. W_{cx} is the first candidate memory cell weight matrix. W_{ch} is the second candidate memory cell weight matrix. x_t is the current input. h_{t-1} is the hidden state of the previous round.
S345, calculating the expression of the memory cells in the nested long-short term memory network model according to the formula 13.
c_t = f_t × c_{t-1} + i_t × \tilde{c}_t    (Equation 13)

Wherein c_t is the memory cell. c_{t-1} is the memory cell of the previous round. f_t is the forgetting gate. i_t is the input gate.
S346, calculating a new hidden state of the nested long-short term memory network model according to formula 14.
h_t = o_t × tanh(c_t)    (Equation 14)

Wherein h_t is the new round of hidden state. o_t is the output gate. c_t is the memory cell.
S347, a hidden state of the previous round of the internal model in the nested long-short term memory network model and an input of the internal model in the nested long-short term memory network model are calculated according to formula 15.
\bar{h}_{t-1} = f_t × c_{t-1},  \bar{x}_t = i_t × \tilde{c}_t    (Equation 15)

Wherein \bar{h}_{t-1} is the hidden state of the previous round of the internal model in the nested long-short term memory network model. \bar{x}_t is the input of the internal model in the nested long-short term memory network model. f_t is the forgetting gate. c_{t-1} is the memory cell of the previous round. i_t is the input gate. \tilde{c}_t is the candidate memory cell.
S348, the external memory cells in the nested long-short term memory network model are updated according to the formula 16. And assigning the value of the hidden state of the new round of the internal model in the nested long-short term memory network model to the external memory cell in the nested long-short term memory network model.
c_t = \bar{h}_t    (Equation 16)

Wherein c_t is the memory cell. \bar{h}_t is the new round of hidden state of the internal model in the nested long-short term memory network model.
And S349, calculating a future reservoir time sequence data set output by the output layer of the nested long-short term memory network model according to the formula 17.
y_t = σ(W_{yh} h_t)    (Equation 17)

Wherein y_t is the future reservoir time series data set. σ is the sigmoid function. W_{yh} is the output layer weight matrix. h_t is the new round of hidden state.
Specifically, as shown in fig. 3, in a single long-short term memory network model, x_t is the input of the model and h_t is the output of the model.

As shown in fig. 4, the nested long-short term memory network model comprises a peripheral model and an internal model, both of which are long-short term memory network models. The internal model exists as the memory cell c_t of the peripheral model.
Equations 9 to 12 are all parameter calculations of the peripheral model; a peripheral-model parameter is written without an overbar. Once Equations 9 to 12 have been evaluated, four quantities are available: the memory cell of the previous round c_{t-1}, the forgetting gate f_t, the input gate i_t, and the candidate memory cell \tilde{c}_t.

If there were no internal model, the memory cell c_t of the peripheral model would be computed from these four quantities by Equation 13, and the peripheral model would then output its calculation result h_t, the new round of hidden state, through Equation 14. Equations 13 and 14 express the calculation steps of the peripheral model that are necessarily triggered by training the nested long-short term memory network model.

However, because of the special structure of the nested long-short term memory network model, computing the peripheral model's memory cell c_t and its result h_t is not that simple. The four quantities c_{t-1}, f_t, i_t and \tilde{c}_t are fed into the internal model for further calculation: as shown in Equation 15, they yield the hidden state of the previous round of the internal model, \bar{h}_{t-1}, and the input of the internal model, \bar{x}_t. The symbols \bar{h}_{t-1} and \bar{x}_t are parameters of the internal model; an internal-model parameter is written with an overbar. The contents of this paragraph can be understood in conjunction with fig. 4.
Further, inside the internal model, based on the same model structure and the same algorithm, the new round of hidden state \bar{h}_t output by the internal model can be obtained from the internal model's hidden state of the previous round \bar{h}_{t-1} and its input \bar{x}_t. This new round of hidden state \bar{h}_t then serves as the memory cell c_t of the peripheral model, from which the true output result h_t of the peripheral model is obtained; see Equation 16.

Finally, the expression for the future reservoir time series data set output by the peripheral model's output layer can be derived, as shown in Equation 17.
In this embodiment, an internal long-short term memory network model is nested inside a peripheral long-short term memory network model to serve as the memory cells of the peripheral model. This changes the internal memory mode of the peripheral long-short term memory network model, makes the whole nested long-short term memory network model more suitable for processing long, multivariable reservoir water level time series, and solves the problems of insufficient information-mining capacity and the storage of irrelevant memory in the conventional technique.
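A minimal sketch of one forward step of the nested cell as Equations 9 to 17 describe it: the two products that a plain LSTM would sum in Equation 13 instead become the internal model's previous hidden state and current input (Equation 15), and the internal model's new hidden state becomes the peripheral memory cell (Equation 16). The weight-dictionary keys, shapes and initializer are this sketch's own choices, not the patent's notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate(w, g, x, h):
    """Shared gate form sigma(W_gx x + W_gh h + b_g) of Equations 9-11."""
    return sigmoid(w[g + "x"] @ x + w[g + "h"] @ h + w[g + "b"])

def nested_lstm_step(p, q, x_t, h_prev, c_prev, c_in_prev):
    """One step: p = peripheral-model weights, q = internal-model weights."""
    f_t = gate(p, "f", x_t, h_prev)                                # Eq. 9
    i_t = gate(p, "i", x_t, h_prev)                                # Eq. 10
    o_t = gate(p, "o", x_t, h_prev)                                # Eq. 11
    c_tilde = np.tanh(p["cx"] @ x_t + p["ch"] @ h_prev + p["cb"])  # Eq. 12
    h_in_prev = f_t * c_prev    # Eq. 15: internal previous hidden state
    x_in = i_t * c_tilde        # Eq. 15: internal input
    f_in = gate(q, "f", x_in, h_in_prev)
    i_in = gate(q, "i", x_in, h_in_prev)
    o_in = gate(q, "o", x_in, h_in_prev)
    c_in_tilde = np.tanh(q["cx"] @ x_in + q["ch"] @ h_in_prev + q["cb"])
    c_in = f_in * c_in_prev + i_in * c_in_tilde  # internal cell (Eq. 13 form)
    h_in = o_in * np.tanh(c_in)                  # internal output (Eq. 14 form)
    c_t = h_in                                   # Eq. 16: internal h -> outer cell
    h_t = o_t * np.tanh(c_t)                     # Eq. 14
    y_t = sigmoid(p["yh"] @ h_t)                 # Eq. 17: output layer
    return y_t, h_t, c_t, c_in

# Usage sketch: D-dimensional input, H hidden units, one output.
D, H = 4, 8
rng = np.random.default_rng(0)
def init(nx):  # hypothetical initializer for one LSTM's weight set
    w = {}
    for g in ("f", "i", "o", "c"):
        w[g + "x"] = rng.normal(0.0, 0.1, (H, nx))
        w[g + "h"] = rng.normal(0.0, 0.1, (H, H))
        w[g + "b"] = np.zeros(H)
    return w
p, q = init(D), init(H)
p["yh"] = rng.normal(0.0, 0.1, (1, H))
h = c = c_in = np.zeros(H)
for x_t in rng.normal(size=(5, D)):
    y, h, c, c_in = nested_lstm_step(p, q, x_t, h, c, c_in)
```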
In an embodiment of the present application, the S350 includes the following steps:
s351, defining a loss function of the nested long-short term memory network model according to the formula 18.
E_t = (1/2)(y_t - \hat{y}_t)²,  E = Σ_{t=1}^{T} E_t    (Equation 18)

Wherein E_t is the error at time t. E is the total error. y_t is the future reservoir time series data output by the output layer. \hat{y}_t is the target data set. T is the total time length.
Specifically, this embodiment only illustrates a method for setting the loss function, and does not limit the setting methods of other loss functions that may be used in this application.
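A one-function sketch of Equation 18, assuming predictions and targets are aligned arrays over the horizon T:

```python
import numpy as np

def loss_terms(preds: np.ndarray, targets: np.ndarray):
    """Equation 18: E_t = 0.5 * (y_t - yhat_t)^2 per step; E = sum over t."""
    e_t = 0.5 * (preds - targets) ** 2
    return e_t, float(e_t.sum())
```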
In an embodiment of the present application, the S360 includes the following S361 to S366:
s361, a weight gradient is calculated according to equation 19.
∂E/∂W_{fx} = Σ_{t=1}^{T} ∂E_t/∂W_{fx}    (Equation 19)

Wherein E_t is the error at time t. E is the total error. W_{fx} is the first forgetting gate weight matrix; the gradients of the other weight matrices are accumulated in the same way.
S362, adjust the output layer weight matrix according to the formula 20.
∂E/∂W_{yh} = Σ_{t=1}^{T} δ_{y,t} ⊗ h_t,  W'_{yh} = W_{yh} - α · ∂E/∂W_{yh}    (Equation 20)

Wherein W'_{yh} is the adjusted output layer weight matrix. δ_{y,t} is the residual of the output layer, and ⊗ denotes the outer product of two vectors. h_t is the new round of hidden state of the nested long-short term memory network model. ∂E/∂W_{yh} is the partial derivative of the output layer weight matrix. α is the model training speed parameter.
S363, adjust the first forgetting gate weight matrix and the second forgetting gate weight matrix according to formula 21.
∂E/∂W_{fx} = Σ_{t=1}^{T} δ_{f,t} ⊗ x_t,  ∂E/∂W_{fh} = Σ_{t=1}^{T} δ_{f,t} ⊗ h_{t-1};
W'_{fx} = W_{fx} - α · ∂E/∂W_{fx},  W'_{fh} = W_{fh} - α · ∂E/∂W_{fh}    (Equation 21)

Wherein δ_{f,t} is the residual of the forgetting gate. x_t is the current input. h_{t-1} is the hidden state of the previous round. ∂E/∂W_{fx} is the partial derivative of the first forgetting gate weight matrix. ∂E/∂W_{fh} is the partial derivative of the second forgetting gate weight matrix. α is the model training speed parameter. W'_{fx} is the adjusted first forgetting gate weight matrix. W'_{fh} is the adjusted second forgetting gate weight matrix.
S364, adjusting the first input gate weight matrix and the second input gate weight matrix according to equation 22.
∂E/∂W_{ix} = Σ_{t=1}^{T} δ_{i,t} ⊗ x_t,  ∂E/∂W_{ih} = Σ_{t=1}^{T} δ_{i,t} ⊗ h_{t-1};
W'_{ix} = W_{ix} - α · ∂E/∂W_{ix},  W'_{ih} = W_{ih} - α · ∂E/∂W_{ih}    (Equation 22)

Wherein δ_{i,t} is the residual of the input gate. x_t is the current input. ∂E/∂W_{ix} is the partial derivative of the first input gate weight matrix. ∂E/∂W_{ih} is the partial derivative of the second input gate weight matrix. h_{t-1} is the hidden state of the previous round. W'_{ix} is the adjusted first input gate weight matrix. W'_{ih} is the adjusted second input gate weight matrix. α is the model training speed parameter.
S365, the first output gate weight matrix and the second output gate weight matrix are adjusted according to the formula 23.
∂E/∂W_{ox} = Σ_{t=1}^{T} δ_{o,t} ⊗ x_t,  ∂E/∂W_{oh} = Σ_{t=1}^{T} δ_{o,t} ⊗ h_{t-1};
W'_{ox} = W_{ox} - α · ∂E/∂W_{ox},  W'_{oh} = W_{oh} - α · ∂E/∂W_{oh}    (Equation 23)

Wherein δ_{o,t} is the residual of the output gate. x_t is the current input. ∂E/∂W_{ox} is the partial derivative of the first output gate weight matrix. ∂E/∂W_{oh} is the partial derivative of the second output gate weight matrix. h_{t-1} is the hidden state of the previous round. W'_{ox} is the adjusted first output gate weight matrix. W'_{oh} is the adjusted second output gate weight matrix. α is the model training speed parameter.
S366, adjusting the first candidate memory cell matrix and the second candidate memory cell matrix according to the formula 24;
∂E/∂W_{cx} = Σ_{t=1}^{T} δ_{c,t} ⊗ x_t,  ∂E/∂W_{ch} = Σ_{t=1}^{T} δ_{c,t} ⊗ h_{t-1};
W'_{cx} = W_{cx} - α · ∂E/∂W_{cx},  W'_{ch} = W_{ch} - α · ∂E/∂W_{ch}    (Equation 24)

Wherein δ_{c,t} is the residual of the candidate memory cell. x_t is the current input. ∂E/∂W_{cx} is the partial derivative of the first candidate memory cell matrix. ∂E/∂W_{ch} is the partial derivative of the second candidate memory cell matrix. h_{t-1} is the hidden state of the previous round. W'_{cx} is the adjusted first candidate memory cell matrix. W'_{ch} is the adjusted second candidate memory cell matrix. α is the model training speed parameter.
In particular, it is understood that this embodiment adjusts, by means of the partial derivatives, the weight matrices of each important structure in the nested long-short term memory network model, where the structures comprise the forgetting gate, the input gate, the output gate, the output layer and the candidate memory cells. The embodiment is thus equivalent to adjusting the model parameters of the nested long-short term memory network model.
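As reconstructed above, Equations 20 to 24 all share one shape: accumulate the outer products of a structure's residuals with the activations that structure multiplies (x_t or h_{t-1}), then step against that gradient. A generic sketch of that update, with names of this sketch's own choosing:

```python
import numpy as np

def sgd_update(W, delta_seq, act_seq, alpha):
    """Generic form of Eqs. 20-24: W' = W - alpha * sum_t outer(delta_t, a_t)."""
    grad = sum(np.outer(d, a) for d, a in zip(delta_seq, act_seq))
    return W - alpha * grad

# e.g. the first half of Equation 21: W_fx_new = sgd_update(W_fx, deltas_f, xs, alpha)
```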
The application also provides a reservoir water level time series prediction system.
As shown in fig. 2, in an embodiment of the present application, the reservoir water level time series prediction system includes a nested long-short term memory network model 100 and a server 200. The server 200 is in communication connection with the nested long-short term memory network model 100 and is configured to perform the reservoir water level time series prediction method mentioned in any of the foregoing embodiments.
It should be noted that the reservoir water level time series prediction system provided in this embodiment can apply the aforementioned reservoir water level time series prediction method. For brevity of description, devices or components that appear in both the method and the system are labeled uniformly in this embodiment and are not described again for the method portion.
The technical features of the embodiments described above may be combined arbitrarily, and the order of execution of the method steps is not limited. For simplicity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the technical features, any such combination should be considered within the scope of this specification.

The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A reservoir water level time series prediction method is characterized by comprising the following steps:
acquiring a plurality of historical factors influencing water level change and historical time sequence data corresponding to each historical factor, and generating a high-dimensional time sequence data set based on the historical time sequence data corresponding to each historical factor; the historical factors comprise one or more of historical surface rainfall, historical evaporation capacity, historical reservoir flow and historical measured water level;
preprocessing the high-dimensional time series dataset;
embedding a long-short term memory network model into another long-short term memory network model, replacing memory cells of the other long-short term memory network model to construct a nested long-short term memory network model, inputting a preprocessed high-dimensional time sequence data set into the nested long-short term memory network model, and outputting a future reservoir time sequence data set.
2. The reservoir level time series prediction method according to claim 1, wherein the preprocessing the high-dimensional time series data set comprises:
standardizing the high-dimensional time sequence data set to generate a standardized high-dimensional time sequence data set;
performing dimensionality reduction on the high-dimensional time sequence data set subjected to the standardization processing to generate a dimensionality-reduced time sequence data set;
and performing phase space reconstruction on the time series data set subjected to the dimension reduction processing to generate a reconstructed time series data set.
3. The reservoir level time series prediction method according to claim 2, wherein the normalizing the high-dimensional time series dataset to generate a normalized high-dimensional time series dataset comprises:
based on formula 1, carrying out data standardization on data in the high-dimensional time sequence data set to obtain a high-dimensional time sequence data set after standardization;
X = {X_1, X_2, …, X_N},  X_i = (x_i - μ_i) / σ_i    (Equation 1)

wherein X is the normalized high-dimensional time series data set, N is the total number of historical factors influencing water level change, X_i is the normalized time series data corresponding to each historical factor in the data set, x_i is the raw time series data corresponding to a historical factor, μ_i is the average of all time series data under that historical factor, and σ_i is the standard deviation of all time series data under that historical factor.
4. The reservoir water level time series prediction method according to claim 3, wherein the performing dimension reduction processing on the normalized high-dimensional time series data set to generate a dimension-reduced time series data set comprises:
mapping the high-dimensional time sequence data set subjected to the standardization processing to a high-dimensional feature space F to generate a mapping data set; the normalized high-dimensional time series data set and the mapping data set are shown in formula 2;
φ: X = {X_1, X_2, …, X_N} → φ(X) = {φ(X_1), φ(X_2), …, φ(X_N)}    (Equation 2)

wherein X is the normalized high-dimensional time series data set and φ(X_i) is the mapping data set;

constructing a covariance matrix of the mapping data set, and constructing a relational expression between the eigenvalues and eigenvectors of the covariance matrix of the mapping data set, the relational expression being shown in Equation 3;

C = (1/N) Σ_{i=1}^{N} φ(X_i) φ(X_i)^T,  C v = λ v    (Equation 3)

wherein C is the covariance matrix of the mapping data set, λ is an eigenvalue of the covariance matrix of the mapping data set, v is the corresponding eigenvector of the covariance matrix of the mapping data set, φ(X_i) is the mapping data set, i is the serial number of the historical factors influencing the water level change, and N is the total number of the historical factors influencing the water level change;
deducing Equation 4 from Equation 3;

v = Σ_{i=1}^{N} α_i φ(X_i)    (Equation 4)

wherein α_i are the equation coefficients, v is the eigenvector of the covariance matrix of the mapping data set, φ(X_i) is the mapping data set, and i is the serial number of the historical factor influencing the water level change;
defining an N×N matrix K with entries

K_{ij} = φ(X_i)^T φ(X_j)

defining the Gaussian kernel

K_{ij} = K(X_i, X_j) = exp(-‖X_i - X_j‖² / (2σ²))    (Equation 5)

wherein σ is the standard deviation of all data in the covariance matrix; substituting Equation 4 and Equation 5 into Equation 3 and simplifying yields Equation 6;

N λ α = K α    (Equation 6)

wherein α is the column vector with components α_i;
normalizing the eigenvector v of the covariance matrix of the mapping data set to obtain the projection of the sample φ(X_i) onto v, the projection being shown in Equation 7;

T_i(X) = v^T φ(X_i) = Σ_{j=1}^{N} α_j K(X_j, X_i)    (Equation 7)

wherein T_i(X) is the projection of the sample φ(X_i) onto v;

taking T_i(X) as the time series data set after the dimension reduction processing.
5. The reservoir water level time series prediction method according to claim 4, wherein the performing phase space reconstruction on the dimensionality-reduced time series data set to generate a reconstructed time series data set comprises:
setting a reconstruction dimension and a time delay, and performing phase space reconstruction on the dimension-reduced time series data set T_i(X) to generate a reconstructed time series data set (X, Y) of the form shown in Equation 8;

X = [ T_1   T_{1+τ}   …   T_{1+(m-1)τ}
      T_2   T_{2+τ}   …   T_{2+(m-1)τ}
      ⋮
      T_n   T_{n+τ}   …   T_{n+(m-1)τ} ],
Y = [ T_{2+(m-1)τ}   T_{3+(m-1)τ}   …   T_{n+1+(m-1)τ} ]^T    (Equation 8)

wherein m is the reconstruction dimension, τ is the time delay, and n is the number of rows of the matrix, i.e., the total number of reconstructed samples.
6. The reservoir water level time series prediction method according to claim 5, wherein the building of the nested long and short term memory network model, inputting the preprocessed high-dimensional time series data set into the nested long and short term memory network model, and outputting the future reservoir time series data set comprises:
constructing a nested long-short term memory network model, wherein the nested long-short term memory network model is formed by nesting two long-short term memory network models;
initializing model parameters of the nested long and short term memory network model, and setting an error threshold;
inputting the reconstructed time series data set (X, Y) into the nested long-short term memory network model;
controlling the nested long-short term memory network model to perform forward calculation;
defining a loss function of the nested long-short term memory network model;
adjusting the weight matrix of each structure in the nested long and short term memory network model according to the loss function to generate an adjusted nested long and short term memory network model;
operating the adjusted nested long-short term memory network model, acquiring a future reservoir time sequence data set output by an output layer of the adjusted nested long-short term memory network model, calculating errors of the future reservoir time sequence data set and an actual data set output by the output layer, and judging whether the errors are smaller than an error threshold value;
and if the error is smaller than the error threshold value, outputting the future reservoir time sequence data set output by the output layer.
7. The reservoir water level time series prediction method according to claim 6, wherein the controlling the nested long and short term memory network model to perform forward calculation comprises:
calculating an expression of a forgetting gate in the nested long-short term memory network model according to a formula 9;
f_t = σ(W_{fx} x_t + W_{fh} h_{t-1} + b_f)    (Equation 9)

wherein f_t is the forgetting gate, σ is the sigmoid function, W_{fx} is the first forgetting gate weight matrix, W_{fh} is the second forgetting gate weight matrix, b_f is the bias of the forgetting gate, x_t is the current input, and h_{t-1} is the hidden state of the previous round;
calculating an expression of an input gate in the nested long and short term memory network model according to a formula 10;
i_t = σ(W_{ix} x_t + W_{ih} h_{t-1} + b_i)    (Equation 10)

wherein i_t is the input gate, σ is the sigmoid function, W_{ix} is the first input gate weight matrix, W_{ih} is the second input gate weight matrix, b_i is the bias of the input gate, x_t is the current input, and h_{t-1} is the hidden state of the previous round;
calculating an expression of an output gate in the nested long and short term memory network model according to a formula 11;
o_t = σ(W_{ox} x_t + W_{oh} h_{t-1} + b_o)    (Equation 11)

wherein o_t is the output gate, σ is the sigmoid function, W_{ox} is the first output gate weight matrix, W_{oh} is the second output gate weight matrix, b_o is the bias of the output gate, x_t is the current input, and h_{t-1} is the hidden state of the previous round;
calculating the expression of the candidate memory cells in the nested long-short term memory network model according to the formula 12;
\tilde{c}_t = tanh(W_{cx} x_t + W_{ch} h_{t-1} + b_c)    (Equation 12)

wherein \tilde{c}_t is the candidate memory cell, b_c is the bias of the candidate memory cell, W_{cx} is the first candidate memory cell weight matrix, W_{ch} is the second candidate memory cell weight matrix, x_t is the current input, and h_{t-1} is the hidden state of the previous round;
calculating an expression of memory cells in the nested long-short term memory network model according to a formula 13;
c_t = f_t × c_{t-1} + i_t × \tilde{c}_t    (Equation 13)

wherein c_t is the memory cell, c_{t-1} is the memory cell of the previous round, f_t is the forgetting gate, and i_t is the input gate;
calculating a new round of hidden states of the nested long-short term memory network model according to a formula 14;
h_t = o_t × tanh(c_t)    (Equation 14)

wherein h_t is the new round of hidden state, o_t is the output gate, and c_t is the memory cell;
calculating the hidden state of the previous round of the internal model in the nested long and short term memory network model and the input of the internal model in the nested long and short term memory network model according to the formula 15;
\bar{h}_{t-1} = f_t × c_{t-1},  \bar{x}_t = i_t × \tilde{c}_t    (Equation 15)

wherein \bar{h}_{t-1} is the hidden state of the previous round of the internal model in the nested long-short term memory network model, \bar{x}_t is the input of the internal model in the nested long-short term memory network model, f_t is the forgetting gate, c_{t-1} is the memory cell of the previous round, i_t is the input gate, and \tilde{c}_t is the candidate memory cell;
updating the external memory cells in the nested long and short term memory network model according to a formula 16, and endowing the value of the hidden state of the new round of the internal model in the nested long and short term memory network model to the external memory cells in the nested long and short term memory network model;
c_t = \bar{h}_t    (Equation 16)

wherein c_t is the memory cell and \bar{h}_t is the new round of hidden state of the internal model in the nested long-short term memory network model;
calculating a future reservoir time sequence data set output by an output layer of the nested long-short term memory network model according to a formula 17;
y_t = σ(W_{yh} h_t)    (Equation 17)

wherein y_t is the future reservoir time series data set, σ is the sigmoid function, W_{yh} is the output layer weight matrix, and h_t is the new round of hidden state.
8. The reservoir water level time series prediction method according to claim 7, wherein the defining the loss function of the nested long-short term memory network model comprises:
defining a loss function of the nested long-short term memory network model according to a formula 18;
E_t = (1/2)(y_t - \hat{y}_t)²,  E = Σ_{t=1}^{T} E_t    (Equation 18)

wherein E_t is the error at time t, E is the total error, y_t is the future reservoir time series data output by the output layer, \hat{y}_t is the target data set, and T is the total time length.
9. The reservoir water level time series prediction method according to claim 8, wherein the adjusting the weight matrix of each structure in the nested long-short term memory network model according to the loss function comprises:
calculating a weight gradient according to equation 19;
∂E/∂W_{fx} = Σ_{t=1}^{T} ∂E_t/∂W_{fx}    (Equation 19)

wherein E_t is the error at time t, E is the total error, and W_{fx} is the first forgetting gate weight matrix;
adjusting the output layer weight matrix according to equation 20;
∂E/∂W_{yh} = Σ_{t=1}^{T} δ_{y,t} ⊗ h_t,  W'_{yh} = W_{yh} - α · ∂E/∂W_{yh}    (Equation 20)

wherein W'_{yh} is the adjusted output layer weight matrix, δ_{y,t} is the residual of the output layer (⊗ denotes the outer product of two vectors), h_t is the new round of hidden state of the nested long-short term memory network model, ∂E/∂W_{yh} is the partial derivative of the output layer weight matrix, and α is the model training speed parameter;
adjusting the first forgetting gate weight matrix and the second forgetting gate weight matrix according to a formula 21;
∂E/∂W_{fx} = Σ_{t=1}^{T} δ_{f,t} ⊗ x_t,  ∂E/∂W_{fh} = Σ_{t=1}^{T} δ_{f,t} ⊗ h_{t-1};
W'_{fx} = W_{fx} - α · ∂E/∂W_{fx},  W'_{fh} = W_{fh} - α · ∂E/∂W_{fh}    (Equation 21)

wherein δ_{f,t} is the residual of the forgetting gate, x_t is the current input, h_{t-1} is the hidden state of the previous round, ∂E/∂W_{fx} is the partial derivative of the first forgetting gate weight matrix, ∂E/∂W_{fh} is the partial derivative of the second forgetting gate weight matrix, α is the model training speed parameter, W'_{fx} is the adjusted first forgetting gate weight matrix, and W'_{fh} is the adjusted second forgetting gate weight matrix;
adjusting the first input gate weight matrix and the second input gate weight matrix according to equation 22;
∂E/∂W_{ix} = Σ_{t=1}^{T} δ_{i,t} ⊗ x_t,  ∂E/∂W_{ih} = Σ_{t=1}^{T} δ_{i,t} ⊗ h_{t-1};
W'_{ix} = W_{ix} - α · ∂E/∂W_{ix},  W'_{ih} = W_{ih} - α · ∂E/∂W_{ih}    (Equation 22)

wherein δ_{i,t} is the residual of the input gate, x_t is the current input, ∂E/∂W_{ix} is the partial derivative of the first input gate weight matrix, ∂E/∂W_{ih} is the partial derivative of the second input gate weight matrix, h_{t-1} is the hidden state of the previous round, W'_{ix} is the adjusted first input gate weight matrix, W'_{ih} is the adjusted second input gate weight matrix, and α is the model training speed parameter;
adjusting the first output gate weight matrix and the second output gate weight matrix according to equation 23;
∂E/∂W_{ox} = Σ_{t=1}^{T} δ_{o,t} ⊗ x_t,  ∂E/∂W_{oh} = Σ_{t=1}^{T} δ_{o,t} ⊗ h_{t-1};
W'_{ox} = W_{ox} - α · ∂E/∂W_{ox},  W'_{oh} = W_{oh} - α · ∂E/∂W_{oh}    (Equation 23)

wherein δ_{o,t} is the residual of the output gate, x_t is the current input, ∂E/∂W_{ox} is the partial derivative of the first output gate weight matrix, ∂E/∂W_{oh} is the partial derivative of the second output gate weight matrix, h_{t-1} is the hidden state of the previous round, W'_{ox} is the adjusted first output gate weight matrix, W'_{oh} is the adjusted second output gate weight matrix, and α is the model training speed parameter;
adjusting the first candidate memory cell matrix and the second candidate memory cell matrix according to equation 24;
∂E/∂W_{cx} = Σ_{t=1}^{T} δ_{c,t} ⊗ x_t,  ∂E/∂W_{ch} = Σ_{t=1}^{T} δ_{c,t} ⊗ h_{t-1};
W'_{cx} = W_{cx} - α · ∂E/∂W_{cx},  W'_{ch} = W_{ch} - α · ∂E/∂W_{ch}    (Equation 24)

wherein δ_{c,t} is the residual of the candidate memory cell, x_t is the current input, ∂E/∂W_{cx} is the partial derivative of the first candidate memory cell matrix, ∂E/∂W_{ch} is the partial derivative of the second candidate memory cell matrix, h_{t-1} is the hidden state of the previous round, W'_{cx} is the adjusted first candidate memory cell matrix, W'_{ch} is the adjusted second candidate memory cell matrix, and α is the model training speed parameter.
10. A reservoir water level time series prediction system, comprising:
nesting long and short term memory network models;
a server, communicatively connected to the nested long-short term memory network model, for executing the reservoir level time series prediction method according to any one of claims 1 to 9.
CN202110351303.8A 2021-03-31 2021-03-31 Reservoir water level time sequence prediction method and system Pending CN112990598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110351303.8A 2021-03-31 2021-03-31 Reservoir water level time sequence prediction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110351303.8A 2021-03-31 2021-03-31 Reservoir water level time sequence prediction method and system

Publications (1)

Publication Number Publication Date
CN112990598A 2021-06-18

Family

ID=76338742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110351303.8A Pending CN112990598A (en) 2021-03-31 2021-03-31 Reservoir water level time sequence prediction method and system

Country Status (1)

Country Link
CN (1) CN112990598A (en)



Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951669A * 2017-05-19 2017-07-14 北京航空航天大学 A vision-cognition-based fault diagnosis method for rolling bearings under variable working conditions
CN109410575A * 2018-10-29 2019-03-01 北京航空航天大学 A road network trend prediction method based on capsule networks and nested long short-term memory neural networks
CN110095744A * 2019-04-04 2019-08-06 国网江苏省电力有限公司电力科学研究院 An error prediction method for electronic instrument transformers
CN110070713A * 2019-04-15 2019-07-30 浙江工业大学 A traffic flow forecasting method based on a two-way nested-grid ocean LSTM neural network
CN110070715A * 2019-04-29 2019-07-30 浙江工业大学 A road traffic flow prediction method based on the Conv1D-NLSTMs neural network structure
CN110210126A * 2019-05-31 2019-09-06 重庆大学 A prediction method for gear remaining useful life based on LSTMPP
CN111079906A * 2019-12-30 2020-04-28 燕山大学 Cement product specific surface area prediction method and system based on a long short-term memory network
CN111340284A * 2020-02-24 2020-06-26 成都大汇物联科技有限公司 Intelligent waterwheel room water level prediction method based on a long short-term memory network
CN111461201A * 2020-03-30 2020-07-28 重庆大学 Sensor data classification method based on phase space reconstruction
CN111914875A * 2020-06-05 2020-11-10 华南理工大学 Fault early-warning method for rotating machinery based on a Bayesian LSTM model
CN111832704A * 2020-06-30 2020-10-27 东南大学 Design method for a convolutional-input nested recurrent neural network
CN111866128A * 2020-07-20 2020-10-30 浙江树人学院(浙江树人大学) Internet of Things data flow detection method based on double-LSTM iterative learning
CN111863153A * 2020-07-24 2020-10-30 青岛洪锦智慧能源技术有限公司 Method for predicting total suspended solids in wastewater based on data mining
CN112446537A * 2020-11-20 2021-03-05 国网浙江省电力有限公司宁波供电公司 Short-term load prediction method based on a deep long short-term memory network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANG Hongquan et al., "A Clustering Algorithm for High-Dimensional Nonlinear Feature Data and Its Application", Journal of Xi'an Jiaotong University, vol. 51, no. 12, pages 49-55 *
WANG Xinyi et al., "A Prediction Method for Absorbed Solar Energy over Short Time Intervals Using Phase Space Reconstruction", Research and Exploration in Laboratory, vol. 39, no. 1, pages 32-36 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113866391A (en) * 2021-09-29 2021-12-31 天津师范大学 Deep learning model prediction factor interpretation method and application thereof in soil water content prediction
CN113866391B (en) * 2021-09-29 2024-03-08 天津师范大学 Deep learning model prediction factor interpretation method and application thereof in soil water content prediction
CN114091362A (en) * 2022-01-24 2022-02-25 江西飞尚科技有限公司 Water level prediction method, system, storage medium and equipment

Similar Documents

Publication Publication Date Title
CN107688871B (en) Water quality prediction method and device
CN106022954B (en) Multiple BP neural network load prediction method based on grey correlation degree
CN112990598A (en) Reservoir water level time sequence prediction method and system
CN112365033B (en) Wind power interval prediction method, system and storage medium
CN112215412B (en) Dissolved oxygen prediction method and device
CN114565021A Financial asset pricing method, system and storage medium based on a quantum recurrent neural network
Qiao et al. Identification of fuzzy neural networks by forward recursive input-output clustering and accurate similarity analysis
CN111709550A Tie-line planning method and system based on deep learning
CN116187835A Data-driven method and system for estimating the theoretical line loss interval of a transformer area
CN113836823A (en) Load combination prediction method based on load decomposition and optimized bidirectional long-short term memory network
CN112884236A (en) Short-term load prediction method and system based on VDM decomposition and LSTM improvement
CN113850438A (en) Public building energy consumption prediction method, system, equipment and medium
CN117521511A Granary temperature prediction method based on an improved grey wolf algorithm for optimizing LSTM
CN116011109B (en) Spacecraft service life prediction method and device, electronic equipment and storage medium
CN116663395A Rainfall equivalent surface generation method based on parameter-optimized support vector machine regression
Jadli et al. A Novel LSTM-GRU-Based Hybrid Approach for Electrical Products Demand Forecasting.
Angelov et al. Data-driven evolving fuzzy systems using eTS and FLEXFIS: Comparative analysis
CN114140158A (en) Power distribution network investment demand determination method, device, equipment and storage medium based on combination prediction
Yang et al. ELM parameter estimation in view of maximum likelihood
Delev et al. Application of Machine Learning in the producer's optimal control problem with non-stable demand
Huh Enhanced stochastic gradient descent with backward queried data for online learning
CN115986746B (en) Prediction method and device based on soft sensor, computer equipment and storage medium
CN113688774B Advanced learning-based high-rise building wind-induced response prediction and training method and device
Rajab et al. A new approach to ANFIS modeling using kernel based FCM clustering
Ma The superiority of XGboost model in the forecast of medical stock price in the period of COVID-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination