CN115310724A - Precipitation prediction method based on Unet and DCN_LSTM - Google Patents
Precipitation prediction method based on Unet and DCN_LSTM

- Publication number: CN115310724A
- Application number: CN202211233428.1A
- Authority: CN (China)
- Legal status: Pending (an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a precipitation prediction method based on Unet and DCN_LSTM, in the technical field of weather forecasting. The method reduces training time and improves the timeliness of precipitation prediction. By effectively combining the two models it improves prediction accuracy and captures spatio-temporal correlations. Deformable convolution learns the offsets that the input applies to the hidden states and memory cells, so the sampling positions of the convolution kernel are adjusted by the input rather than fixed, and the features of precipitation regions can be extracted effectively. A Bayesian algorithm removes the tedium of manual parameter tuning by learning an optimal hyper-parameter combination. The Unet and DCN_LSTM hybrid model achieves higher accuracy and better results than a single model.
Description
Technical Field
The invention relates to the technical field of weather forecasting, and in particular to a precipitation prediction method based on Unet and DCN_LSTM.
Background
Rainstorm events disrupt people's normal lives and cause great loss of life and property, so accurate precipitation prediction plays a vital role in daily life and travel. Advance forecasts of precipitation provide early warning to the public, disaster-risk-reduction agencies, government departments and infrastructure managers; once a rainstorm warning is issued, these parties act according to predetermined standard operating procedures to save lives and protect property. Prediction therefore has a huge influence on aviation services, public safety and many other fields. Precipitation prediction has long been a major problem: accurate prediction is vital not only to people's travel and to society, but also helps avoid heavy disasters such as rainstorms and debris flows.
The traditional method of precipitation forecasting is mainly based on numerical weather prediction (NWP), in which a supercomputer serves as the numerical calculation tool and the atmospheric motion and weather conditions for a future period are forecast from the current atmospheric state through systems of fluid-mechanics and thermodynamic equations.
There are many existing deep-learning precipitation prediction methods, but they adopt a single model rather than combining two models. Using Unet alone for precipitation prediction extracts too little temporal correlation and is unsuitable for longer horizons such as one-hour prediction; using an LSTM model alone involves complex computation, long training time and a large number of model parameters, which consumes computer memory. The precipitation prediction method based on Unet and DCN_LSTM is therefore proposed: compared with existing methods it extracts spatio-temporal correlations effectively, extracts the global characteristics of precipitation better as the feature map becomes smaller, and at the same time reduces the model parameters and training time, improving the accuracy of precipitation prediction.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a precipitation prediction method based on Unet and DCN_LSTM, which comprises the following steps:
S1, acquiring meteorological radar data and preprocessing it;
S2, constructing a Unet and DCN_LSTM hybrid model;
S3, adding a Bayesian algorithm to the Unet and DCN_LSTM hybrid model, performing hyper-parameter optimization, and searching for the optimal parameter combination;
S4, testing the Unet and DCN_LSTM hybrid model;
S5, converting the predictions on the test set from pixel values into radar reflectivity, and then obtaining rainfall from the relation between radar reflectivity and rainfall.
The technical scheme of the invention is further defined as follows:
further, in step S1, the method for preprocessing the weather radar data includes the following steps
S1.1, removing abnormal values and repeated values of data, and performing bilinear interpolation on missing values of the data;
s1.2, screening the data sets to ensure that each echo sequence has 20% precipitation coverage rate;
s1.3, carrying out normalization processing on the data, wherein a specific formula is as follows,
wherein X * Representing normalized radar echo intensity values, X max Indicating the maximum value of the radar echo intensity, X min Representing the minimum value of the radar echo intensity, and X represents the radar echo intensity value;
s1.4, dividing the data set by adopting a proportion of 8.
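The normalization and split in S1.3–S1.4 can be sketched in plain Python. This is a minimal illustration, not the patent's implementation; the function names and the 8:2 default are assumptions:

```python
def min_max_normalize(values):
    """Min-max normalize radar echo intensities to [0, 1]: X* = (X - Xmin) / (Xmax - Xmin)."""
    x_min, x_max = min(values), max(values)
    span = x_max - x_min
    if span == 0:  # constant field: map everything to 0
        return [0.0 for _ in values]
    return [(v - x_min) / span for v in values]

def split_dataset(sequences, train_ratio=0.8):
    """Split echo sequences into training and test portions (8:2 by default)."""
    cut = int(len(sequences) * train_ratio)
    return sequences[:cut], sequences[cut:]

echoes = [5.0, 20.0, 35.0, 50.0, 65.0]
norm = min_max_normalize(echoes)               # [0.0, 0.25, 0.5, 0.75, 1.0]
train, test = split_dataset(list(range(10)))   # 8 training items, 2 test items
```

In practice the same min/max computed on the training set would be reused for the test set, so the two splits share one value range.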
In the foregoing precipitation prediction method based on Unet and DCN_LSTM, in step S2 the method for constructing the Unet and DCN_LSTM hybrid model comprises the following steps:
S2.1, the hybrid-model encoder inputs the training-set data into the model; after two 3 × 3 convolutional layers and a max-pooling layer the feature map shrinks to half its original size, and after two further 3 × 3 convolutional layers the number of channels is doubled;
S2.2, the middle of the hybrid model uses DCN_LSTM to extract the temporal and spatial features of the radar echo sequence; the DCN_LSTM consists of several DCN_LSTM recurrent units, and the feature maps output by the encoder are decomposed and input into the DCN_LSTM recurrent units in turn for training;
S2.3, the hybrid-model decoder concatenates the radar echo sequence output by the DCN_LSTM along the channel dimension, passes it through two 3 × 3 convolutional layers, up-samples it, connects it by skip connection with the feature map output by the encoder, passes it through two 3 × 3 convolutional layers and one 1 × 1 convolutional layer, and finally outputs the predicted radar echo sequence.
In the foregoing, in step S2.2, the DCN_LSTM uses deformable convolution to learn the offsets of the input X with respect to the hidden state H and the memory cell C, thereby updating H and C, and slides over the input picture. The deformable convolution takes the obtained feature map as input and applies a convolution layer to it to obtain the deformation offsets; the offset layer has 2N channels, because a translation in the plane must vary in both the x and y directions.
The specific formula of the deformable convolution is as follows:

y(p_0) = Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n + Δp_n)

where R denotes a 3 × 3 convolution kernel, and (−1, −1), (−1, 0), …, (0, 1), (1, 1) denote the points in the kernel, whose coordinates are integers; y denotes the feature matrix produced by the deformable convolution, Δp_n denotes the offset of each point within the 3 × 3 kernel obtained through neural network learning, p_0 denotes the centre point, i.e. the (0, 0) point, and p_n denotes the points defined in the range R. Compared with a standard convolution, a deformable convolution learns an additional offset matrix by convolution.
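The sampling rule y(p_0) = Σ w(p_n)·x(p_0 + p_n + Δp_n), with bilinear interpolation for fractional offsets, can be exercised on a tiny grid. A toy sketch: the function names, the 3 × 3 input and the uniform weights are all illustrative, not the patent's network:

```python
def bilinear(img, y, x):
    """Bilinearly sample img (a list of rows) at fractional location (y, x); zero outside."""
    h, w = len(img), len(img[0])
    y0, x0 = int(y // 1), int(x // 1)
    dy, dx = y - y0, x - x0
    def at(r, c):
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0.0
    return (at(y0, x0) * (1 - dy) * (1 - dx) + at(y0, x0 + 1) * (1 - dy) * dx
            + at(y0 + 1, x0) * dy * (1 - dx) + at(y0 + 1, x0 + 1) * dy * dx)

def deformable_conv_at(img, p0, weights, offsets):
    """y(p0) = sum over the 3x3 grid R of w(pn) * x(p0 + pn + delta_pn)."""
    R = [(r, c) for r in (-1, 0, 1) for c in (-1, 0, 1)]
    out = 0.0
    for pn, w_pn, dpn in zip(R, weights, offsets):
        y = p0[0] + pn[0] + dpn[0]
        x = p0[1] + pn[1] + dpn[1]
        out += w_pn * bilinear(img, y, x)
    return out

img = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
weights = [1.0 / 9] * 9
zero_off = [(0.0, 0.0)] * 9   # zero offsets reduce to a standard convolution
val = deformable_conv_at(img, (1, 1), weights, zero_off)   # mean of the 3x3 patch = 5.0
```

With zero offsets the result equals an ordinary convolution; nonzero learned offsets shift each of the nine sampling points independently, which is what lets the kernel follow the shape of a precipitation region.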
In the aforementioned precipitation prediction method based on Unet and DCN_LSTM, the DCN_LSTM model comprises several DCN_LSTM recurrent units, in which feature information is screened and transmitted through a gating mechanism. The forget gate, input gate, modulation gate, output gate, temporal memory cell and hidden state of ConvLSTM are retained, denoted f_t, i_t, g_t, o_t, C_t and H_t respectively. A spatial cell M_t is also included, which extracts and transfers spatial structure features vertically between layers. Deformable convolution is added to learn the offsets of the input X with respect to the hidden state H and the memory cell C:

H'_{t−1}^1 = DCN(X_t, H_{t−1}^1),  C'_{t−1}^1 = DCN(X_t, C_{t−1}^1)

where DCN denotes the deformable convolutional network, X_t denotes the input picture, the subscript t denotes the input time, H_{t−1}^1 and C_{t−1}^1 denote the hidden state and the memory cell, the subscript t−1 denotes the previous time step, the superscript 1 denotes the first layer, and H'_{t−1}^1 and C'_{t−1}^1 denote the new hidden state and memory cell obtained after the update.
In the foregoing precipitation prediction method based on Unet and DCN_LSTM, in the DCN_LSTM model the input gate, update gate and forget gate that update the memory cell are as follows:

i_t = σ(W_xi ∗ X_t + W_hi ∗ H_{t−1}^l + b_i)
g_t = tanh(W_xg ∗ X_t + W_hg ∗ H_{t−1}^l + b_g)
f_t = σ(W_xf ∗ X_t + W_hf ∗ H_{t−1}^l + b_f)

where i_t denotes the input gate that updates the memory cell; σ denotes the sigmoid activation function; W_xi denotes the parameter matrix trained between the input X and input gate i; W_hi denotes the parameter matrix trained between the hidden state H and input gate i; X_t denotes the input at time t; H_{t−1}^l denotes the hidden state of layer l at time t−1; b_i denotes the bias of input gate i;
g_t denotes the update gate that updates the memory cell; tanh denotes the tanh activation function; W_xg denotes the parameter matrix trained between the input X and update gate g; W_hg denotes the parameter matrix trained between the hidden state H and update gate g; b_g denotes the bias of update gate g;
f_t denotes the forget gate that updates the memory cell; W_xf denotes the parameter matrix trained between the input X and forget gate f; W_hf denotes the parameter matrix trained between the hidden state H and forget gate f; b_f denotes the bias of forget gate f; ∗ denotes convolution;
the input gate, update gate and forget gate that update the spatial cell are as follows:

i'_t = σ(W'_xi ∗ X_t + W_mi ∗ M_t^{l−1} + b'_i)
g'_t = tanh(W'_xg ∗ X_t + W_mg ∗ M_t^{l−1} + b'_g)
f'_t = σ(W'_xf ∗ X_t + W_mf ∗ M_t^{l−1} + b'_f)

where i'_t denotes the input gate that updates the spatial cell; σ denotes the sigmoid activation function; W'_xi denotes the parameter matrix trained between the input X and input gate i; W_mi denotes the parameter matrix trained between the spatial cell M and input gate i; X_t denotes the input at time t; M_t^{l−1} denotes the spatial cell of layer l−1 at time t; b'_i denotes the bias of input gate i;
g'_t denotes the update gate that updates the spatial cell; tanh denotes the tanh activation function; W'_xg denotes the parameter matrix trained between the input X and update gate g; W_mg denotes the parameter matrix trained between the spatial cell M and update gate g; b'_g denotes the bias of update gate g;
f'_t denotes the forget gate that updates the spatial cell; W'_xf denotes the parameter matrix trained between the input X and forget gate f; W_mf denotes the parameter matrix trained between the spatial cell M and forget gate f; b'_f denotes the bias of forget gate f; ∗ denotes convolution;
the hidden state, i.e. the output, is updated from the memory cell, the spatial cell and the output gate:

C_t^l = f_t ∘ C_{t−1}^l + i_t ∘ g_t
M_t^l = f'_t ∘ M_t^{l−1} + i'_t ∘ g'_t
o_t = σ(W_xo ∗ X_t + W_ho ∗ H_{t−1}^l + W_co ∘ C_t^l + W_mo ∘ M_t^l + b_o)
H_t^l = o_t ∘ tanh(W_{1×1} ∗ [C_t^l, M_t^l])

where C_t^l denotes the memory cell of layer l at time t; i_t denotes the input gate; g_t denotes the update gate; f_t denotes the forget gate; C_{t−1}^l denotes the memory cell of layer l at time t−1; M_t^l denotes the spatial cell of layer l at time t; i'_t denotes the input gate of the spatial cell; g'_t denotes the update gate of the spatial cell; f'_t denotes the forget gate of the spatial cell; M_t^{l−1} denotes the spatial cell of layer l−1 at time t; o_t denotes the output gate; σ denotes the sigmoid activation function;
W_xo denotes the parameter matrix trained between the input X and output gate o; W_ho denotes the parameter matrix trained between the hidden state H and output gate o; W_co denotes the parameter matrix trained between the memory cell C and output gate o; W_mo denotes the parameter matrix trained between the spatial cell M and output gate o; X_t denotes the input at time t; H_{t−1}^l denotes the hidden state of layer l at time t−1; b_o denotes the bias of the output gate; W_{1×1} denotes a convolution kernel of size 1 × 1; H_t^l denotes the new hidden state obtained by the update; ∘ denotes element-wise matrix multiplication; ∗ denotes convolution.
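The gate equations above can be exercised with a scalar toy version in which convolutions are replaced by scalar multiplication and every weight is 1 with zero bias. This is only an illustration of the data flow through the temporal branch, spatial branch and output gate, not the patent's network:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def st_lstm_step(x, h_prev, c_prev, m_below, w=1.0):
    """One scalar ST-LSTM-style update: temporal gates read (x, h_prev),
    spatial gates read (x, m_below); every weight is w for simplicity."""
    # temporal branch: i_t, g_t, f_t update the memory cell C
    i_t = sigmoid(w * x + w * h_prev)
    g_t = math.tanh(w * x + w * h_prev)
    f_t = sigmoid(w * x + w * h_prev)
    c_t = f_t * c_prev + i_t * g_t
    # spatial branch: i'_t, g'_t, f'_t update the spatial cell M
    i_s = sigmoid(w * x + w * m_below)
    g_s = math.tanh(w * x + w * m_below)
    f_s = sigmoid(w * x + w * m_below)
    m_t = f_s * m_below + i_s * g_s
    # output gate sees both cells; the 1x1 conv becomes a plain average here
    o_t = sigmoid(w * x + w * h_prev + w * c_t + w * m_t)
    h_t = o_t * math.tanh((c_t + m_t) / 2.0)
    return h_t, c_t, m_t

h, c, m = st_lstm_step(x=0.5, h_prev=0.0, c_prev=0.0, m_below=0.0)
```

Because the hidden state and spatial cell start at zero, the temporal and spatial branches receive identical inputs here, so c and m coincide; in the real model the convolutional weights and the deformable-convolution offsets break that symmetry.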
In the foregoing precipitation prediction method based on Unet and DCN_LSTM, in step S3 the number of hidden-layer neurons, the batch size and the learning rate are optimized by a Bayesian algorithm, comprising the following steps:
S3.1, assume a hyper-parameter combination x = {x_1, x_2, x_3}, where x_1, x_2 and x_3 denote the number of hidden-layer neurons, the batch size and the learning rate respectively, and assume that the loss function and the set hyper-parameters have a mapping relation
f: X → R, for which the minimizing value x* ∈ X must be determined;
S3.2, within the ranges of the hyper-parameters to be optimized, obtain random initialization points x^(n), input the experimental data to train the model, obtain the response value y^(n) = f(x^(n)) of the loss function, and establish a Gaussian-process regression with covariance matrix

K = [k(x_i, x_j)] + σ_n² I

where k is a constant-form kernel, K(x) denotes the covariance matrix, and σ_n² denotes the variance of sample n;
S3.3, based on the sampling function PI, select the next hyper-parameter combination sampling point x_{n+1} from the Gaussian regression model, where the sampling function PI (written for minimization of the loss) is

PI(x) = Φ((f(x*) − μ(x) − ξ) / σ(x))

where Φ(·) denotes the cumulative density function of the normal distribution, μ(x) and σ(x) denote the mean and variance of the objective-function value respectively, f(x*) denotes the optimal objective-function value so far, and ξ denotes a trade-off parameter;
S3.4, bring the selected group of hyper-parameter combinations into model training and output the mean-square error between the real ground-observation values and the predicted radar echo sequence; if the mean-square error is smaller than a preset threshold, stop updating and output the optimal hyper-parameter combination; if not, update the hyper-parameters to x_{n+1} and repeat steps S3.2 to S3.4 until a hyper-parameter combination whose mean-square error is smaller than the preset threshold is found.
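The PI acquisition step can be computed directly from the Gaussian posterior mean and standard deviation using the standard-normal CDF. A minimal sketch for loss minimization; the function names and the trade-off parameter default ξ = 0.01 are assumptions:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative density function, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probability_of_improvement(mu, sigma, f_best, xi=0.01):
    """PI for loss minimization: Phi((f_best - mu - xi) / sigma).
    mu, sigma: GP posterior mean / std at a candidate point;
    f_best: best (lowest) loss observed so far; xi: exploration margin."""
    if sigma <= 0.0:
        return 1.0 if mu < f_best - xi else 0.0
    return normal_cdf((f_best - mu - xi) / sigma)

# a candidate whose predicted loss beats the incumbent scores high; a worse one scores low
pi_good = probability_of_improvement(mu=0.10, sigma=0.05, f_best=0.20)
pi_bad = probability_of_improvement(mu=0.30, sigma=0.05, f_best=0.20)
```

The next hyper-parameter combination x_{n+1} would be the candidate maximizing this PI value over the search ranges.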
In the aforementioned method for predicting precipitation based on Unet and DCN_LSTM, in step S3.4 the preset threshold is set to 0.0001.
In the method for predicting precipitation based on Unet and DCN_LSTM, step S4 comprises the following steps:
S4.1, load the trained model weights, run the test, and save the results in picture format;
S4.2, adopt the mean-square error, structural similarity and critical success index as the evaluation indexes of the test set.
The mean-square error evaluates the pixel-wise difference between two pictures:

MSE = (1/n) Σ_{i=1}^{n} (Y_i − Ŷ_i)²

where n denotes the total number of samples, i denotes the index of the sample points, Y denotes the real label of the real radar echo map, and Ŷ denotes the predicted radar echo map.
The structural similarity measures the similarity of two pictures:

SSIM(x, y) = ((2 u_x u_y + C_1)(2 σ_xy + C_2)) / ((u_x² + u_y² + C_1)(σ_x² + σ_y² + C_2))

where u_x and u_y denote the means of x and y respectively, σ_x and σ_y denote the standard deviations of x and y respectively, σ_xy denotes the covariance of the two pictures x and y, and C_1 and C_2 denote constants.
The specific formula for the critical success index is:

CSI = TP / (TP + FN + FP)

where TP indicates that the true category is positive and the prediction is also positive, FP indicates that the true category is negative but the prediction is positive, and FN indicates that the true category is positive but the prediction is negative.
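The MSE and CSI indexes can be sketched in a few lines (SSIM is omitted here because it needs windowed statistics). The function names, the reflectivity threshold and the sample values are illustrative:

```python
def mse(y_true, y_pred):
    """Mean-square error between two flattened radar echo maps."""
    n = len(y_true)
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n

def csi(y_true, y_pred, threshold):
    """Critical success index TP / (TP + FN + FP) after thresholding both maps."""
    tp = fp = fn = 0
    for a, b in zip(y_true, y_pred):
        obs, pred = a >= threshold, b >= threshold
        if obs and pred:
            tp += 1
        elif not obs and pred:
            fp += 1
        elif obs and not pred:
            fn += 1
    denom = tp + fn + fp
    return tp / denom if denom else 1.0

truth = [0.0, 0.4, 0.8, 0.9]
pred = [0.1, 0.3, 0.7, 0.9]
err = mse(truth, pred)                      # (0.01 + 0.01 + 0.01 + 0) / 4 = 0.0075
score = csi(truth, pred, threshold=0.5)     # TP=2, FP=0, FN=0 -> 1.0
```

CSI is computed per reflectivity threshold, so in practice it is reported at several rain-rate levels rather than once.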
In the aforementioned precipitation prediction method based on Unet and DCN_LSTM, in step S5 the predictions on the test set are converted into radar reflectivity through the pixel values, where radar_value denotes the radar-reflectivity value of each pixel point obtained by the conversion formula and pixel_value denotes the value of each pixel point.
The rainfall is then obtained from the relation between radar reflectivity and rainfall:

Z = a · R^b

where Z denotes the radar reflectivity, R denotes the rainfall rate, and a and b denote coefficients.
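Inverting Z = a·R^b gives the rainfall rate from reflectivity. A sketch under stated assumptions: the Marshall–Palmer coefficients a = 200, b = 1.6 are a common textbook choice, not values given by the patent, and reflectivity in dBZ is first converted to linear Z via Z = 10^(dBZ/10):

```python
def rainfall_from_dbz(dbz, a=200.0, b=1.6):
    """Invert the Z-R relation Z = a * R**b for the rainfall rate R (mm/h).
    dbz: reflectivity in dBZ; linear reflectivity Z = 10**(dbz / 10)."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

rate = rainfall_from_dbz(40.0)   # roughly 11-12 mm/h for a moderate-to-heavy echo
```

The same routine applied per pixel turns a predicted radar echo map into a rainfall map, completing step S5.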
The beneficial effects of the invention are as follows:
The precipitation prediction method shortens training time and improves the timeliness of precipitation prediction. The two models are combined effectively, which improves the accuracy of precipitation prediction and captures spatio-temporal correlations. Deformable convolution learns the offsets that the input applies to the hidden states and memory cells, so the positions of the convolution kernel can be adjusted by the input rather than being fixed, and the features of precipitation regions can be extracted effectively. The Bayesian algorithm removes the complexity of manual parameter tuning by learning the optimal hyper-parameter combination. Multi-index evaluation shows that prediction with the Unet and DCN_LSTM hybrid model is more accurate and effective than prediction with a single model.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a schematic diagram of the hybrid model of Unet and DCN_LSTM in the present invention;
FIG. 3 is a schematic flow chart of the Bayesian algorithm for hyperparametric optimization in the present invention.
Detailed Description
The present embodiment provides a precipitation prediction method based on Unet and DCN_LSTM, as shown in FIGS. 1 to 3, comprising the following steps.
S1, acquire meteorological radar data. The data used are a radar echo sequence data set; 10 frames are input and 10 frames are predicted, with an interval of 6 minutes between frames, i.e. the rainfall of the next hour is predicted from the historical data of the previous hour. The meteorological radar data are preprocessed as follows:
S1.1, remove abnormal values and duplicate values from the data, and fill missing values by bilinear interpolation;
S1.2, screen the data set so that each echo sequence has at least 20% precipitation coverage; without this screening, many sequences would likely contain no precipitation at all, and the trained model would perform poorly;
S1.3, normalize the data with the formula

X* = (X − X_min) / (X_max − X_min)

where X* denotes the normalized radar echo intensity value, X_max denotes the maximum radar echo intensity, X_min denotes the minimum radar echo intensity, and X denotes the radar echo intensity value;
S1.4, divide the data set into training and test sets at a ratio of 8:2.
S2, construct the Unet and DCN_LSTM hybrid model. The hybrid model mainly consists of the first half and the second half of Unet, which form the encoder and decoder of the model respectively; the middle of Unet originally contains convolutional layers, which in this method are replaced by the DCN_LSTM model so that temporal and spatial correlations can be extracted effectively. The specific steps are as follows:
S2.1, the hybrid-model encoder adopts the first half of Unet. The training-set data are input into the model with shape [8, 10, 200, 200], where 8 denotes the batch size, 10 denotes the input sequence length, and 200 and 200 denote the length and width of the input picture. The data pass through two 3 × 3 convolutional layers, each followed by a rectified linear unit (to make the model nonlinear) and batch normalization, then through a 2 × 2 max-pooling layer; after pooling the number of channels is doubled and the input radar echo picture shrinks to half its original size, so the picture length becomes 100 and the width 100. Then two further 3 × 3 convolutional layers follow, each with a rectified linear unit and batch normalization, doubling the number of channels to 128;
S2.2, the middle of the hybrid model uses DCN_LSTM to extract the temporal and spatial features of the radar echo sequence. The DCN_LSTM consists of several DCN_LSTM recurrent units; the encoder output [8, 128, 100, 100] is reshaped to [8, 128, 1, 100, 100], and the converted data are input into the DCN_LSTM recurrent units in turn for training.
The DCN_LSTM uses deformable convolution to learn the offsets of the input X with respect to the hidden state H and the memory cell C, thereby updating H and C, and slides over the input picture. The deformable convolution takes the obtained feature map as input and applies a convolution layer to it to obtain the deformation offsets; the offset layer has 2N channels, because a translation in the plane must vary in both the x and y directions.
The specific formula of the deformable convolution is as follows:

y(p_0) = Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n + Δp_n)

where R denotes a 3 × 3 convolution kernel, and (−1, −1), (−1, 0), …, (0, 1), (1, 1) denote the points in the kernel, whose coordinates are integers; y denotes the feature matrix produced by the deformable convolution, Δp_n denotes the offset of each point within the 3 × 3 kernel obtained through neural network learning, p_0 denotes the centre point, i.e. the (0, 0) point, and p_n denotes the points defined in the range R. Compared with a standard convolution, a deformable convolution learns an additional offset matrix by convolution.
The DCN_LSTM model comprises several DCN_LSTM recurrent units, in which feature information is screened and transmitted through a gating mechanism. The forget gate, input gate, modulation gate, output gate, temporal memory cell and hidden state of ConvLSTM are retained, denoted f_t, i_t, g_t, o_t, C_t and H_t respectively. A spatial cell M_t is also included, which extracts and transfers spatial structure features vertically between layers, and deformable convolution is added to learn the offsets of the input X with respect to the hidden state H and the memory cell C:

H'_{t−1}^1 = DCN(X_t, H_{t−1}^1),  C'_{t−1}^1 = DCN(X_t, C_{t−1}^1)

where DCN denotes the deformable convolutional network, X_t denotes the input picture, the subscript t denotes the input time, H_{t−1}^1 and C_{t−1}^1 denote the hidden state and the memory cell, the subscript t−1 denotes the previous time step, the superscript 1 denotes the first layer, and H'_{t−1}^1 and C'_{t−1}^1 denote the new hidden state and memory cell obtained after the update.
In the DCN_LSTM model, the input gate, update gate and forget gate that update the memory cell are defined as follows:

i_t = σ(W_xi ∗ X_t + W_hi ∗ H_{t−1}^l + b_i)
g_t = tanh(W_xg ∗ X_t + W_hg ∗ H_{t−1}^l + b_g)
f_t = σ(W_xf ∗ X_t + W_hf ∗ H_{t−1}^l + b_f)

where i_t denotes the input gate that updates the memory cell; σ denotes the sigmoid activation function; W_xi denotes the parameter matrix trained between the input X and input gate i; W_hi denotes the parameter matrix trained between the hidden state H and input gate i; X_t denotes the input at time t; H_{t−1}^l denotes the hidden state of layer l at time t−1; b_i denotes the bias of input gate i;
g_t denotes the update gate that updates the memory cell; tanh denotes the tanh activation function; W_xg denotes the parameter matrix trained between the input X and update gate g; W_hg denotes the parameter matrix trained between the hidden state H and update gate g; b_g denotes the bias of update gate g;
f_t denotes the forget gate that updates the memory cell; W_xf denotes the parameter matrix trained between the input X and forget gate f; W_hf denotes the parameter matrix trained between the hidden state H and forget gate f; b_f denotes the bias of forget gate f; ∗ denotes convolution;
the input gate, update gate and forget gate that update the spatial cell are as follows:

i'_t = σ(W'_xi ∗ X_t + W_mi ∗ M_t^{l−1} + b'_i)
g'_t = tanh(W'_xg ∗ X_t + W_mg ∗ M_t^{l−1} + b'_g)
f'_t = σ(W'_xf ∗ X_t + W_mf ∗ M_t^{l−1} + b'_f)

where i'_t denotes the input gate that updates the spatial cell; σ denotes the sigmoid activation function; W'_xi denotes the parameter matrix trained between the input X and input gate i; W_mi denotes the parameter matrix trained between the spatial cell M and input gate i; X_t denotes the input at time t; M_t^{l−1} denotes the spatial cell of layer l−1 at time t; b'_i denotes the bias of input gate i;
g'_t denotes the update gate that updates the spatial cell; tanh denotes the tanh activation function; W'_xg denotes the parameter matrix trained between the input X and update gate g; W_mg denotes the parameter matrix trained between the spatial cell M and update gate g; b'_g denotes the bias of update gate g;
f'_t denotes the forget gate that updates the spatial cell; W'_xf denotes the parameter matrix trained between the input X and forget gate f; W_mf denotes the parameter matrix trained between the spatial cell M and forget gate f; b'_f denotes the bias of forget gate f; ∗ denotes convolution;
the memory cell, the spatial cell, and the output gate then update the hidden state, i.e. the output, according to the following formulas,
C_t^l = f_t ∘ C_{t-1}^l + i_t ∘ g_t
M_t^l = f′_t ∘ M_t^{l-1} + i′_t ∘ g′_t
o_t = σ(W_xo * X_t + W_ho * H_{t-1}^l + W_co * C_t^l + W_mo * M_t^l + b_o)
H_t^l = o_t ∘ tanh(W_1×1 * [C_t^l, M_t^l])
wherein C_t^l represents the memory cell of the l-th layer at time t; i_t represents the input gate; g_t represents the update gate; f_t represents the forget gate; C_{t-1}^l represents the memory cell of the l-th layer at time t-1; M_t^l represents the spatial cell of the l-th layer at time t; i′_t represents the input gate of the spatial cell; g′_t represents the update gate of the spatial cell; f′_t represents the forget gate of the spatial cell; M_t^{l-1} represents the spatial cell of the (l-1)-th layer at time t; o_t represents the output gate; σ represents the sigmoid activation function;
W_xo represents the parameter matrix trained between the input X and the output gate o; W_ho represents the parameter matrix trained between the hidden state H and the output gate o; W_co represents the parameter matrix trained between the memory cell C and the output gate o; W_mo represents the parameter matrix trained between the spatial cell M and the output gate o; X_t represents the input at time t; H_{t-1}^l represents the hidden state of the l-th layer at time t-1; b_o represents the bias of the output gate; W_1×1 represents a convolution kernel of size 1×1; H_t^l represents the new hidden state obtained by the update; ∘ represents element-wise (Hadamard) multiplication of matrices; * represents convolution.
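The gate updates described above can be sketched in NumPy. This is a deliberately simplified single-channel sketch: the deformable-convolution offsets are omitted and every convolution is reduced to a scalar weight (a degenerate 1×1 convolution), so it illustrates only the gate data flow of one DCN_LSTM cell step; the weight names mirror the equations, but all values are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def st_lstm_step(X, H_prev, C_prev, M_below, w):
    """One simplified ST-LSTM-style cell step (scalar weights stand in for convolutions)."""
    # Temporal gates: update the memory cell C from input X and previous hidden state H.
    i = sigmoid(w["xi"] * X + w["hi"] * H_prev + w["bi"])
    g = np.tanh(w["xg"] * X + w["hg"] * H_prev + w["bg"])
    f = sigmoid(w["xf"] * X + w["hf"] * H_prev + w["bf"])
    C = f * C_prev + i * g

    # Spatial gates: update the spatial cell M passed up from the layer below.
    i2 = sigmoid(w["xi2"] * X + w["mi"] * M_below + w["bi2"])
    g2 = np.tanh(w["xg2"] * X + w["mg"] * M_below + w["bg2"])
    f2 = sigmoid(w["xf2"] * X + w["mf"] * M_below + w["bf2"])
    M = f2 * M_below + i2 * g2

    # Output gate sees X, H, and both cells; the hidden state fuses C and M.
    o = sigmoid(w["xo"] * X + w["ho"] * H_prev + w["co"] * C + w["mo"] * M + w["bo"])
    H = o * np.tanh(w["c1"] * C + w["m1"] * M)  # stands in for the 1x1 conv over [C, M]
    return H, C, M
```

Because every gate passes through a sigmoid and the hidden state through a tanh, the outputs stay bounded, which is the property the gating mechanism relies on for stable propagation over long echo sequences.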
S2.3, the hybrid model decoder part,
The decoder is composed of the second half of Unet. The DCN_LSTM outputs are concatenated along the channel dimension, feature fusion is performed between the skip-connected feature map and the original input, and the number of channels becomes 256. The context of the original image is captured by two 3×3 convolutional layers, each followed by a rectified linear unit, which makes the model nonlinear, and batch normalization. Upsampling with bilinear interpolation then restores the spatial dimensions while a skip connection changes the number of feature channels to 128. Finally, two 3×3 convolutional layers and one 1×1 convolutional layer output the predicted radar echo sequence.
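The decoder's bilinear upsampling and channel-wise skip concatenation can be sketched in NumPy. The 2× factor, the (channels, height, width) layout, and the function names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def bilinear_upsample_2x(img):
    """Upsample a 2-D array by a factor of 2 with bilinear interpolation."""
    h, w = img.shape
    # Target sample coordinates mapped back into the source grid.
    ys = np.clip((np.arange(2 * h) + 0.5) / 2.0 - 0.5, 0, h - 1)
    xs = np.clip((np.arange(2 * w) + 0.5) / 2.0 - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def skip_connect(upsampled, encoder_features):
    """Concatenate decoder and encoder feature maps along the channel axis (C, H, W)."""
    return np.concatenate([upsampled, encoder_features], axis=0)
```

Concatenating a 64-channel upsampled stack with a 64-channel encoder stack yields the 128 feature channels the decoder description refers to.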
S3, adding a Bayesian algorithm to the Unet and DCN_LSTM hybrid model for hyper-parameter optimization and searching for the optimal parameter combination; the number of hidden-layer neurons, the batch size, and the learning rate are optimized by the Bayesian algorithm, with the following specific steps:
S3.1, suppose a hyper-parameter combination is x = (x_1, x_2, x_3), wherein x_1, x_2, x_3 respectively represent the number of hidden-layer neurons, the batch size, and the learning rate, and assume that the loss function has a mapping relation with the set hyper-parameters;
that is, assuming a function f: X → R, the value x* = argmin_{x∈X} f(x) needs to be determined;
S3.2, according to the hyper-parameters determined for optimization, obtain random initialization points x_1, …, x_n of the hyper-parameters within the parameter range, input the experimental data to train the model, take the response values of the loss function f(x_1), …, f(x_n), and establish a Gaussian process regression,
f(x) ~ GP(m(x), K + σ_n² I)
wherein m(x) is the mean function, K denotes the covariance matrix with entries k(x_i, x_j), and σ_n² represents the noise variance of sample n;
S3.3, select the next hyper-parameter combination sampling point x_{n+1} from the Gaussian regression model based on the sampling function PI, which is as follows,
PI(x) = Φ( (f(x*) − μ(x) − ξ) / σ(x) )
wherein Φ(·) represents the cumulative density function of the normal distribution, μ(x) and σ²(x) respectively represent the mean and variance of the objective function value, f(x*) represents the optimal objective function value observed so far, and ξ represents a trade-off parameter;
S3.4, substitute the selected hyper-parameter combination into model training and output the mean square error between the ground-observation truth and the predicted radar echo sequence, with the preset threshold set to 0.0001; if the mean square error is smaller than the preset threshold, stop updating and output the optimal hyper-parameter combination; if the mean square error is not smaller than the preset threshold, update the hyper-parameters to the next sampling point x_{n+1} and repeat steps S3.2 to S3.4 until a hyper-parameter combination whose mean square error is smaller than the preset threshold is found.
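Steps S3.1 to S3.4 can be sketched as a minimal Bayesian-optimization loop. This is a one-dimensional toy (a single hyper-parameter, such as the learning rate) with an RBF-kernel Gaussian-process surrogate and the PI acquisition for minimization; the kernel length-scale, candidate grid, initial points, and toy loss are all assumptions for illustration.

```python
import math
import numpy as np

def rbf(a, b, length_scale=0.15):
    """RBF (squared-exponential) covariance between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-6):
    """Posterior mean and std of a zero-mean, unit-variance GP at candidate points."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_new)
    mu = Ks.T @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def prob_improvement(mu, sigma, best, xi=0.01):
    """PI acquisition for minimization: Phi((best - mu - xi) / sigma)."""
    z = (best - mu - xi) / sigma
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

def bayes_opt(loss_fn, n_iter=10):
    cand = np.linspace(0.0, 1.0, 101)    # hyper-parameter search range
    x_obs = np.array([0.05, 0.5, 0.95])  # initial evaluation points
    y_obs = np.array([loss_fn(x) for x in x_obs])
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x_obs, y_obs, cand)
        x_next = cand[np.argmax(prob_improvement(mu, sigma, y_obs.min()))]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, loss_fn(x_next))
    return x_obs[np.argmin(y_obs)], y_obs.min()
```

In practice the loss evaluation would be a full model training run and the search space three-dimensional, but the surrogate-fit / acquire / evaluate loop is the same.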
After the model is built, the training set is input into the model for training with the optimized hyper-parameters. The loss function is based on the mean square error (MSE) between the output radar echo map and the ground-truth radar echo map, a maximum number of training epochs is set, and back propagation drives the loss value to its minimum, meaning that the training loss decreases continuously until it no longer decreases; the best training weights are then saved;
the loss function uses the sum of the mean square error (MSE) and the mean absolute error (MAE), with the following equations:
loss1 = (1/n) Σ_{i=1}^{n} (Y_i − Ŷ_i)²
loss2 = (1/n) Σ_{i=1}^{n} |Y_i − Ŷ_i|
wherein n denotes the total number of samples, i denotes the index of a sample point, Y_i denotes the real label of the real radar echo map, and Ŷ_i represents the predicted radar echo map;
the total loss value is loss = loss1 + loss2.
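The combined loss above translates directly into NumPy; loss1 is the mean square error and loss2 the mean absolute error, summed over all pixels of the echo maps.

```python
import numpy as np

def combined_loss(y_true, y_pred):
    """Training loss: sum of MSE (loss1) and MAE (loss2)."""
    loss1 = np.mean((y_true - y_pred) ** 2)   # mean square error
    loss2 = np.mean(np.abs(y_true - y_pred))  # mean absolute error
    return loss1 + loss2
```

Summing the two terms keeps the strong penalty MSE puts on large echo errors while MAE keeps gradients informative for small residuals.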
S4, testing the Unet and DCN_LSTM hybrid model and evaluating it with several indexes, with the following specific steps:
S4.1, load the trained model weights, run the test, and save the prediction results in picture format;
S4.2, adopt the mean square error (MSE), the structural similarity (SSIM), and the critical success index (CSI) as the evaluation indexes of the test set;
the mean square error evaluates the pixel-wise difference between two pictures, with the following specific formula,
MSE = (1/n) Σ_{i=1}^{n} (Y_i − Ŷ_i)²
wherein n denotes the total number of samples, i denotes the index of a sample point, Y_i denotes the real label of the real radar echo map, and Ŷ_i represents the predicted radar echo map;
the structural similarity measures the similarity of two pictures, with the following specific formula,
SSIM(x, y) = ( (2 u_x u_y + C_1)(2 σ_xy + C_2) ) / ( (u_x² + u_y² + C_1)(σ_x² + σ_y² + C_2) )
wherein u_x and u_y respectively denote the means of x and y, σ_x² and σ_y² respectively denote the variances of x and y, σ_xy represents the covariance of the two pictures x and y, and C_1 and C_2 represent constants;
the specific formula for the critical success index is as follows:
CSI = TP / (TP + FP + FN)
wherein TP indicates that the true category is positive and the prediction result is also positive, FP indicates that the true category is negative but the prediction result is positive, and FN indicates that the true category is positive but the prediction result is negative.
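The three evaluation indexes can be sketched as follows. The SSIM shown is the global single-window form with the usual 8-bit-image constants, and the rain/no-rain threshold for CSI is a parameter the caller must choose; both are assumptions, since the patent does not fix them here.

```python
import numpy as np

def mse(y_true, y_pred):
    """Pixel-wise mean square error between two echo maps."""
    return np.mean((y_true - y_pred) ** 2)

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM; c1, c2 are the conventional constants for 8-bit images."""
    ux, uy = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # variances sigma_x^2, sigma_y^2
    cov = ((x - ux) * (y - uy)).mean()        # covariance sigma_xy
    return ((2 * ux * uy + c1) * (2 * cov + c2)) / ((ux**2 + uy**2 + c1) * (vx + vy + c2))

def csi(y_true, y_pred, threshold):
    """Critical success index on rain / no-rain events at the given echo threshold."""
    obs, pred = y_true >= threshold, y_pred >= threshold
    tp = np.sum(obs & pred)    # hits: observed and predicted
    fp = np.sum(~obs & pred)   # false alarms: predicted but not observed
    fn = np.sum(obs & ~pred)   # misses: observed but not predicted
    return tp / (tp + fp + fn)
```

A perfect prediction gives MSE 0, SSIM 1, and CSI 1; CSI deliberately ignores correct no-rain pixels so that dry regions do not inflate the score.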
S5, converting the prediction results on the test set from pixel values into radar reflectivity,
wherein radar_value represents the radar reflectivity value of each pixel obtained by the conversion formula, and pixel_value represents the value of each pixel;
then the rainfall is obtained according to the relation between radar reflectivity and rainfall, with the specific formula
Z = A · R^b
wherein Z represents the radar reflectivity, R represents the rainfall rate, and A and b represent coefficients.
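Inverting the power-law Z-R relation Z = A · R^b gives the rain rate. The patent leaves A and b unspecified, so the sketch below assumes the common Marshall-Palmer coefficients A = 200, b = 1.6 and a reflectivity given in dBZ; both choices are illustrative.

```python
import math

def rain_rate_from_dbz(dbz, A=200.0, b=1.6):
    """Invert Z = A * R**b; Z in linear units (mm^6/m^3), R in mm/h.

    A and b are assumed Marshall-Palmer coefficients, not values from the patent.
    """
    Z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity factor
    return (Z / A) ** (1.0 / b)
```

For example, a reflectivity whose linear factor equals A maps to a rain rate of exactly 1 mm/h, and the rate grows monotonically with dBZ.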
In addition to the above embodiments, the present invention may have other embodiments. All technical solutions formed by adopting equivalent substitutions or equivalent transformations fall within the protection scope of the claims of the present invention.
Claims (10)
1. A precipitation prediction method based on Unet and DCN_LSTM, characterized by comprising the following steps:
S1, acquiring meteorological radar data and preprocessing the meteorological radar data;
s2, constructing a hybrid model of Unet and DCN _ LSTM;
s3, adding a Bayesian algorithm to the Unet and DCN _ LSTM mixed model, performing hyper-parameter optimization, and searching for an optimal parameter combination;
s4, testing the Unet and DCN _ LSTM mixed model;
and S5, converting the result of the prediction of the test set into radar reflectivity through a pixel value, and then obtaining rainfall according to the relation between the radar reflectivity and the rainfall.
2. The method of claim 1, wherein the method of predicting precipitation based on Unet and DCN _ LSTM comprises: in the step S1, the method for preprocessing the meteorological radar data comprises the following steps
S1.1, removing abnormal values and repeated values of data, and performing bilinear interpolation on missing values of the data;
s1.2, screening the data set to ensure that each echo sequence has 20% precipitation coverage rate;
S1.3, normalizing the data according to the specific formula
X* = (X − X_min) / (X_max − X_min)
wherein X* represents the normalized radar echo intensity value, X_max represents the maximum value of the radar echo intensity, X_min represents the minimum value of the radar echo intensity, and X represents the radar echo intensity value;
S1.4, dividing the data set into a training set and a test set in a ratio of 8:2.
3. The method of claim 1, wherein the method of predicting precipitation based on Unet and DCN _ LSTM comprises: in the step S2, the method for constructing the hybrid model of Unet and DCN _ LSTM comprises the following steps
S2.1, the hybrid model encoder part inputs the training set data into the model; the feature map is reduced to half its original size through two 3×3 convolutional layers and a max-pooling layer, and the number of channels is doubled by the two 3×3 convolutional layers;
S2.2, DCN_LSTM is adopted in the middle of the hybrid model to extract the temporal and spatial features of the radar echo sequence; the DCN_LSTM consists of a plurality of DCN_LSTM circulation units, and the feature map output by the encoder is decomposed and sequentially input into the DCN_LSTM circulation units for training;
S2.3, the hybrid model decoder part concatenates the radar echo sequence output by the DCN_LSTM along the channel dimension, passes it through two 3×3 convolutional layers, performs upsampling, fuses it via a skip connection with the feature map output by the encoder, passes it through two 3×3 convolutional layers and one 1×1 convolutional layer, and finally outputs the predicted radar echo sequence.
4. The precipitation prediction method based on Unet and DCN_LSTM of claim 3, characterized in that: in the step S2.2, the DCN_LSTM learns the offsets of the input X with respect to the hidden state H and the memory cell C by using deformable convolution, so as to update the hidden state H and the memory cell C; the deformable convolution slides over the input image, takes the obtained feature map as input, and applies a convolutional layer to the feature map to obtain the deformable convolution offsets, wherein the offset layer has 2N channels because the offsets are planar translations that need to vary in both the x and y directions;
the specific formula for the deformable convolution is as follows,
y(p_0) = Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n + Δp_n)
wherein R represents a 3×3 convolution kernel grid whose integer-coordinate points are {(−1,−1), (−1,0), …, (0,1), (1,1)};
y(p_0) represents the feature matrix resulting from the deformable convolution, w(p_n) represents the learned weight of each point within the 3×3 convolution kernel obtained by neural network learning, p_0 represents the center point, i.e. the (0,0) point, p_n represents the points defined in the range R, and Δp_n represents the offset matrices that the deformable convolution learns by an additional convolution, which standard convolution does not have.
5. The precipitation prediction method based on Unet and DCN_LSTM of claim 4, characterized in that: the DCN_LSTM model comprises a plurality of DCN_LSTM circulation units; the feature information is screened and transmitted through a gating mechanism, retaining the forget gate, input gate, modulation gate, output gate, temporal memory cell, and hidden state of convLSTM, which are respectively f_t, i_t, g_t, o_t, C_t, and H_t; it also includes a spatial cell M_t for extracting and transferring spatial structure features vertically between different layers, and adds deformable convolution to learn the offsets of the input X with respect to the hidden state H and the memory cell C, according to the specific formula
H′_{t-1}^1, C′_{t-1}^1 = DCN(X_t, H_{t-1}^1, C_{t-1}^1)
wherein DCN represents the deformable convolutional network, X_t represents the input picture, the subscript t-1 represents the previous time, the superscript 1 represents the first layer, H_{t-1}^1 and C_{t-1}^1 respectively represent the hidden state and the memory cell, and H′_{t-1}^1 and C′_{t-1}^1 respectively represent the new hidden state and memory cell obtained after the update.
6. The precipitation prediction method based on Unet and DCN_LSTM of claim 4, characterized in that: in the DCN_LSTM model, the input gate, update gate, and forget gate that update the memory cell are defined as follows,
i_t = σ(W_xi * X_t + W_hi * H_{t-1}^l + b_i)
g_t = tanh(W_xg * X_t + W_hg * H_{t-1}^l + b_g)
f_t = σ(W_xf * X_t + W_hf * H_{t-1}^l + b_f)
wherein i_t represents the input gate that updates the memory cell; σ represents the sigmoid activation function; W_xi represents the parameter matrix trained between the input X and the input gate i; W_hi represents the parameter matrix trained between the hidden state H and the input gate i; X_t represents the input at time t; H_{t-1}^l represents the hidden state of the l-th layer at time t-1; b_i represents the bias of the input gate i;
g_t represents the update gate of the memory cell; tanh represents the tanh activation function; W_xg represents the parameter matrix trained between the input X and the update gate g; W_hg represents the parameter matrix trained between the hidden state H and the update gate g; b_g represents the bias of the update gate g;
f_t represents the forget gate of the memory cell; W_xf represents the parameter matrix trained between the input X and the forget gate f; W_hf represents the parameter matrix trained between the hidden state H and the forget gate f; b_f represents the bias of the forget gate f; * represents convolution;
the input gate, update gate, and forget gate that update the spatial cell are given by the following formulas,
i′_t = σ(W′_xi * X_t + W_mi * M_t^{l-1} + b′_i)
g′_t = tanh(W′_xg * X_t + W_mg * M_t^{l-1} + b′_g)
f′_t = σ(W′_xf * X_t + W_mf * M_t^{l-1} + b′_f)
wherein i′_t represents the input gate that updates the spatial cell; σ represents the sigmoid activation function; W′_xi represents the parameter matrix trained between the input X and the input gate i′; W_mi represents the parameter matrix trained between the spatial cell M and the input gate i′; X_t represents the input at time t; M_t^{l-1} represents the spatial cell of the (l-1)-th layer at time t; b′_i represents the bias of the input gate i′;
g′_t represents the update gate of the spatial cell; tanh represents the tanh activation function; W′_xg represents the parameter matrix trained between the input X and the update gate g′; W_mg represents the parameter matrix trained between the spatial cell M and the update gate g′; b′_g represents the bias of the update gate g′;
f′_t represents the forget gate of the spatial cell; W′_xf represents the parameter matrix trained between the input X and the forget gate f′; W_mf represents the parameter matrix trained between the spatial cell M and the forget gate f′; b′_f represents the bias of the forget gate f′; * represents convolution;
the memory cell, the spatial cell, and the output gate then update the hidden state, i.e. the output, according to the following formulas,
C_t^l = f_t ∘ C_{t-1}^l + i_t ∘ g_t
M_t^l = f′_t ∘ M_t^{l-1} + i′_t ∘ g′_t
o_t = σ(W_xo * X_t + W_ho * H_{t-1}^l + W_co * C_t^l + W_mo * M_t^l + b_o)
H_t^l = o_t ∘ tanh(W_1×1 * [C_t^l, M_t^l])
wherein C_t^l represents the memory cell of the l-th layer at time t; i_t represents the input gate; g_t represents the update gate; f_t represents the forget gate; C_{t-1}^l represents the memory cell of the l-th layer at time t-1; M_t^l represents the spatial cell of the l-th layer at time t; i′_t represents the input gate of the spatial cell; g′_t represents the update gate of the spatial cell; f′_t represents the forget gate of the spatial cell; M_t^{l-1} represents the spatial cell of the (l-1)-th layer at time t; o_t represents the output gate; σ represents the sigmoid activation function;
W_xo represents the parameter matrix trained between the input X and the output gate o; W_ho represents the parameter matrix trained between the hidden state H and the output gate o; W_co represents the parameter matrix trained between the memory cell C and the output gate o; W_mo represents the parameter matrix trained between the spatial cell M and the output gate o; X_t represents the input at time t; H_{t-1}^l represents the hidden state of the l-th layer at time t-1; b_o represents the bias of the output gate; W_1×1 represents a convolution kernel of size 1×1; H_t^l represents the new hidden state obtained by the update; ∘ represents element-wise (Hadamard) multiplication of matrices; * represents convolution.
7. The method of claim 1, wherein the method of predicting precipitation based on Unet and DCN _ LSTM comprises: in the step S3, the number of the neurons, the batch size and the learning rate of the hidden layer are optimized through a Bayesian algorithm, and the method comprises the following steps
S3.1, suppose a hyper-parameter combination is x = (x_1, x_2, x_3), wherein x_1, x_2, x_3 respectively represent the number of hidden-layer neurons, the batch size, and the learning rate, and assume that the loss function has a mapping relation with the set hyper-parameters;
that is, assuming a function f: X → R, the value x* = argmin_{x∈X} f(x) needs to be determined;
S3.2, according to the hyper-parameters determined for optimization, obtain random initialization points x_1, …, x_n of the hyper-parameters within the parameter range, input the experimental data to train the model, take the response values of the loss function f(x_1), …, f(x_n), and establish a Gaussian process regression,
f(x) ~ GP(m(x), K + σ_n² I)
wherein m(x) is the mean function, K denotes the covariance matrix with entries k(x_i, x_j), and σ_n² represents the noise variance of sample n;
S3.3, select the next hyper-parameter combination sampling point x_{n+1} from the Gaussian regression model based on the sampling function PI, which is as follows,
PI(x) = Φ( (f(x*) − μ(x) − ξ) / σ(x) )
wherein Φ(·) represents the cumulative density function of the normal distribution, μ(x) and σ²(x) respectively represent the mean and variance of the objective function value, f(x*) represents the optimal objective function value observed so far, and ξ represents a trade-off parameter;
S3.4, substitute the selected hyper-parameter combination into model training and output the mean square error between the ground-observation truth and the predicted radar echo sequence; if the mean square error is smaller than a preset threshold, stop updating and output the optimal hyper-parameter combination; if the mean square error is not smaller than the preset threshold, update the hyper-parameters to the next sampling point x_{n+1} and repeat steps S3.2 to S3.4 until a hyper-parameter combination whose mean square error is smaller than the preset threshold is found.
8. The precipitation prediction method based on Unet and DCN_LSTM of claim 7, characterized in that: in step S3.4, the preset threshold is set to 0.0001.
9. The method of claim 1 for prediction of precipitation based on Unet and DCN _ LSTM, wherein: in step S4, the method for testing the hybrid model of Unet and DCN _ LSTM includes the following steps
S4.1, load the trained model weights, run the test, and save the prediction results in picture format;
S4.2, adopt the mean square error, the structural similarity, and the critical success index as the evaluation indexes of the test set;
the mean square error evaluates the pixel-wise difference between two pictures, with the following specific formula,
MSE = (1/n) Σ_{i=1}^{n} (Y_i − Ŷ_i)²
wherein n denotes the total number of samples, i denotes the index of a sample point, Y_i denotes the real label of the real radar echo map, and Ŷ_i represents the predicted radar echo map;
the structural similarity measures the similarity of two pictures, with the following specific formula,
SSIM(x, y) = ( (2 u_x u_y + C_1)(2 σ_xy + C_2) ) / ( (u_x² + u_y² + C_1)(σ_x² + σ_y² + C_2) )
wherein u_x and u_y respectively denote the means of x and y, σ_x² and σ_y² respectively denote the variances of x and y, σ_xy represents the covariance of the two pictures x and y, and C_1 and C_2 represent constants;
the specific formula for the critical success index is as follows:
CSI = TP / (TP + FP + FN)
wherein TP indicates that the true category is positive and the prediction result is also positive, FP indicates that the true category is negative but the prediction result is positive, and FN indicates that the true category is positive but the prediction result is negative.
10. The precipitation prediction method based on Unet and DCN_LSTM of claim 1, characterized in that: in the step S5, the prediction results on the test set are converted from pixel values into radar reflectivity,
wherein radar_value represents the radar reflectivity value of each pixel obtained by the conversion formula, and pixel_value represents the value of each pixel;
then the rainfall is obtained according to the relation between radar reflectivity and rainfall, with the specific formula
Z = A · R^b
wherein Z represents the radar reflectivity, R represents the rainfall rate, and A and b represent coefficients.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211233428.1A CN115310724A (en) | 2022-10-10 | 2022-10-10 | Precipitation prediction method based on Unet and DCN _ LSTM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115310724A true CN115310724A (en) | 2022-11-08 |