CN112415521A - CGRU (convolutional gated recurrent unit)-based radar echo nowcasting method with strong space-time characteristics - Google Patents

CGRU-based radar echo nowcasting method with strong space-time characteristics Download PDF

Info

Publication number
CN112415521A
CN112415521A
Authority
CN
China
Prior art keywords
cgru
radar echo
time
network
3dcnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011493039.3A
Other languages
Chinese (zh)
Inventor
陈苏婷
张松
张闯
陈耀登
杨春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202011493039.3A
Publication of CN112415521A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S 13/95 - Radar or analogous systems specially adapted for specific applications for meteorological use
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/02 - Details of systems according to group G01S13/00
    • G01S 7/41 - Details of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S 7/417 - Details of systems according to group G01S13/00 using analysis of echo signal for target characterisation involving the use of neural networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a CGRU-based radar echo nowcasting method with strong space-time characteristics, which comprises the following steps: (1) acquiring continuous radar echo images for weather nowcasting, preprocessing them, and constructing tensor data with unified time and space dimensions; (2) constructing and training a 3DCNN-CGRU network training model to obtain a 3DCNN-CGRU encoding-prediction network model; (3) inputting the tensor data of the continuous radar echo image sequence from step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result. The proposed 3DCNN-CGRU network model enhances the propagation of space-time features, captures and learns the correlation of space-time features across continuous radar echo images more effectively, and addresses the problems of easily lost space-time information and low prediction accuracy.

Description

CGRU-based radar echo nowcasting method with strong space-time characteristics
Technical Field
The invention relates to the technical field of meteorological observation, in particular to a CGRU-based radar echo nowcasting method with strong space-time characteristics.
Background
The goal of radar echo nowcasting is to make timely and accurate predictions of the weather conditions in a local area over a relatively short future period (e.g., 0-2 hours). The technology is now widely applied to daily travel, agricultural production, flight safety, and similar areas; it brings convenience to people and supports disaster prevention and mitigation. With ongoing climate change and accelerating urbanization, atmospheric conditions are becoming increasingly complex and meteorological disasters occur frequently, bringing many negative impacts and uncertain risks to people's lives and work. If such climate disasters can be effectively predicted and guarded against, losses can be greatly reduced.
The methods currently used for radar echo prediction are mainly cross-correlation and optical-flow based methods, which have been proven effective in extrapolating future radar echo maps. Both of these conventional approaches, however, have inherent disadvantages: when the echo changes rapidly, the Lagrangian conservation condition is no longer satisfied and the prediction quality degrades quickly, and traditional radar echo nowcasting methods also fall short in short-term prediction accuracy and in fully exploiting massive radar echo image data. Compared with traditional forecasting methods, deep learning can mine and analyze big data more deeply and improve model prediction precision. As an emerging big-data-driven technology, deep learning, in particular the recurrent neural network (RNN) and the long short-term memory network (LSTM), brings new solutions to the radar echo prediction task: by fully utilizing the massive collected radar echo map data, a network model can be trained more effectively and the future echo trend predicted more accurately. Although an LSTM network of ordinary structure can solve meteorological time-series problems to some extent, radar echo prediction has strong space-time correlation between successive frames, where the space-time information at the previous moment determines the prediction at the next moment; the ordinary LSTM model does not consider this correlation, so space-time information is easily lost, prediction accuracy drops, and speed cannot be guaranteed.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the above problems, the invention provides a CGRU-based radar echo nowcasting method with strong space-time characteristics.
The technical scheme is as follows: in order to realize the purpose of the invention, the technical scheme adopted by the invention is as follows: a CGRU-based radar echo nowcasting method with strong space-time characteristics comprises the following specific steps:
(1) acquiring a continuous radar echo image sequence for weather nowcasting; compared with a single radar image, an image sequence better reflects the temporal correlation of the meteorological data; the continuous radar echo image sequence is then preprocessed to obtain tensor data with unified time and space dimensions, yielding three-dimensional tensor data with complete space-time characteristics;
wherein the tensor data is a three-dimensional tensor X ∈ R^(T×W×H); in the formula, R represents the set of real numbers; T is the time dimension; W, H are the row and column space dimensions, respectively;
the sequence of successive radar echo images is represented by Y(t) = {y1, y2, ..., yN}, t = 1, 2, ..., N; wherein t represents time; N represents the length of the radar echo image sequence;
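As an illustration of step (1), the stacking of N preprocessed echo frames into the three-dimensional tensor X ∈ R^(T×W×H) can be sketched as follows; the function name and frame sizes are illustrative, not part of the invention:

```python
import numpy as np

def build_spacetime_tensor(frames):
    """Stack N radar echo frames (each a W x H array) into a tensor
    X of shape (T, W, H), unifying the time and space dimensions.
    `frames` is a hypothetical list of 2-D arrays."""
    X = np.stack([np.asarray(f, dtype=np.float32) for f in frames], axis=0)
    assert X.ndim == 3  # (T, W, H)
    return X

# Example: 10 preprocessed frames of 64x64 echoes.
frames = [np.zeros((64, 64)) for _ in range(10)]
X = build_spacetime_tensor(frames)
```

Stacking along a leading time axis keeps every spatial grid intact while giving all samples the unified (T, W, H) dimensionality the later convolutions expect.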
(2) the method comprises the following steps of constructing and training a 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model, and specifically comprises the following steps:
(2.1) acquiring a continuous historical radar echo image sequence by taking the first continuous time sequence and the second continuous time sequence as sliding windows; wherein the first continuous-time series is temporally continuous with the second continuous-time series;
(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time dimension and space dimension; simultaneously setting tensor data of the radar echo image of each time frame in a first continuous time sequence as training data; setting tensor data of the radar echo image of each time frame in the second continuous time sequence as live data;
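Steps (2.1) and (2.2) can be sketched as a sliding-window split of the historical sequence into a first window of training frames and the immediately following second window of live (ground-truth) frames; the window lengths below are illustrative, as the specification does not fix them:

```python
def sliding_windows(sequence, in_len, out_len):
    """Yield (first, second) window pairs: `in_len` frames used as
    training input, followed by the `out_len` frames that immediately
    succeed them in time, used as live (ground-truth) data."""
    pairs = []
    T = len(sequence)
    for s in range(T - in_len - out_len + 1):
        first = sequence[s : s + in_len]                       # training frames
        second = sequence[s + in_len : s + in_len + out_len]   # live frames
        pairs.append((first, second))
    return pairs

history = list(range(12))  # 12 historical time frames (placeholders)
pairs = sliding_windows(history, in_len=5, out_len=5)
```

Each pair keeps the two windows temporally contiguous, matching the requirement that the first continuous time series be followed directly by the second.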
(2.3) establishing a 3DCNN-CGRU network training model, inputting the tensor data of the historical radar echo images into the 3DCNN-CGRU network training model, performing iterative prediction, calculating the difference between the live data of the radar echo images in the continuous time sequence and the model prediction output, and updating the 3DCNN-CGRU network weights through back propagation until the loss function value MSE converges, which indicates that training is complete and yields the 3DCNN-CGRU network model;
wherein the loss function of the 3DCNN-CGRU network training model is the pixel-level mean square error (MSE) of the continuous radar echo image sequence:
LMSE = (1/N) Σn=1..N Σi=1..A Σj=1..B ( Yn(i,j) - Ŷn(i,j) )²
in the formula, LMSE represents the loss function value; Y represents the real live data; Ŷ represents the model prediction output data; N is the length of the continuous time sequence; n is the summation index; A, B represent the extents of the abscissa and ordinate of the radar echo image, respectively.
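A minimal sketch of the pixel-level MSE loss described above, assuming the live data Y and the prediction Ŷ are stored as (N, A, B) arrays:

```python
import numpy as np

def pixel_mse(Y, Y_hat):
    """Pixel-level MSE over a length-N sequence of A x B echo images:
    the squared error is summed over all pixels of all frames and
    averaged over the sequence length N."""
    Y = np.asarray(Y, dtype=np.float64)
    Y_hat = np.asarray(Y_hat, dtype=np.float64)
    N = Y.shape[0]
    return float(((Y - Y_hat) ** 2).sum() / N)

Y = np.ones((4, 3, 3))       # N=4 live frames, 3x3 pixels each
Y_hat = np.zeros((4, 3, 3))  # model prediction output
loss = pixel_mse(Y, Y_hat)   # each frame contributes 9, so 4*9/4 = 9.0
```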
(3) Inputting tensor data of the continuous radar echo image sequence for weather nowcasting in the step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result;
Further, the prediction output obtained by training on the echo images of each time frame of the first continuous time sequence corresponds, frame by frame, to the live data of the echo images of each time frame of the second continuous time sequence; the iterative prediction iterates over the radar echo image of each time frame of the second continuous time sequence.
Further, the 3DCNN-CGRU network model consists of a coding network and a prediction network;
furthermore, the coding network is composed of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the radar echo image sequence;
the 3DCNN is used for extracting local short space-time motion characteristics of a continuous radar echo image sequence; the three layers of the CGRU networks are used for learning the global long-time space characteristic dependency relationship of the continuous radar echo image sequence and compressing the space-time characteristics of the radar echo motion obtained by learning into a hidden state;
further, the prediction network is composed of three layers of CGRU networks and 3DCNN networks; and the prediction network takes the output of the encoder as input, reversely reconstructs the image according to the characteristic information of the current echo image, generates a future echo image sequence and further obtains a weather forecast result.
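The encoder-forecaster division described above can be illustrated schematically; the toy model below only mirrors the structure (an input sequence is folded into a hidden state, which the forecaster then unrolls into future frames) and uses placeholder element-wise maps rather than the actual 3DCNN and CGRU layers:

```python
import numpy as np

class ToyEncoderForecaster:
    """Structural sketch of the encoding-prediction pipeline: the
    encoder compresses a frame sequence into one hidden state, the
    forecaster unrolls that state into a future sequence. The
    recurrences are placeholder element-wise linear maps, not the
    patented 3DCNN/CGRU layers."""
    def __init__(self, shape, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.standard_normal(shape) * 0.1
        self.W_dec = rng.standard_normal(shape) * 0.1

    def encode(self, seq):            # seq: (T, W, H)
        h = np.zeros_like(seq[0])
        for frame in seq:             # fold time into one hidden state
            h = np.tanh(self.W_enc * frame + h)
        return h

    def forecast(self, h, steps):     # unroll hidden state into the future
        out = []
        for _ in range(steps):
            h = np.tanh(self.W_dec * h)
            out.append(h)
        return np.stack(out)

model = ToyEncoderForecaster(shape=(8, 8))
seq = np.random.default_rng(1).standard_normal((5, 8, 8))
future = model.forecast(model.encode(seq), steps=5)
```

The symmetry (encoder ends in a hidden state, forecaster starts from it) is the point being illustrated; the real network replaces both placeholder maps with 3D convolutions and three CGRU layers.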
Furthermore, convolutional neural networks are particularly suitable for processing image data owing to their feature mapping, local connectivity, weight sharing, and similar characteristics. A conventional 2DCNN network has strong feature-extraction capability for image data, but when processing continuous echo images it does not consider the influence of the connection between consecutive frames on prediction, so information about the motion trend between features is easily lost and the prediction of moving images cannot be handled. The invention therefore replaces the traditional 2DCNN with the constructed 3DCNN, whose calculation formula is as follows:
vij(T,W,H) = f( bij + Σm Σp Σq Σr wijm(p,q,r) · v(i-1)m(W+p, H+q, T+r) ), with p = 0, ..., Pi-1; q = 0, ..., Qi-1; r = 0, ..., Ri-1
in the formula, vij(T,W,H) represents the output of the unit at position (T, W, H) of the j-th radar echo feature map of the i-th layer in the 3DCNN; T represents the time dimension; W, H are the row and column space dimensions, respectively; f represents a nonlinear activation function; bij represents the bias parameter of the j-th radar echo feature map of the i-th layer in the 3DCNN; wijm(p,q,r) represents the weight of the convolution kernel connected to the m-th feature map of the (i-1)-th layer; p, q, r represent the offsets of the convolution operation relative to the position (T, W, H); v(i-1)m(W+p, H+q, T+r) represents the output of the unit (W+p, H+q, T+r) of the m-th radar echo feature map in the (i-1)-th layer; Pi, Qi, Ri represent the sizes of the three dimensions of the convolution kernel;
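A direct, loop-based evaluation of the 3DCNN unit described above for a single input and a single output feature map (valid padding; the shapes, identity activation, and averaging kernel are illustrative assumptions):

```python
import numpy as np

def conv3d_unit(v_prev, w, b, f=np.tanh):
    """Evaluate a single-feature-map 3-D convolution at every valid
    (T, W, H) position: out[t,x,y] = f(b + sum_{p,q,r} w[r,p,q] *
    v_prev[t+r, x+p, y+q]), i.e. a bias plus a weighted sum over a
    small space-time neighbourhood, passed through activation f."""
    Ti, Wi, Hi = v_prev.shape
    R, P, Q = w.shape  # kernel sizes along time, rows, columns
    out = np.zeros((Ti - R + 1, Wi - P + 1, Hi - Q + 1))
    for t in range(out.shape[0]):
        for x in range(out.shape[1]):
            for y in range(out.shape[2]):
                out[t, x, y] = f(b + np.sum(w * v_prev[t:t+R, x:x+P, y:y+Q]))
    return out

v_prev = np.ones((4, 5, 5))     # previous-layer feature map, shape (T, W, H)
w = np.ones((2, 3, 3)) / 18.0   # 2x3x3 averaging kernel (18 taps)
out = conv3d_unit(v_prev, w, b=0.0, f=lambda x: x)  # identity activation
```

On an all-ones input the averaging kernel reproduces 1.0 everywhere, which makes the neighbourhood sum easy to verify by hand.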
Furthermore, the invention provides a CGRU network structure which, by modifying the GRU network structure, changes the state-to-state transitions from multiplication operations to convolution operations; it can thus not only establish time-series relations but also describe spatial characteristics, effectively alleviating the loss of spatial information during time-series propagation.
Wherein each CGRU network unit receives the temporal and spatial outputs of the 3DCNN network, and the structural calculation process is as follows:
Zt = σ(Wxz * Xt + Whz * Ht-1)
Rt = σ(Wxr * Xt + Whr * Ht-1)
H't = f(Wxh * Xt + Rt ∘ (Whh * Ht-1))
Ht = (1 - Zt) ∘ Ht-1 + Zt ∘ H't
in the formula, Zt represents the update gate in the CGRU network structure; Rt represents the reset gate in the CGRU network structure; Xt represents the radar echo map input at time t; Ht represents the hidden-layer output at time t; Ht-1 represents the hidden-layer output at time t-1; Wxz represents the weight parameter from the input to the update gate in the CGRU network; Whz represents the weight parameter from the hidden layer to the update gate; Wxr represents the weight parameter from the input to the reset gate; Whr represents the weight parameter from the hidden layer to the reset gate; H't represents the candidate memory content of the hidden layer at time t; f represents a nonlinear activation function; Wxh represents the weight parameter from the input to the hidden layer; Whh represents the hidden-layer-to-hidden-layer weight parameter; the term Rt ∘ (Whh * Ht-1) controls how each unit screens the retained radar space-time information; * denotes the convolution operation and ∘ the Hadamard product, i.e. element-wise multiplication of corresponding matrix entries; the σ nonlinear activation function is the Sigmoid function s(x) = (1 + e^(-x))^(-1), which constrains the gate structures in the model to the range [0, 1];
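One CGRU step can be sketched with plain 2-D convolutions; the 3×3 kernels, the tanh candidate activation, and the blend convention Ht = (1 - Zt) ∘ Ht-1 + Zt ∘ H't (the standard GRU form) are assumptions of this sketch, since the source formula is not fully legible:

```python
import numpy as np

def conv2d_same(x, k):
    """'Same'-padded 2-D convolution (cross-correlation), numpy only."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(k * xp[i:i+kh, j:j+kw])
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cgru_step(X_t, H_prev, W):
    """One CGRU step: convolutional update gate Z, reset gate R,
    candidate state H', and gated blend with the previous hidden
    state. W is a dict of kernels named after the weight symbols."""
    Z = sigmoid(conv2d_same(X_t, W['xz']) + conv2d_same(H_prev, W['hz']))
    R = sigmoid(conv2d_same(X_t, W['xr']) + conv2d_same(H_prev, W['hr']))
    H_cand = np.tanh(conv2d_same(X_t, W['xh']) + R * conv2d_same(H_prev, W['hh']))
    return (1.0 - Z) * H_prev + Z * H_cand

rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 3)) * 0.1
     for k in ('xz', 'hz', 'xr', 'hr', 'xh', 'hh')}
X_t = rng.standard_normal((8, 8))
H = cgru_step(X_t, np.zeros((8, 8)), W)
```

Because the gates are produced by convolutions rather than full matrix products, each hidden-state pixel depends only on a local space-time neighbourhood, which is exactly the property the CGRU uses to preserve spatial structure.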
Furthermore, the invention introduces the batch normalization (BN) method and uses the ReLU nonlinear activation function in place of the traditional Sigmoid to accelerate network convergence and alleviate overfitting; this significantly strengthens the model's space-time feature learning, giving it stronger feature-expression capability over multi-frame radar echo maps and improving prediction accuracy.
Further, the radar echo image data used throughout the training and prediction process are constructed as a three-dimensional tensor X ∈ R^(T×W×H);
wherein R represents the set of real numbers; T is the time dimension; W, H are the row and column space dimensions, respectively; each individual echo image is converted into a vector over multiple time frames on the spatial grid, and consecutive images are stacked in temporal order to form the three-dimensional structure.
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
The invention proposes, for the first time, a deep learning method with a 3DCNN-CGRU encoding-prediction structure for the radar echo nowcasting task. For the 3DCNN-CGRU network structure, the dimensions of the echo image input data are first reconstructed, building the time dimension and the space dimension of the data separately. During space-time feature extraction and motion-information learning, both input and output are three-dimensional tensors and the state-to-state transitions are three-dimensional tensor convolutions, so the radar echo data keep a unified dimensionality and all temporal and spatial characteristics are retained, allowing more comprehensive and accurate forecasting of the radar echoes in the region. The 3DCNN is first used to extract local short-term space-time features, avoiding the spatial-feature confusion caused by learning directly with a CGRU network; at the same time, the CGRU structure learns the global long-term motion trend of the radar echoes more fully, reduces network parameters, and accelerates convergence. The method reduces blurring in the predicted echo images and addresses the problems of easily lost space-time information and low prediction precision; under various rainfall thresholds its overall performance is clearly better than that of other radar echo nowcasting methods, and the predicted future echo images are more accurate, which fully demonstrates the effectiveness of the method.
Drawings
FIG. 1 is a flow chart of the radar echo nowcasting method with strong space-time characteristics based on the 3DCNN-CGRU network;
fig. 2 is a diagram of a CGRU network structure.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
The invention discloses a CGRU-based radar echo nowcasting method with strong space-time characteristics, which specifically comprises the following steps:
(1) acquiring a continuous radar echo image sequence for weather nowcasting; compared with a single radar image, an image sequence better reflects the temporal correlation of the meteorological data; the continuous radar echo image sequence is then preprocessed to obtain tensor data with unified time and space dimensions, yielding three-dimensional tensor data with complete space-time characteristics;
wherein the tensor data is a three-dimensional tensor X ∈ R^(T×W×H); in the formula, R represents the set of real numbers; T is the time dimension; W, H are the row and column space dimensions, respectively;
the sequence of successive radar echo images is represented by Y(t) = {y1, y2, ..., yN}, t = 1, 2, ..., N; wherein t represents time; N represents the length of the radar echo image sequence;
(2) the method comprises the following steps of constructing and training a 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model, and specifically comprises the following steps:
(2.1) acquiring a continuous historical radar echo image sequence by taking the first continuous time sequence and the second continuous time sequence as sliding windows; wherein the first continuous-time series is temporally continuous with the second continuous-time series;
(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time dimension and space dimension; simultaneously setting tensor data of the radar echo image of each time frame in a first continuous time sequence as training data; setting tensor data of the radar echo image of each time frame in the second continuous time sequence as live data;
(2.3) establishing a 3DCNN-CGRU network training model, inputting the tensor data of the historical radar echo images into the 3DCNN-CGRU network training model, performing iterative prediction, calculating the difference between the live data of the radar echo images in the continuous time sequence and the model prediction output, and updating the 3DCNN-CGRU network weights through back propagation until the loss function value MSE converges, which indicates that training is complete and yields the 3DCNN-CGRU network model;
wherein the loss function of the 3DCNN-CGRU network training model is the pixel-level mean square error (MSE) of the continuous radar echo image sequence:
LMSE = (1/N) Σn=1..N Σi=1..A Σj=1..B ( Yn(i,j) - Ŷn(i,j) )²
in the formula, LMSE represents the loss function value; Y represents the real live data; Ŷ represents the model prediction output data; N is the length of the continuous time sequence; n is the summation index; A, B represent the extents of the abscissa and ordinate of the radar echo image, respectively.
(3) Inputting tensor data of the continuous radar echo image sequence for weather nowcasting in the step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result;
Further, the prediction output obtained by training on the echo images of each time frame of the first continuous time sequence corresponds, frame by frame, to the live data of the echo images of each time frame of the second continuous time sequence; the iterative prediction iterates over the radar echo image of each time frame of the second continuous time sequence.
Further, the 3DCNN-CGRU network model consists of a coding network and a prediction network;
furthermore, the coding network is composed of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the radar echo image sequence;
the 3DCNN is used for extracting local short space-time motion characteristics of a continuous radar echo image sequence; the three layers of the CGRU networks are used for learning the global long-time space characteristic dependency relationship of the continuous radar echo image sequence and compressing the space-time characteristics of the radar echo motion obtained by learning into a hidden state;
further, the prediction network is composed of three layers of CGRU networks and 3DCNN networks; and the prediction network takes the output of the encoder as input, reversely reconstructs the image according to the characteristic information of the current echo image, generates a future echo image sequence and further obtains a weather forecast result.
Furthermore, convolutional neural networks are particularly suitable for processing image data owing to their feature mapping, local connectivity, weight sharing, and similar characteristics. A conventional 2DCNN network has strong feature-extraction capability for image data, but when processing continuous echo images it does not consider the influence of the connection between consecutive frames on prediction, so information about the motion trend between features is easily lost and the prediction of moving images cannot be handled. The invention therefore replaces the traditional 2DCNN with the constructed 3DCNN, whose calculation formula is as follows:
vij(T,W,H) = f( bij + Σm Σp Σq Σr wijm(p,q,r) · v(i-1)m(W+p, H+q, T+r) ), with p = 0, ..., Pi-1; q = 0, ..., Qi-1; r = 0, ..., Ri-1
in the formula, vij(T,W,H) represents the output of the unit at position (T, W, H) of the j-th radar echo feature map of the i-th layer in the 3DCNN; T represents the time dimension; W, H are the row and column space dimensions, respectively; f represents a nonlinear activation function; bij represents the bias parameter of the j-th radar echo feature map of the i-th layer in the 3DCNN; wijm(p,q,r) represents the weight of the convolution kernel connected to the m-th feature map of the (i-1)-th layer; p, q, r represent the offsets of the convolution operation relative to the position (T, W, H); v(i-1)m(W+p, H+q, T+r) represents the output of the unit (W+p, H+q, T+r) of the m-th radar echo feature map in the (i-1)-th layer; Pi, Qi, Ri represent the sizes of the three dimensions of the convolution kernel;
Furthermore, the invention provides a CGRU network structure which, by modifying the GRU network structure, changes the state-to-state transitions from multiplication operations to convolution operations; it can thus not only establish time-series relations but also describe spatial characteristics, effectively alleviating the loss of spatial information during time-series propagation.
Wherein each CGRU network unit receives the temporal and spatial outputs of the 3DCNN network, and the structural calculation process is as follows:
Zt = σ(Wxz * Xt + Whz * Ht-1)
Rt = σ(Wxr * Xt + Whr * Ht-1)
H't = f(Wxh * Xt + Rt ∘ (Whh * Ht-1))
Ht = (1 - Zt) ∘ Ht-1 + Zt ∘ H't
in the formula, Zt represents the update gate in the CGRU network structure; Rt represents the reset gate in the CGRU network structure; Xt represents the radar echo map input at time t; Ht represents the hidden-layer output at time t; Ht-1 represents the hidden-layer output at time t-1; Wxz represents the weight parameter from the input to the update gate in the CGRU network; Whz represents the weight parameter from the hidden layer to the update gate; Wxr represents the weight parameter from the input to the reset gate; Whr represents the weight parameter from the hidden layer to the reset gate; H't represents the candidate memory content of the hidden layer at time t; f represents a nonlinear activation function; Wxh represents the weight parameter from the input to the hidden layer; Whh represents the hidden-layer-to-hidden-layer weight parameter; the term Rt ∘ (Whh * Ht-1) controls how each unit screens the retained radar space-time information; * denotes the convolution operation and ∘ the Hadamard product, i.e. element-wise multiplication of corresponding matrix entries; the σ nonlinear activation function is the Sigmoid function s(x) = (1 + e^(-x))^(-1), which constrains the gate structures in the model to the range [0, 1];
Furthermore, the invention introduces the batch normalization (BN) method and uses the ReLU nonlinear activation function in place of the traditional Sigmoid to accelerate network convergence and alleviate overfitting; this significantly strengthens the model's space-time feature learning, giving it stronger feature-expression capability over multi-frame radar echo maps and improving prediction accuracy.
Further, the radar echo image data used throughout the training and prediction process are constructed as a three-dimensional tensor X ∈ R^(T×W×H);
wherein R represents the set of real numbers; T is the time dimension; W, H are the row and column space dimensions, respectively; each individual echo image is converted into a vector over multiple time frames on the spatial grid, and consecutive images are stacked in temporal order to form the three-dimensional structure.

Claims (7)

1. A CGRU-based radar echo nowcasting method with strong space-time characteristics is characterized by comprising the following steps: the method specifically comprises the following steps:
(1) acquiring a continuous radar echo image sequence for weather proximity prediction, and preprocessing the continuous radar echo image sequence to obtain tensor data with uniform time dimension and space dimension;
wherein the tensor data is a three-dimensional tensor X ∈ R^(T×W×H); in the formula, R represents the set of real numbers; T is the time dimension; W, H are the row and column space dimensions, respectively;
the sequence of successive radar echo images is represented by Y(t) = {y1, y2, ..., yN}, t = 1, 2, ..., N; wherein t represents time; N represents the length of the radar echo image sequence;
(2) constructing and training a 3DCNN-CGRU network training model to obtain a 3DCNN-CGRU network model;
(3) inputting the tensor data of the continuous radar echo image sequence for weather nowcasting from step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result.
2. The method as claimed in claim 1, wherein the 3DCNN-CGRU network model is composed of a coding network and a prediction network;
the coding network consists of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the continuous radar echo image sequence;
the 3DCNN is used for extracting local short space-time motion characteristics of a continuous radar echo image sequence; the three layers of the CGRU networks are used for learning the global long-time space characteristic dependency relationship of the continuous radar echo image sequence and compressing the space-time characteristics of the radar echo motion obtained by learning into a hidden state;
the prediction network consists of three layers of CGRU networks and 3DCNN networks; and the prediction network takes the output of the encoder as input, reversely reconstructs the image according to the characteristic information of the current echo image, generates a future echo image sequence and further obtains a weather forecast result.
3. The method as claimed in claim 2, wherein the 3DCNN network is calculated as follows:

v_ij^{T,W,H} = f( b_ij + Σ_m Σ_{p=0}^{P_i−1} Σ_{q=0}^{Q_i−1} Σ_{r=0}^{R_i−1} w_ijm^{p,q,r} · v_{(i−1)m}^{W+p,H+q,T+r} )

in the formula, v_ij^{T,W,H} represents the output of the unit at position (T, W, H) of the j-th radar echo feature map in the i-th layer of the 3DCNN; T represents the time dimension; W, H are the row and column spatial dimensions, respectively; f represents a nonlinear activation function; b_ij represents the bias parameter of the j-th radar echo feature map in the i-th layer of the 3DCNN; w_ijm^{p,q,r} represents the weight of the convolution kernel connected to the m-th feature map of layer (i−1); p, q and r represent the offsets of the convolution operation relative to the unit at position (T, W, H); v_{(i−1)m}^{W+p,H+q,T+r} represents the output of the unit (W+p, H+q, T+r) of the m-th radar echo feature map in layer (i−1); P_i, Q_i and R_i respectively represent the sizes of the convolution kernel along the three dimensions.
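A direct, loop-based NumPy transcription of this sum for a single output unit may help check the indexing; the tanh activation and the toy kernel sizes are assumptions, not values fixed by the claim:

```python
import numpy as np

def conv3d_unit(v_prev, w, b, T, W, H, f=np.tanh):
    """Single-unit form of the claim-3 sum:
    v_ij^{T,W,H} = f(b_ij + sum_m sum_p sum_q sum_r
                     w_ijm^{p,q,r} * v_{(i-1)m}^{W+p, H+q, T+r})
    v_prev: layer i-1 feature maps, shape (M, T_dim, W_dim, H_dim)
    w:      kernel weights, shape (M, P_i, Q_i, R_i)"""
    M, P, Q, R = w.shape
    acc = b
    for m in range(M):
        for p in range(P):
            for q in range(Q):
                for r in range(R):
                    acc += w[m, p, q, r] * v_prev[m, T + r, W + p, H + q]
    return f(acc)

# toy check: M=2 input maps, kernel P_i=3, Q_i=3, R_i=2, all-ones input
v_prev = np.ones((2, 4, 6, 6))
w = np.full((2, 3, 3, 2), 0.01)    # 2*3*3*2 = 36 weights of 0.01
out = conv3d_unit(v_prev, w, b=0.0, T=0, W=0, H=0)   # f(0.36)
```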
4. The method as claimed in claim 3, wherein the CGRU network structure is obtained by adjusting the GRU network structure, changing the state-to-state transitions from multiplication to convolution; the CGRU network structure calculation process is as follows:

Z_t = σ(W_xz * X_t + W_hz * H_{t−1})
R_t = σ(W_xr * X_t + W_hr * H_{t−1})
H′_t = f(W_xh * X_t + R_t ∘ (W_hh * H_{t−1}))
H_t = (1 − Z_t) ∘ H′_t + Z_t ∘ H_{t−1}

in the formula, Z_t represents the update gate in the CGRU network structure; R_t represents the reset gate in the CGRU network structure; X_t represents the radar echo image input at time t; H_t represents the hidden-layer output at time t; H_{t−1} represents the hidden-layer output at time t−1; W_xz represents the input-to-update-gate weight parameters in the CGRU network; W_hz represents the hidden-layer-to-update-gate weight parameters; W_xr represents the input-to-reset-gate weight parameters in the CGRU network; W_hr represents the hidden-layer-to-reset-gate weight parameters; H′_t represents the candidate memory content of the hidden layer at time t; f represents a nonlinear activation function; W_xh represents the input-to-hidden-layer weight parameters in the CGRU network; W_hh represents the hidden-layer-to-hidden-layer weight parameters; * denotes the convolution operation; the term R_t ∘ (W_hh * H_{t−1}) controls how each unit screens the radar spatiotemporal information; ∘ denotes the Hadamard product, i.e. multiplication of corresponding matrix elements; the σ nonlinear activation function is the Sigmoid, s(x) = (1 + e^{−x})^{−1}, which constrains the gate structures in the model to the value range [0, 1].
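One CGRU step can be sketched in NumPy; for brevity the convolutions are reduced to 1×1 (scalar weights broadcast over the grid), which keeps the four gate equations intact while a real implementation would use e.g. 3×3 kernels:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cgru_step(x_t, h_prev, p):
    """One CGRU update following the four claim-4 equations, with the
    convolutions '*' reduced to scalar (1x1) weights for brevity."""
    z = sigmoid(p["wxz"] * x_t + p["whz"] * h_prev)             # update gate Z_t
    r = sigmoid(p["wxr"] * x_t + p["whr"] * h_prev)             # reset gate R_t
    h_cand = np.tanh(p["wxh"] * x_t + r * (p["whh"] * h_prev))  # H'_t
    return (1.0 - z) * h_cand + z * h_prev                      # H_t

p = {k: 0.5 for k in ("wxz", "whz", "wxr", "whr", "wxh", "whh")}
x_t = np.ones((8, 8))                  # toy 8x8 echo "image" at time t
h_next = cgru_step(x_t, np.zeros((8, 8)), p)
```

With a zero input and zero previous state, both gates sit at sigmoid(0) = 0.5 and the new hidden state stays zero, matching the equations.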
5. The method as claimed in claim 4, wherein the step (2) of constructing and training the 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model comprises:
(2.1) acquiring continuous historical radar echo image sequences by taking the first continuous time sequence and the second continuous time sequence as sliding windows, wherein the first continuous time sequence is temporally continuous with the second continuous time sequence;
(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time and space dimensions; setting the tensor data of the radar echo image of each time frame in the first continuous time sequence as training data, and setting the tensor data of the radar echo image of each time frame in the second continuous time sequence as live (observed ground-truth) data;
(2.3) establishing a 3DCNN-CGRU network training model, inputting the tensor data of the historical radar echo images into the 3DCNN-CGRU network training model for iterative prediction, calculating the difference between the live data of the radar echo images in the second continuous time sequence and the model prediction output data, and updating the 3DCNN-CGRU network weights through back propagation until the loss function value MSE converges, which indicates that training is complete and yields the 3DCNN-CGRU network model.
6. The CGRU-based radar echo nowcasting method with strong spatiotemporal characteristics according to claim 5, wherein the prediction output data obtained by training on the echo image data of each time frame of the first continuous time sequence correspond to the live data of the echo image of each time frame of the second continuous time sequence; the iterative prediction iterates over the radar echo image of each time frame of the second continuous time sequence.
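The sliding-window pairing of first (training) and second (live) continuous time sequences in steps (2.1)-(2.2) can be sketched as follows; the window lengths are illustrative, not values from the claims:

```python
import numpy as np

def sliding_windows(seq, n_in, n_out):
    """Split a historical echo sequence into (first, second) pairs of
    continuous time sequences: n_in training frames immediately followed
    by n_out live frames, advancing one frame per window."""
    pairs = []
    for s in range(len(seq) - n_in - n_out + 1):
        pairs.append((seq[s:s + n_in], seq[s + n_in:s + n_in + n_out]))
    return pairs

seq = np.arange(12)                    # 12 toy frames indexed by time
pairs = sliding_windows(seq, n_in=5, n_out=3)
```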
7. The CGRU-based radar echo nowcasting method with strong spatiotemporal characteristics according to claim 5, wherein the loss function of the 3DCNN-CGRU network training model in step (2.3) is the pixel-level mean square error (MSE) of the radar echo image:

MSE = (1/N) Σ_{n=1}^{N} Σ_A Σ_B ( y_{n,A,B} − ŷ_{n,A,B} )²

in the formula, MSE represents the loss function value; y represents the real live data; ŷ represents the model prediction output data; N is the length of the radar echo image sequence; n is the counting unit; A and B represent the abscissa and ordinate of the radar echo image, respectively.
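The pixel-level MSE above can be computed directly; a small NumPy sketch:

```python
import numpy as np

def pixel_mse(y, y_hat):
    """MSE = (1/N) * sum_n sum_A sum_B (y[n, A, B] - y_hat[n, A, B])**2
    y, y_hat: sequences of N radar echo images, shape (N, H, W)."""
    N = y.shape[0]
    return float(np.sum((y - y_hat) ** 2) / N)

y = np.zeros((4, 10, 10))             # 4 live frames
y_hat = np.full((4, 10, 10), 0.1)     # constant-error prediction
loss = pixel_mse(y, y_hat)            # 100 pixels * 0.1**2 = 1.0 per frame
```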
CN202011493039.3A 2020-12-17 2020-12-17 CGRU-based radar echo nowcasting method with strong spatiotemporal characteristics Pending CN112415521A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011493039.3A CN112415521A (en) 2020-12-17 2020-12-17 CGRU-based radar echo nowcasting method with strong spatiotemporal characteristics

Publications (1)

Publication Number Publication Date
CN112415521A true CN112415521A (en) 2021-02-26

Family

ID=74775739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011493039.3A Pending CN112415521A (en) CGRU-based radar echo nowcasting method with strong spatiotemporal characteristics

Country Status (1)

Country Link
CN (1) CN112415521A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108732550A (en) * 2018-08-01 2018-11-02 北京百度网讯科技有限公司 Method and apparatus for predicting radar return
CN111489525A (en) * 2020-03-30 2020-08-04 南京信息工程大学 Multi-data fusion meteorological prediction early warning method
CN111708030A (en) * 2020-05-28 2020-09-25 深圳市气象局(深圳市气象台) Disaster weather forecasting method based on energy generation antagonism predictor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUTING CHEN: "Strong Spatiotemporal Radar Echo Nowcasting Combining 3DCNN and Bi-Directional Convolutional LSTM" *
陈训来 (CHEN Xunlai): "Research on a nowcasting method based on convolutional gated recurrent unit neural networks" *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906640A (en) * 2021-03-19 2021-06-04 电子科技大学 Space-time situation prediction method and device based on deep learning and readable storage medium
CN112949934A (en) * 2021-03-25 2021-06-11 浙江万里学院 Short-term heavy rainfall prediction method based on deep learning
CN113486919A (en) * 2021-05-24 2021-10-08 浙江大学 Regional cloud picture prediction method based on deep learning
CN113610329A (en) * 2021-10-08 2021-11-05 南京信息工程大学 Short-time rainfall approaching forecasting method of double-current convolution long-short term memory network
CN113610329B (en) * 2021-10-08 2022-01-04 南京信息工程大学 Short-time rainfall approaching forecasting method of double-current convolution long-short term memory network
CN113936142A (en) * 2021-10-13 2022-01-14 成都信息工程大学 Rainfall approach forecasting method and device based on deep learning
CN114460555A (en) * 2022-04-08 2022-05-10 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar echo extrapolation method and device and storage medium
CN115792913A (en) * 2022-05-16 2023-03-14 湖南师范大学 Radar echo extrapolation method and system based on time-space network
CN115792913B (en) * 2022-05-16 2023-08-22 湖南师范大学 Radar echo extrapolation method and system based on space-time network
CN115016042A (en) * 2022-06-06 2022-09-06 湖南师范大学 Precipitation prediction method and system based on multi-encoder fusion radar and precipitation information
CN117808650A (en) * 2024-02-29 2024-04-02 南京信息工程大学 Precipitation prediction method based on Transform-Flown and R-FPN
CN117808650B (en) * 2024-02-29 2024-05-14 南京信息工程大学 Precipitation prediction method based on Transform-Flownet and R-FPN

Similar Documents

Publication Publication Date Title
CN112415521A (en) CGRU-based radar echo nowcasting method with strong spatiotemporal characteristics
CN106407889B (en) Method for recognizing human body interaction in video based on optical flow graph deep learning model
CN113094357B (en) Traffic missing data completion method based on space-time attention mechanism
CN112699956B (en) Neuromorphic visual target classification method based on improved impulse neural network
CN109001736B (en) Radar echo extrapolation method based on deep space-time prediction neural network
CN110728698B (en) Multi-target tracking system based on composite cyclic neural network system
CN112183886B (en) Short-time adjacent rainfall prediction method based on convolution network and attention mechanism
CN110097028B (en) Crowd abnormal event detection method based on three-dimensional pyramid image generation network
CN112949828A (en) Graph convolution neural network traffic prediction method and system based on graph learning
CN109829495A (en) Timing image prediction method based on LSTM and DCGAN
CN117665825B (en) Radar echo extrapolation prediction method, system and storage medium
CN115310724A (en) Precipitation prediction method based on Unet and DCN _ LSTM
CN114943365A (en) Rainfall estimation model establishing method fusing multi-source data and rainfall estimation method
CN113988357B (en) Advanced learning-based high-rise building wind induced response prediction method and device
CN112365091A (en) Radar quantitative precipitation estimation method based on classification node map attention network
CN116148796A (en) Strong convection weather proximity forecasting method based on radar image extrapolation
CN111708030A (en) Disaster weather forecasting method based on energy generation antagonism predictor
CN115792853A (en) Radar echo extrapolation method based on dynamic weight loss
CN116592883A (en) Navigation decision method based on attention and cyclic PPO
CN115902806A (en) Multi-mode-based radar echo extrapolation method
CN117131991A (en) Urban rainfall prediction method and platform based on hybrid neural network
CN113341419B (en) Weather extrapolation method and system based on VAN-ConvLSTM
CN116822592A (en) Target tracking method based on event data and impulse neural network
CN116844041A (en) Cultivated land extraction method based on bidirectional convolution time self-attention mechanism
CN116148864A (en) Radar echo extrapolation method based on DyConvGRU and Unet prediction refinement structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210226