CN112415521A - CGRU-based radar echo nowcasting method with strong space-time characteristics - Google Patents
- Publication number
- CN112415521A (application CN202011493039.3A)
- Authority
- CN
- China
- Prior art keywords
- cgru
- radar echo
- time
- network
- 3dcnn
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/95—Radar or analogous systems specially adapted for specific applications for meteorological use
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/417—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The invention discloses a CGRU-based radar echo nowcasting method with strong space-time characteristics, which comprises the following steps: (1) acquiring continuous radar echo images for weather nowcasting, preprocessing them, and constructing tensor data with unified time and space dimensions; (2) constructing and training a 3DCNN-CGRU network training model to obtain a 3DCNN-CGRU coding-prediction network model; (3) inputting the tensor data of the continuous radar echo image sequence from step (1) into the 3DCNN-CGRU network model to generate a nowcasting result. The proposed 3DCNN-CGRU network model strengthens the transmission of space-time features, captures and learns the space-time correlation of continuous radar echo images more effectively, and addresses the problems of space-time information loss and low prediction accuracy.
Description
Technical Field
The invention relates to the technical field of meteorological observation, in particular to a CGRU-based radar echo nowcasting method with strong space-time characteristics.
Background
The goal of radar echo nowcasting is to make timely and accurate predictions of the weather conditions in a local area over a relatively short period (e.g., the next 0-2 hours). The technology is now widely applied to daily travel, agricultural production, flight safety, and similar areas, bringing convenience to the public and supporting disaster prevention and mitigation. With ongoing climate change and accelerating urbanization, atmospheric conditions have become increasingly complex and meteorological disasters occur frequently, adding many uncertain risks to people's lives and work; if such disasters can be predicted and guarded against effectively, losses can be greatly reduced.
The methods currently used for radar echo prediction are mainly cross-correlation and optical-flow based methods, which have been shown to be effective for extrapolating future radar echo maps. Both conventional approaches, however, have unavoidable drawbacks: when the echo changes rapidly, the Lagrangian conservation assumption no longer holds and prediction quality degrades quickly, and traditional radar echo nowcasting methods still fall short in short-term prediction accuracy and in fully exploiting massive radar echo image data. Compared with traditional forecasting methods, deep learning can mine and analyze big data more deeply and thereby improve model prediction accuracy. As an emerging big-data-driven technology, deep learning, in particular the recurrent neural network (RNN) and the long short-term memory network (LSTM), has brought new solutions to the radar echo prediction task: by fully exploiting the massive collected radar echo map data, a network model can be trained more effectively and future echo trends predicted more accurately. Although an ordinary LSTM network can handle meteorological time series to some extent, radar echo prediction exhibits strong spatio-temporal correlation, with the space-time information at one moment determining the prediction at the next. An ordinary LSTM does not model this correlation, so space-time information is easily lost, prediction accuracy drops, and speed cannot be guaranteed.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems, the invention provides a radar echo nowcasting method with strong space-time characteristics based on CGRU.
The technical scheme is as follows: to realize the purpose of the invention, the invention adopts a CGRU-based radar echo nowcasting method with strong space-time characteristics, comprising the following specific steps:
(1) acquiring a continuous radar echo image sequence for weather nowcasting; compared with a single radar image, an image sequence better reflects the temporal correlation of meteorological data; then preprocessing the continuous radar echo image sequence to obtain tensor data with uniform time and space dimensions; processing these three-dimensional data yields tensor data with complete space-time characteristics;
wherein the tensor data is a three-dimensional tensor X ∈ R^(T×W×H); in the formula, R represents the set of real numbers; T is the time dimension; W and H are the row and column spatial dimensions, respectively;
the sequence of successive radar echo images is represented by Y(t) = {y_1, y_2, ..., y_N}, t = 1, 2, ..., N; wherein t represents time and N represents the length of the radar echo image sequence;
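As a concrete illustration of step (1), the preprocessing can be sketched in NumPy: consecutive 2-D echo frames are stacked along a new leading axis to form the three-dimensional tensor X ∈ R^(T×W×H). The frame size, the 0-255 grey-level assumption, and the normalisation are illustrative choices, not prescribed by the patent.

```python
import numpy as np

def build_spacetime_tensor(frames):
    """Stack a list of 2-D radar echo frames (W x H each) into one
    3-D tensor X of shape (T, W, H), normalised to [0, 1]."""
    X = np.stack([np.asarray(f, dtype=np.float32) for f in frames], axis=0)
    # Normalise pixel intensities (assuming 0-255 reflectivity grey levels).
    return X / 255.0

# Example: 10 consecutive 64x64 echo frames (random stand-ins).
frames = [np.random.randint(0, 256, (64, 64)) for _ in range(10)]
X = build_spacetime_tensor(frames)
print(X.shape)  # (10, 64, 64)
```

The leading axis is the time dimension T, so a single index `X[t]` recovers the echo image of one time frame.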
(2) the method comprises the following steps of constructing and training a 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model, and specifically comprises the following steps:
(2.1) acquiring a continuous historical radar echo image sequence by taking the first continuous time sequence and the second continuous time sequence as sliding windows; wherein the first continuous-time series is temporally continuous with the second continuous-time series;
(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time and space dimensions; setting the tensor data of the radar echo image of each time frame in the first continuous time sequence as training data, and setting the tensor data of the radar echo image of each time frame in the second continuous time sequence as live (ground-truth) data;
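A minimal sketch of the sliding-window construction in steps (2.1)-(2.2): each window yields a first continuous time series (training input) immediately followed by a second continuous time series (live data). The window lengths, function name, and dummy frames are hypothetical.

```python
import numpy as np

def sliding_windows(sequence, in_len, out_len):
    """Slide a window over a historical echo sequence and return
    (training, live) pairs: the first in_len frames form the model
    input, the next out_len frames form the ground-truth targets."""
    pairs = []
    for s in range(len(sequence) - in_len - out_len + 1):
        first = sequence[s : s + in_len]                       # first continuous time series
        second = sequence[s + in_len : s + in_len + out_len]   # second, temporally adjacent series
        pairs.append((np.stack(first), np.stack(second)))
    return pairs

seq = [np.zeros((64, 64)) + t for t in range(12)]  # 12 dummy echo frames
pairs = sliding_windows(seq, in_len=5, out_len=5)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)
```

Because the two series are cut from one continuous history, the target frames really are the temporal continuation of the input frames, which is what makes them usable as live data.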
(2.3) establishing a 3DCNN-CGRU network training model, inputting the tensor data of the historical radar echo images into the model, performing iterative prediction, calculating the difference between the live data of the radar echo images in the continuous time sequence and the model's prediction output, and updating the 3DCNN-CGRU network weights through back-propagation until the loss function value (MSE) converges, at which point training is complete and the 3DCNN-CGRU network model is obtained;
wherein the loss function of the 3DCNN-CGRU network training model is the pixel-level mean squared error (MSE) of the continuous radar echo image sequence:

MSE = (1/N) Σ_{n=1}^{N} Σ_{A} Σ_{B} ( Y_{n,A,B} − Ŷ_{n,A,B} )²

in the formula, MSE represents the loss function value; Y represents the real live data; Ŷ represents the model prediction output; N is the length of the continuous time sequence; n is the frame index; A and B represent the abscissa and ordinate of the radar echo image, respectively.
(3) Inputting tensor data of the continuous radar echo image sequence for weather nowcasting in the step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result;
Further, the prediction output obtained by training on the echo images of each time frame of the first continuous time sequence corresponds, frame by frame, to the live data of the echo images of the second continuous time sequence; the iterative prediction iterates over the radar echo image of each time frame of the second continuous time sequence.
Further, the 3DCNN-CGRU network model consists of a coding network and a prediction network;
furthermore, the coding network is composed of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the radar echo image sequence;
the 3DCNN is used for extracting local short space-time motion characteristics of a continuous radar echo image sequence; the three layers of the CGRU networks are used for learning the global long-time space characteristic dependency relationship of the continuous radar echo image sequence and compressing the space-time characteristics of the radar echo motion obtained by learning into a hidden state;
further, the prediction network is composed of three layers of CGRU networks and 3DCNN networks; and the prediction network takes the output of the encoder as input, reversely reconstructs the image according to the characteristic information of the current echo image, generates a future echo image sequence and further obtains a weather forecast result.
Furthermore, the convolutional neural network, with its feature mapping, local connectivity, and weight sharing, is particularly suited to processing image data. A conventional 2DCNN has strong feature-extraction capability for single images, but when processing continuous echo images it ignores the influence of the connection between consecutive frames on prediction, so information about the motion trends among features is easily lost and the moving-image prediction problem cannot be solved. The invention therefore replaces the traditional 2DCNN with a constructed 3DCNN, whose calculation formula is as follows:
v_{ij}^{TWH} = f( b_{ij} + Σ_m Σ_{p=0}^{P_i−1} Σ_{q=0}^{Q_i−1} Σ_{r=0}^{R_i−1} w_{ijm}^{pqr} · v_{(i−1)m}^{(W+p)(H+q)(T+r)} )

in the formula, v_{ij}^{TWH} represents the output of the unit at position (T, W, H) of the j-th radar echo feature map of the i-th layer in the 3DCNN; T represents the time dimension; W and H are the row and column spatial dimensions, respectively; f represents a nonlinear activation function; b_{ij} represents the bias parameter of the j-th radar echo feature map of the i-th layer; w_{ijm}^{pqr} represents the weight of the convolution kernel connected to the m-th feature map of layer (i−1); p, q, and r index the positions of the convolution operation relative to the unit at (T, W, H); v_{(i−1)m}^{(W+p)(H+q)(T+r)} represents the output of unit (W+p, H+q, T+r) of the m-th radar echo feature map in layer (i−1); P_i, Q_i, R_i respectively denote the sizes of the three dimensions of the convolution kernel;
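The 3DCNN formula can be illustrated with a naive single-feature-map 3-D convolution: a bias plus a weighted sum over the kernel window, passed through a nonlinearity. The sum over input maps m, padding, and stride are omitted for clarity, so this is a sketch of the operation rather than the patent's full layer.

```python
import numpy as np

def conv3d_single(v_prev, w, b, f=np.tanh):
    """Naive valid 3-D convolution (deep-learning convention, i.e.
    cross-correlation) for one input and one output feature map:
    out[t, x, y] = f(b + sum over kernel window of w * v_prev)."""
    Tn, Wn, Hn = v_prev.shape
    R, P, Q = w.shape  # kernel sizes along the three dimensions
    out = np.zeros((Tn - R + 1, Wn - P + 1, Hn - Q + 1))
    for t in range(out.shape[0]):
        for x in range(out.shape[1]):
            for y in range(out.shape[2]):
                out[t, x, y] = f(b + np.sum(w * v_prev[t:t+R, x:x+P, y:y+Q]))
    return out

v = np.random.rand(6, 10, 10)   # 6 time frames of a 10x10 feature map
k = np.random.rand(3, 3, 3)     # 3x3x3 kernel (an illustrative size)
print(conv3d_single(v, k, 0.0).shape)  # (4, 8, 8)
```

Because the kernel slides along time as well as space, each output unit mixes several consecutive frames, which is exactly what lets the 3DCNN capture short-term motion that a 2DCNN cannot.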
Furthermore, the invention provides a CGRU network structure that modifies the standard GRU by changing the state-to-state transformations from multiplication operations to convolution operations, so that the network can both establish temporal relations and describe spatial features, effectively solving the problem of spatial information loss during time-series transmission.
Each CGRU unit receives the temporal and spatial outputs of the 3DCNN network, and its structure is computed as follows:
Z_t = σ(W_xz * X_t + W_hz * H_{t−1})
R_t = σ(W_xr * X_t + W_hr * H_{t−1})
H'_t = f(W_xh * X_t + W_hh * (R_t ∘ H_{t−1}))
H_t = (1 − Z_t) ∘ H_{t−1} + Z_t ∘ H'_t
In the formulas, Z_t represents the update gate in the CGRU structure; R_t represents the reset gate; X_t represents the radar echo map input at time t; H_t represents the hidden-layer output at time t; H_{t−1} represents the hidden-layer output at time t−1; W_xz represents the weight parameters from the input to the update gate; W_hz the weights from the hidden layer to the update gate; W_xr the weights from the input to the reset gate; W_hr the weights from the hidden layer to the reset gate; H'_t represents the candidate memory content of the hidden layer at time t; f represents a nonlinear activation function; W_xh represents the weights from the input to the hidden layer; W_hh the hidden-to-hidden weights; * denotes the convolution operation; the gates control how each unit screens the radar space-time information; ∘ is the Hadamard product, i.e., element-wise multiplication of matrices; the σ nonlinear activation function is the Sigmoid, s(x) = (1 + e^(−x))^(−1), which keeps the gate values in the model within the range [0, 1];
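One CGRU update step can be sketched for a single-channel state in NumPy, following the gate equations: convolutions replace the GRU's matrix multiplications, and the Hadamard product blends the old and candidate states. The 3×3 kernel size, tanh as the candidate activation f, and the kernel names are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_same(x, k):
    """'Same'-padded 2-D convolution (deep-learning convention),
    single channel, odd-sized kernel."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(k * xp[i:i + kh, j:j + kw])
    return out

def cgru_step(X_t, H_prev, W):
    """One CGRU update: * is convolution, elementwise products are Hadamard."""
    Z = sigmoid(conv_same(X_t, W['xz']) + conv_same(H_prev, W['hz']))  # update gate
    R = sigmoid(conv_same(X_t, W['xr']) + conv_same(H_prev, W['hr']))  # reset gate
    H_cand = np.tanh(conv_same(X_t, W['xh']) + conv_same(R * H_prev, W['hh']))
    return (1 - Z) * H_prev + Z * H_cand  # blend old state and candidate memory

rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(3, 3))
     for k in ('xz', 'hz', 'xr', 'hr', 'xh', 'hh')}
H = cgru_step(rng.random((16, 16)), np.zeros((16, 16)), W)
print(H.shape)  # (16, 16)
```

Because every transformation is a convolution, the hidden state H_t keeps the same spatial layout as the input echo map, which is the point of the CGRU over a plain GRU.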
Furthermore, the invention applies a batch normalization (BN) method and uses the ReLU nonlinear activation function in place of the traditional Sigmoid, speeding up network convergence and alleviating overfitting; this markedly strengthens the model's space-time feature learning, gives it stronger feature-expression capability for multi-frame radar echo maps, and improves prediction accuracy.
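The BN-plus-ReLU combination can be illustrated as follows. The per-channel normalisation with gamma = 1 and beta = 0 and the (batch, channel, W, H) layout are simplifying assumptions; a trained BN layer would also carry learnable scale and shift parameters.

```python
import numpy as np

def batch_norm_relu(x, eps=1e-5):
    """Normalise a batch of feature maps to zero mean / unit variance
    per channel, then apply ReLU (gamma = 1, beta = 0 for simplicity)."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)  # per-channel mean
    var = x.var(axis=(0, 2, 3), keepdims=True)    # per-channel variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return np.maximum(x_hat, 0.0)  # ReLU

x = np.random.default_rng(1).normal(size=(8, 4, 16, 16))  # (batch, ch, W, H)
y = batch_norm_relu(x)
print(y.shape, y.min() >= 0.0)
```

Unlike the Sigmoid, ReLU does not saturate for large positive inputs, which is one reason the substitution helps convergence.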
Further, the data of the radar echo images in the training and prediction process are all constructed as a three-dimensional tensor X ∈ R^(T×W×H);
wherein R represents the set of real numbers; T is the time dimension; W and H are the row and column spatial dimensions, respectively; each individual echo image is placed on a spatial grid and extended along the multi-frame time dimension, and the continuous images are stacked in order to form the three-dimensional structure.
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
the invention provides a deep learning method of a 3DCNN-CGRU coding prediction structure for the first time aiming at a radar echo proximity prediction task. Aiming at a 3DCNN-CGRU network structure, the dimension of echo image input data needs to be reconstructed first, and the time dimension and the space dimension of the data are respectively constructed; in the processes of space-time feature extraction and motion information learning, input and output are three-dimensional tensors, and conversion between states is three-dimensional tensor convolution operation, so that radar echo data have uniform dimensionality, all time and space characteristics are reserved, and radar echoes in the region are more comprehensively and accurately forecasted; the 3DCNN provided by the invention is firstly used for extracting local short-term space-time characteristics, so that spatial characteristic confusion caused by directly utilizing a CGRU network for learning is avoided, meanwhile, the CGRU structure can more fully learn the global long-term motion trend of forward and backward radar echoes, network parameters are reduced, and the convergence speed is accelerated; the method improves the fuzzy condition of the predicted echo image, solves the problems of easy loss of space-time information and low prediction precision, has obviously better overall performance than other radar echo adjacent prediction methods under various rainfall threshold values, has more accurate predicted future echo image, and fully proves the effectiveness of the method.
Drawings
FIG. 1 is a flow chart of the radar echo nowcasting method with strong space-time characteristics based on the 3DCNN-CGRU network;
fig. 2 is a diagram of a CGRU network structure.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
The invention discloses a CGRU-based radar echo nowcasting method with strong space-time characteristics, which specifically comprises the following steps:
(1) acquiring a continuous radar echo image sequence for weather nowcasting; compared with a single radar image, an image sequence better reflects the temporal correlation of meteorological data; then preprocessing the continuous radar echo image sequence to obtain tensor data with uniform time and space dimensions; processing these three-dimensional data yields tensor data with complete space-time characteristics;
wherein the tensor data is a three-dimensional tensor X ∈ R^(T×W×H); in the formula, R represents the set of real numbers; T is the time dimension; W and H are the row and column spatial dimensions, respectively;
the sequence of successive radar echo images is represented by Y(t) = {y_1, y_2, ..., y_N}, t = 1, 2, ..., N; wherein t represents time and N represents the length of the radar echo image sequence;
(2) the method comprises the following steps of constructing and training a 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model, and specifically comprises the following steps:
(2.1) acquiring a continuous historical radar echo image sequence by taking the first continuous time sequence and the second continuous time sequence as sliding windows; wherein the first continuous-time series is temporally continuous with the second continuous-time series;
(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time dimension and space dimension; simultaneously setting tensor data of the radar echo image of each time frame in a first continuous time sequence as training data; setting tensor data of the radar echo image of each time frame in the second continuous time sequence as live data;
(2.3) establishing a 3DCNN-CGRU network training model, inputting the tensor data of the historical radar echo images into the model, performing iterative prediction, calculating the difference between the live data of the radar echo images in the continuous time sequence and the model's prediction output, and updating the 3DCNN-CGRU network weights through back-propagation until the loss function value (MSE) converges, at which point training is complete and the 3DCNN-CGRU network model is obtained;
wherein the loss function of the 3DCNN-CGRU network training model is the pixel-level mean squared error (MSE) of the continuous radar echo image sequence:

MSE = (1/N) Σ_{n=1}^{N} Σ_{A} Σ_{B} ( Y_{n,A,B} − Ŷ_{n,A,B} )²

in the formula, MSE represents the loss function value; Y represents the real live data; Ŷ represents the model prediction output; N is the length of the continuous time sequence; n is the frame index; A and B represent the abscissa and ordinate of the radar echo image, respectively.
(3) Inputting tensor data of the continuous radar echo image sequence for weather nowcasting in the step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result;
Further, the prediction output obtained by training on the echo images of each time frame of the first continuous time sequence corresponds, frame by frame, to the live data of the echo images of the second continuous time sequence; the iterative prediction iterates over the radar echo image of each time frame of the second continuous time sequence.
Further, the 3DCNN-CGRU network model consists of a coding network and a prediction network;
furthermore, the coding network is composed of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the radar echo image sequence;
the 3DCNN is used for extracting local short space-time motion characteristics of a continuous radar echo image sequence; the three layers of the CGRU networks are used for learning the global long-time space characteristic dependency relationship of the continuous radar echo image sequence and compressing the space-time characteristics of the radar echo motion obtained by learning into a hidden state;
further, the prediction network is composed of three layers of CGRU networks and 3DCNN networks; and the prediction network takes the output of the encoder as input, reversely reconstructs the image according to the characteristic information of the current echo image, generates a future echo image sequence and further obtains a weather forecast result.
Furthermore, the convolutional neural network, with its feature mapping, local connectivity, and weight sharing, is particularly suited to processing image data. A conventional 2DCNN has strong feature-extraction capability for single images, but when processing continuous echo images it ignores the influence of the connection between consecutive frames on prediction, so information about the motion trends among features is easily lost and the moving-image prediction problem cannot be solved. The invention therefore replaces the traditional 2DCNN with a constructed 3DCNN, whose calculation formula is as follows:
v_{ij}^{TWH} = f( b_{ij} + Σ_m Σ_{p=0}^{P_i−1} Σ_{q=0}^{Q_i−1} Σ_{r=0}^{R_i−1} w_{ijm}^{pqr} · v_{(i−1)m}^{(W+p)(H+q)(T+r)} )

in the formula, v_{ij}^{TWH} represents the output of the unit at position (T, W, H) of the j-th radar echo feature map of the i-th layer in the 3DCNN; T represents the time dimension; W and H are the row and column spatial dimensions, respectively; f represents a nonlinear activation function; b_{ij} represents the bias parameter of the j-th radar echo feature map of the i-th layer; w_{ijm}^{pqr} represents the weight of the convolution kernel connected to the m-th feature map of layer (i−1); p, q, and r index the positions of the convolution operation relative to the unit at (T, W, H); v_{(i−1)m}^{(W+p)(H+q)(T+r)} represents the output of unit (W+p, H+q, T+r) of the m-th radar echo feature map in layer (i−1); P_i, Q_i, R_i respectively denote the sizes of the three dimensions of the convolution kernel;
Furthermore, the invention provides a CGRU network structure that modifies the standard GRU by changing the state-to-state transformations from multiplication operations to convolution operations, so that the network can both establish temporal relations and describe spatial features, effectively solving the problem of spatial information loss during time-series transmission.
Each CGRU unit receives the temporal and spatial outputs of the 3DCNN network, and its structure is computed as follows:
Z_t = σ(W_xz * X_t + W_hz * H_{t−1})
R_t = σ(W_xr * X_t + W_hr * H_{t−1})
H'_t = f(W_xh * X_t + W_hh * (R_t ∘ H_{t−1}))
H_t = (1 − Z_t) ∘ H_{t−1} + Z_t ∘ H'_t
In the formulas, Z_t represents the update gate in the CGRU structure; R_t represents the reset gate; X_t represents the radar echo map input at time t; H_t represents the hidden-layer output at time t; H_{t−1} represents the hidden-layer output at time t−1; W_xz represents the weight parameters from the input to the update gate; W_hz the weights from the hidden layer to the update gate; W_xr the weights from the input to the reset gate; W_hr the weights from the hidden layer to the reset gate; H'_t represents the candidate memory content of the hidden layer at time t; f represents a nonlinear activation function; W_xh represents the weights from the input to the hidden layer; W_hh the hidden-to-hidden weights; * denotes the convolution operation; the gates control how each unit screens the radar space-time information; ∘ is the Hadamard product, i.e., element-wise multiplication of matrices; the σ nonlinear activation function is the Sigmoid, s(x) = (1 + e^(−x))^(−1), which keeps the gate values in the model within the range [0, 1];
Furthermore, the invention applies a batch normalization (BN) method and uses the ReLU nonlinear activation function in place of the traditional Sigmoid, speeding up network convergence and alleviating overfitting; this markedly strengthens the model's space-time feature learning, gives it stronger feature-expression capability for multi-frame radar echo maps, and improves prediction accuracy.
Further, the data of the radar echo images in the training and prediction process are all constructed as a three-dimensional tensor X ∈ R^(T×W×H);
wherein R represents the set of real numbers; T is the time dimension; W and H are the row and column spatial dimensions, respectively; each individual echo image is placed on a spatial grid and extended along the multi-frame time dimension, and the continuous images are stacked in order to form the three-dimensional structure.
Claims (7)
1. A CGRU-based radar echo nowcasting method with strong space-time characteristics, characterized by comprising the following steps:
(1) acquiring a continuous radar echo image sequence for weather proximity prediction, and preprocessing the continuous radar echo image sequence to obtain tensor data with uniform time dimension and space dimension;
wherein the tensor data is a three-dimensional tensor X ∈ R^(T×W×H); in the formula, R represents the set of real numbers; T is the time dimension; W and H are the row and column spatial dimensions, respectively;
the sequence of successive radar echo images is represented by Y(t) = {y_1, y_2, ..., y_N}, t = 1, 2, ..., N; wherein t represents time and N represents the length of the radar echo image sequence;
(2) constructing and training a 3DCNN-CGRU network training model to obtain a 3DCNN-CGRU network model;
(3) inputting the tensor data of the continuous radar echo image sequence for weather nowcasting from step (1) into the 3DCNN-CGRU network model to generate a weather nowcasting result.
2. The method as claimed in claim 1, wherein the 3DCNN-CGRU network model is composed of a coding network and a prediction network;
the coding network consists of a 3DCNN network and a three-layer CGRU network and is used for extracting echo image space-time characteristic information of the continuous radar echo image sequence;
the 3DCNN is used for extracting local short space-time motion characteristics of a continuous radar echo image sequence; the three layers of the CGRU networks are used for learning the global long-time space characteristic dependency relationship of the continuous radar echo image sequence and compressing the space-time characteristics of the radar echo motion obtained by learning into a hidden state;
the prediction network consists of three layers of CGRU networks and 3DCNN networks; and the prediction network takes the output of the encoder as input, reversely reconstructs the image according to the characteristic information of the current echo image, generates a future echo image sequence and further obtains a weather forecast result.
3. The method as claimed in claim 2, wherein the 3DCNN network is calculated as follows:

v_ij^(T,W,H) = f( b_ij + Σ_m Σ_{p=0..Pi−1} Σ_{q=0..Qi−1} Σ_{r=0..Ri−1} w_ijm^(p,q,r) · v_(i−1)m^(W+p, H+q, T+r) )

where v_ij^(T,W,H) denotes the output at unit (T, W, H) of the j-th radar echo feature map in the i-th layer of the 3DCNN; T is the time dimension; W and H are the row and column spatial dimensions, respectively; f is a nonlinear activation function; b_ij is the bias of the j-th radar echo feature map of the i-th layer; w_ijm^(p,q,r) is the weight of the convolution kernel connected to the m-th feature map of layer (i−1); p, q and r are the convolution offsets from unit (T, W, H); v_(i−1)m^(W+p, H+q, T+r) is the output at unit (W+p, H+q, T+r) of the m-th radar echo feature map in layer (i−1); and Pi, Qi and Ri are the sizes of the convolution kernel along the three dimensions.
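As a hedged illustration of the claim-3 formula (not code from the patent), the following naive NumPy routine evaluates one output feature map of a 3D convolution; the valid-convolution boundary handling and the tanh activation are assumptions made for the sketch:

```python
import numpy as np

def conv3d_unit(prev_maps, weights, bias, f=np.tanh):
    """Naive evaluation of the claim-3 formula for one output feature map j.
    prev_maps: (M, T, W, H) feature maps of layer i-1
    weights:   (M, R, P, Q) kernel (R, P, Q = temporal/row/column sizes)
    bias:      scalar b_ij
    Returns the (T', W', H') output feature map (valid convolution)."""
    M, T, W, H = prev_maps.shape
    _, R, P, Q = weights.shape
    out = np.empty((T - R + 1, W - P + 1, H - Q + 1))
    for t in range(out.shape[0]):
        for w in range(out.shape[1]):
            for h in range(out.shape[2]):
                # Sum over feature maps m and kernel offsets (p, q, r).
                patch = prev_maps[:, t:t + R, w:w + P, h:h + Q]
                out[t, w, h] = f(bias + np.sum(weights * patch))
    return out

prev = np.random.randn(2, 5, 8, 8)   # M=2 maps, T=5, W=H=8
kern = np.random.randn(2, 3, 3, 3)
y = conv3d_unit(prev, kern, 0.1)
print(y.shape)  # (3, 6, 6)
```

A production model would use an optimized primitive (e.g. a framework's 3D convolution) rather than explicit loops; the loops are kept here to mirror the summation in the formula term by term.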
4. The method as claimed in claim 3, wherein the CGRU network structure is obtained by adapting the GRU network structure so that the state-to-state transitions are computed by convolution instead of multiplication; the CGRU computation proceeds as follows:

Zt = σ(Wxz * Xt + Whz * Ht−1)
Rt = σ(Wxr * Xt + Whr * Ht−1)
Ht' = f(Wxh * Xt + Whh * (Rt ∘ Ht−1))
Ht = (1 − Zt) ∘ Ht−1 + Zt ∘ Ht'

where Zt is the update gate of the CGRU network structure; Rt is the reset gate; Xt is the radar echo map input at time t; Ht is the hidden-layer output at time t and Ht−1 the hidden-layer output at time t−1; Wxz is the input-to-update-gate weight, Whz the hidden-layer-to-update-gate weight, Wxr the input-to-reset-gate weight, and Whr the hidden-layer-to-reset-gate weight; Ht' is the memory content of the hidden layer at time t; f is a nonlinear activation function; Wxh is the input-to-hidden-layer weight and Whh the hidden-layer-to-hidden-layer weight; the term Rt ∘ Ht−1 controls how each unit screens the radar spatio-temporal information; * denotes convolution; ∘ is the Hadamard product, i.e. element-wise multiplication of matrices; and σ is the Sigmoid activation S(x) = (1 + e^(−x))^(−1), which keeps the gate values of the model in the range [0, 1].
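A minimal single-channel sketch of one CGRU step, following the four equations above; the 3×3 kernels, single-channel feature maps, and tanh candidate activation are illustrative assumptions (a real CGRU layer uses multi-channel convolutions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_same(x, k):
    """Naive 'same'-padded 2-D convolution (single channel), for illustration."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def cgru_step(X_t, H_prev, W, f=np.tanh):
    """One CGRU step following the claim-4 equations; W is a dict of
    assumed single-channel 3x3 kernels."""
    Z = sigmoid(conv_same(X_t, W['xz']) + conv_same(H_prev, W['hz']))  # update gate
    R = sigmoid(conv_same(X_t, W['xr']) + conv_same(H_prev, W['hr']))  # reset gate
    H_cand = f(conv_same(X_t, W['xh']) + conv_same(R * H_prev, W['hh']))
    return (1.0 - Z) * H_prev + Z * H_cand  # Hadamard blend of old and new state

rng = np.random.default_rng(0)
W = {k: rng.standard_normal((3, 3)) * 0.1
     for k in ('xz', 'hz', 'xr', 'hr', 'xh', 'hh')}
H = cgru_step(rng.standard_normal((8, 8)), np.zeros((8, 8)), W)
print(H.shape)  # (8, 8)
```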
5. The method as claimed in claim 4, wherein the step (2) of constructing and training the 3DCNN-CGRU network training model to obtain the 3DCNN-CGRU network model comprises:
(2.1) acquiring a continuous historical radar echo image sequence by taking a first continuous time sequence and a second continuous time sequence as sliding windows, wherein the first continuous time sequence is temporally contiguous with the second continuous time sequence;
(2.2) preprocessing the historical radar echo image sequence to construct tensor data with uniform time dimension and space dimension; simultaneously setting tensor data of the radar echo image of each time frame in a first continuous time sequence as training data; setting tensor data of the radar echo image of each time frame in the second continuous time sequence as live data;
(2.3) establishing a 3DCNN-CGRU network training model, inputting the tensor data of the historical radar echo images into the 3DCNN-CGRU network training model, performing iterative prediction, computing the difference between the live data of the radar echo images in the continuous time sequence and the model prediction output data, and updating the 3DCNN-CGRU network weights through back-propagation until the loss function value (MSE) converges, at which point training is complete and the 3DCNN-CGRU network model is obtained.
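Steps (2.1)-(2.2) can be sketched as follows; the window lengths and stride are assumed hyperparameters, not values given in the patent:

```python
import numpy as np

def make_training_pairs(sequence, in_len=5, out_len=5, stride=1):
    """Sketch of steps (2.1)-(2.2): slide two back-to-back windows over a
    historical echo sequence; the first window supplies training input, the
    temporally contiguous second window supplies the 'live' ground truth."""
    inputs, targets = [], []
    T = len(sequence)
    for start in range(0, T - in_len - out_len + 1, stride):
        inputs.append(sequence[start:start + in_len])
        targets.append(sequence[start + in_len:start + in_len + out_len])
    return np.array(inputs), np.array(targets)

seq = np.random.rand(20, 64, 64)          # 20 historical echo frames
X_train, Y_live = make_training_pairs(seq)
print(X_train.shape, Y_live.shape)  # (11, 5, 64, 64) (11, 5, 64, 64)
```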
6. The CGRU-based radar echo nowcasting method with strong spatio-temporal characteristics according to claim 5, wherein the prediction output data obtained by training on the echo images of each time frame of the first continuous time sequence corresponds to the live data of the echo images of each time frame of the second continuous time sequence; the iterative prediction iterates over the radar echo images of each time frame of the second continuous time sequence.
7. The CGRU-based radar echo nowcasting method with strong space-time characteristics according to claim 5, wherein the loss function of the 3DCNN-CGRU network training model in step (2.3) is the pixel-level mean square error (MSE) of the radar echo images:

MSE = (1/N) Σ_{n=1..N} Σ_{i=1..A} Σ_{j=1..B} (Y_n(i, j) − Ŷ_n(i, j))²

where MSE is the loss function value; Y is the real live data; Ŷ is the model prediction output data; N is the length of the radar echo image sequence; n is the frame index; and A and B are the extents of the abscissa and ordinate of the radar echo image, respectively.
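As a sketch, the claim-7 loss under one common reading (squared pixel errors summed over each image and averaged over the N frames):

```python
import numpy as np

def mse_loss(Y, Y_hat):
    """Pixel-level MSE over an N-frame echo sequence, per claim 7:
    sum of squared pixel errors divided by the sequence length N."""
    N = Y.shape[0]
    return np.sum((Y - Y_hat) ** 2) / N

Y = np.ones((4, 8, 8))       # 4 live frames of 8x8 pixels
Y_hat = np.zeros((4, 8, 8))  # model predictions
print(mse_loss(Y, Y_hat))  # 64.0
```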
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011493039.3A CN112415521A (en) | 2020-12-17 | 2020-12-17 | CGRU-based radar echo nowcasting method with strong space-time characteristics
Publications (1)
Publication Number | Publication Date |
---|---|
CN112415521A true CN112415521A (en) | 2021-02-26 |
Family
ID=74775739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011493039.3A Pending CN112415521A (en) | CGRU-based radar echo nowcasting method with strong space-time characteristics
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112415521A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112906640A (en) * | 2021-03-19 | 2021-06-04 | 电子科技大学 | Space-time situation prediction method and device based on deep learning and readable storage medium |
CN112949934A (en) * | 2021-03-25 | 2021-06-11 | 浙江万里学院 | Short-term heavy rainfall prediction method based on deep learning |
CN113486919A (en) * | 2021-05-24 | 2021-10-08 | 浙江大学 | Regional cloud picture prediction method based on deep learning |
CN113610329A (en) * | 2021-10-08 | 2021-11-05 | 南京信息工程大学 | Short-time rainfall approaching forecasting method of double-current convolution long-short term memory network |
CN113936142A (en) * | 2021-10-13 | 2022-01-14 | 成都信息工程大学 | Rainfall approach forecasting method and device based on deep learning |
CN114460555A (en) * | 2022-04-08 | 2022-05-10 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Radar echo extrapolation method and device and storage medium |
CN115016042A (en) * | 2022-06-06 | 2022-09-06 | 湖南师范大学 | Precipitation prediction method and system based on multi-encoder fusion radar and precipitation information |
CN115792913A (en) * | 2022-05-16 | 2023-03-14 | 湖南师范大学 | Radar echo extrapolation method and system based on time-space network |
CN117808650A (en) * | 2024-02-29 | 2024-04-02 | 南京信息工程大学 | Precipitation prediction method based on Transform-Flownet and R-FPN
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108732550A (en) * | 2018-08-01 | 2018-11-02 | 北京百度网讯科技有限公司 | Method and apparatus for predicting radar return |
CN111489525A (en) * | 2020-03-30 | 2020-08-04 | 南京信息工程大学 | Multi-data fusion meteorological prediction early warning method |
CN111708030A (en) * | 2020-05-28 | 2020-09-25 | 深圳市气象局(深圳市气象台) | Disaster weather forecasting method based on energy generation antagonism predictor |
Non-Patent Citations (2)
Title |
---|
SUTING CHEN: "Strong Spatiotemporal Radar Echo Nowcasting Combining 3DCNN and Bi-Directional Convolutional LSTM" * |
CHEN Xunlai: "Research on a Nowcasting Method Based on Convolutional Gated Recurrent Unit Neural Networks" * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112415521A (en) | CGRU-based radar echo nowcasting method with strong space-time characteristics | |
CN106407889B (en) | Method for recognizing human body interaction in video based on optical flow graph deep learning model | |
CN113094357B (en) | Traffic missing data completion method based on space-time attention mechanism | |
CN112699956B (en) | Neuromorphic visual target classification method based on improved impulse neural network | |
CN109001736B (en) | Radar echo extrapolation method based on deep space-time prediction neural network | |
CN110728698B (en) | Multi-target tracking system based on composite cyclic neural network system | |
CN112183886B (en) | Short-time adjacent rainfall prediction method based on convolution network and attention mechanism | |
CN110097028B (en) | Crowd abnormal event detection method based on three-dimensional pyramid image generation network | |
CN112949828A (en) | Graph convolution neural network traffic prediction method and system based on graph learning | |
CN109829495A (en) | Timing image prediction method based on LSTM and DCGAN | |
CN117665825B (en) | Radar echo extrapolation prediction method, system and storage medium | |
CN115310724A (en) | Precipitation prediction method based on Unet and DCN _ LSTM | |
CN114943365A (en) | Rainfall estimation model establishing method fusing multi-source data and rainfall estimation method | |
CN113988357B (en) | Advanced learning-based high-rise building wind induced response prediction method and device | |
CN112365091A (en) | Radar quantitative precipitation estimation method based on classification node map attention network | |
CN116148796A (en) | Strong convection weather proximity forecasting method based on radar image extrapolation | |
CN111708030A (en) | Disaster weather forecasting method based on energy generation antagonism predictor | |
CN115792853A (en) | Radar echo extrapolation method based on dynamic weight loss | |
CN116592883A (en) | Navigation decision method based on attention and cyclic PPO | |
CN115902806A (en) | Multi-mode-based radar echo extrapolation method | |
CN117131991A (en) | Urban rainfall prediction method and platform based on hybrid neural network | |
CN113341419B (en) | Weather extrapolation method and system based on VAN-ConvLSTM | |
CN116822592A (en) | Target tracking method based on event data and impulse neural network | |
CN116844041A (en) | Cultivated land extraction method based on bidirectional convolution time self-attention mechanism | |
CN116148864A (en) | Radar echo extrapolation method based on DyConvGRU and Unet prediction refinement structure |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210226 |