CN117665825B - Radar echo extrapolation prediction method, system and storage medium - Google Patents

Radar echo extrapolation prediction method, system and storage medium

Info

Publication number
CN117665825B
CN117665825B (application CN202410131969.6A)
Authority
CN
China
Prior art keywords
prediction
branch
time
space
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410131969.6A
Other languages
Chinese (zh)
Other versions
CN117665825A (en)
Inventor
程勇
渠海峰
钱坤
王军
杨玲
刘敏
许小龙
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202410131969.6A
Publication of CN117665825A
Application granted
Publication of CN117665825B
Status: Active
Anticipated expiration


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a radar echo extrapolation prediction method and system in the technical field of short-term precipitation forecasting. The method comprises: acquiring a radar echo image sequence to be predicted; preprocessing the sequence to obtain a data set to be predicted; and inputting the data set into a pre-trained prediction model that combines a dual-branch encoder-decoder with a gated recurrent network to obtain a radar echo extrapolation prediction image. The prediction model is trained by preprocessing acquired radar echo image sequence samples into an effective sample data set and feeding that set into the pre-constructed model combining the dual-branch encoder-decoder with the gated recurrent network. This process yields radar echo extrapolation predictions of higher accuracy and greatly improves precipitation forecasting precision.

Description

Radar echo extrapolation prediction method, system and storage medium
Technical Field
The invention relates to a radar echo extrapolation prediction method, system and storage medium, and belongs to the technical field of short-term precipitation forecasting.
Background
Strong convective weather refers to intense convective motion in the atmosphere, producing phenomena such as thunderstorms, tornadoes and rainstorms. These phenomena can bring heavy rainfall to local areas within a short time, causing huge losses of life, property and economic activity. Precipitation nowcasting is a weather-forecasting task that predicts rainfall intensity and extent for a local area over a short period (usually a few hours) in a timely and accurate manner. Accurate nowcasting is indispensable in flood prevention, agriculture, aviation, travel planning and other fields.
Traditional radar echo extrapolation methods mainly comprise the centroid tracking method, the cross-correlation method and the optical flow method. The single-cell centroid method tracks the centroid path of identified three-dimensional thunderstorm cells; it suits strong isolated thunderstorms but performs poorly for nowcasting of broader convective weather or stratiform-cloud rainfall. The cross-correlation method assumes that echo evolution is linear and tracks echo regions by the optimal correlation coefficient between adjacent-time regions, so in practice it struggles to estimate the nonlinear evolution of radar echoes. The optical flow method is a two-step procedure: an optical flow field is first computed from consecutive radar images, and the near-term precipitation field is then extrapolated from that flow field; this two-stage extrapolation accumulates errors. Radar echo data form a series of images with high spatio-temporal dimensionality, no obvious periodicity, and no fixed laws of motion speed or shape change. All three traditional methods are therefore limited: they cannot fully exploit the large volume of historical observations and cannot deliver satisfactory forecasts, especially in the accuracy of predicting high-resolution radar echo images.
Disclosure of Invention
The invention provides a radar echo extrapolation prediction method, system and storage medium. A prediction model combining a dual-branch encoder-decoder with a gated recurrent network is constructed and trained on historically observed radar data; an image sequence to be predicted is then fed into the trained model to obtain a radar echo extrapolation prediction image of higher accuracy, thereby greatly improving precipitation forecasting precision.
To achieve the above purpose, the invention adopts the following technical scheme.
In a first aspect, the invention provides a radar echo extrapolation prediction method, comprising:
acquiring a radar echo image sequence to be predicted;
preprocessing the radar echo image sequence to obtain a data set to be predicted;
inputting the data set to be predicted into a pre-trained prediction model combining a dual-branch encoder-decoder with a gated recurrent network to obtain a radar echo extrapolation prediction image;
wherein the prediction model is trained by:
preprocessing acquired radar echo image sequence samples to obtain an effective sample data set; and
inputting the effective sample data set into the pre-constructed prediction model to obtain a trained prediction model combining the dual-branch encoder-decoder with the gated recurrent network.
Optionally, the pre-constructed prediction model comprises a spatio-temporal dual-branch codec structure and a prediction network. The codec structure comprises a spatio-temporal dual-branch encoder and a spatio-temporal dual-branch decoder; the prediction network comprises several layers of sequentially connected prediction units, each being a gated recurrent neural unit based on an attention mechanism.
Optionally, inputting the data set to be predicted into the pre-trained prediction model to obtain a radar echo extrapolation prediction image comprises:
sequentially reading image sequences with a batch size of 4 from the data set to be predicted as the input X_t of the prediction model at the current time t;
extracting features from the input X_t with the spatio-temporal dual-branch encoder to obtain the encoder outputs T_t and S_t, calculated as:
T_t = E_T(X_t), S_t = E_S(X_t),
where T_t denotes temporally encoded information, S_t spatially encoded information, and E_T and E_S the temporal and spatial encoders, respectively, which extract deep features from the input;
sending the encoder outputs T_t and S_t into the prediction network to obtain its outputs T̂_t and Ŝ_t, where T̂_t denotes temporal prediction information and Ŝ_t spatial prediction information;
sending the prediction-network outputs T̂_t and Ŝ_t into the spatio-temporal dual-branch decoder to obtain the decoder outputs T_t^dec and S_t^dec, where T_t^dec denotes temporally decoded information and S_t^dec spatially decoded information;
fusing the decoder outputs T_t^dec and S_t^dec to obtain the prediction result X′_{t+1} of the image sequence to be predicted.
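The end-to-end dataflow just described can be summarized in a short sketch. The following PyTorch code is illustrative only: the module names (enc_t, pred_net, etc.) and constructor signature are assumptions rather than names from the patent, and the sub-modules are passed in rather than defined here.

```python
import torch
import torch.nn as nn

class DualBranchPredictor(nn.Module):
    """Sketch of the dataflow: dual-branch encoding, attention-gated recurrent
    prediction, dual-branch decoding, then 1x1-convolution fusion."""
    def __init__(self, enc_t, enc_s, pred_net, dec_t, dec_s,
                 dec_channels=64, out_channels=1):
        super().__init__()
        self.enc_t, self.enc_s = enc_t, enc_s        # temporal / spatial encoders E_T, E_S
        self.pred_net = pred_net                     # stacked attention-gated recurrent units
        self.dec_t, self.dec_s = dec_t, dec_s        # temporal / spatial decoders D_T, D_S
        # 1x1 convolution that fuses the two branches and restores the channel count
        self.fuse = nn.Conv2d(2 * dec_channels, out_channels, kernel_size=1)

    def forward(self, x_t):
        t_enc, s_enc = self.enc_t(x_t), self.enc_s(x_t)      # T_t, S_t
        t_hat, s_hat = self.pred_net(t_enc, s_enc)           # temporal/spatial prediction info
        t_dec, s_dec = self.dec_t(t_hat), self.dec_s(s_hat)  # decoded feature maps
        return self.fuse(torch.cat([t_dec, s_dec], dim=1))   # prediction X'_{t+1}
```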
Optionally, sending the encoder outputs T_t and S_t into the prediction network to obtain the outputs T̂_t and Ŝ_t comprises:
based on the attention mechanism of each layer of prediction units in the prediction network, obtaining the enhanced time state T_att from S_t^{l−1}, T_{t−1}^l, the set {T_{t−j}^l, j = 1…τ} and the set {S_{t−j}^{l−1}, j = 1…τ}, where S_t^{l−1} is the spatial state output by the (l−1)-th layer at the current time t, T_{t−1}^l is the time state output by the l-th layer at the previous time t−1, {T_{t−j}^l} is the set of time states output by the l-th layer over the previous τ moments, {S_{t−j}^{l−1}} is the set of spatial states output by the (l−1)-th layer over the previous τ moments, and T_att is an enhanced time state carrying information from multiple time steps;
obtaining the time state T_t^l and spatial state S_t^l output by the prediction unit at the current time from S_t^{l−1} and T_att;
obtaining the time state and spatial state output by the last-layer prediction unit from the per-layer states T_t^l and S_t^l.
Optionally, obtaining T_att from S_t^{l−1}, T_{t−1}^l, {T_{t−j}^l} and {S_{t−j}^{l−1}} based on the attention mechanism of each layer of prediction units comprises:
computing the dot product of S_t^{l−1} with each element of {S_{t−j}^{l−1}} and applying the softmax function to obtain the attention scores α_j:
α_j = softmax_j(S_t^{l−1} · S_{t−j}^{l−1}), j = 1, …, τ;
multiplying the attention scores with the corresponding elements of {T_{t−j}^l} and summing (additive fusion) to obtain the long-term motion information T_long:
T_long = Σ_{j=1}^{τ} α_j · T_{t−j}^l,
where T_{t−j}^l is the time state output by the l-th layer prediction unit j moments earlier;
constructing the fusion gate U_f from T_{t−1}^l:
U_f = σ(W_uf ∗ T_{t−1}^l),
where ∗ denotes two-dimensional convolution, W_uf is the convolution kernel acting on T_{t−1}^l, and σ is the Sigmoid activation function;
obtaining the enhanced time state T_att with multiple time-step information from T_{t−1}^l, the fusion gate U_f and the long-term motion information T_long:
T_att = U_f ⊙ T_long + (1 − U_f) ⊙ T_{t−1}^l,
where ⊙ denotes the Hadamard product of matrices.
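This attention step can be sketched compactly under the tensor layout assumed here: (B, C, H, W) states, with the τ-step histories stacked along a leading dimension. The scaling of the dot product and the helper names are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def enhanced_time_state(s_cur, t_prev, s_hist, t_hist, conv_uf):
    """s_cur: S_t^{l-1} (B, C, H, W); t_prev: T_{t-1}^l (B, C, H, W);
    s_hist, t_hist: previous tau spatial/time states, shape (tau, B, C, H, W);
    conv_uf: nn.Conv2d building the fusion gate from t_prev (assumed wiring)."""
    B, C, H, W = s_cur.shape
    # attention: dot product of the current spatial state with each historical
    # spatial state, normalised over the tau steps by softmax (scaling assumed)
    scores = torch.einsum('bchw,jbchw->jb', s_cur, s_hist) / (C * H * W)
    alpha = F.softmax(scores, dim=0)                        # (tau, B)
    # long-term motion information: weighted sum of historical time states
    t_long = (alpha.view(-1, B, 1, 1, 1) * t_hist).sum(0)   # (B, C, H, W)
    u_f = torch.sigmoid(conv_uf(t_prev))                    # fusion gate U_f
    return u_f * t_long + (1.0 - u_f) * t_prev              # enhanced state T_att
```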
Optionally, obtaining the time state T_t^l and spatial state S_t^l output by the prediction unit at the current time from S_t^{l−1} and T_att comprises:
obtaining the update gate U_t and reset gate R_t from S_t^{l−1} and T_att:
U_t = σ(W_su ∗ S_t^{l−1} + W_tu ∗ T_att),
R_t = σ(W_sr ∗ S_t^{l−1} + W_tr ∗ T_att),
where W_su and W_sr are the update-gate and reset-gate convolution kernels acting on the current spatial state S_t^{l−1}, and W_tu and W_tr are the update-gate and reset-gate convolution kernels acting on the enhanced time state T_att;
obtaining the candidate temporal trend information T′_t and candidate spatial trend information S′_t from S_t^{l−1}, T_att, the update gate U_t and the reset gate R_t:
T′_t = tanh(W_st′ ∗ S_t^{l−1} + W_tt′ ∗ (R_t ⊙ T_att)),
S′_t = tanh(W_ss′ ∗ S_t^{l−1} + W_ts′ ∗ (R_t ⊙ T_att)),
where tanh denotes the hyperbolic tangent activation function, W_tt′ and W_ts′ are the convolution kernels of T_att used to generate T′_t and S′_t, and W_st′ and W_ss′ are the convolution kernels of S_t^{l−1} used to generate T′_t and S′_t;
obtaining the current time state T_t^l and spatial state S_t^l of the prediction network from the candidates and the update gate:
T_t^l = U_t ⊙ T_att + (1 − U_t) ⊙ T′_t,
S_t^l = U_t ⊙ S_t^{l−1} + (1 − U_t) ⊙ S′_t.
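The gate equations above admit the following sketch of a single prediction unit. It is one plausible GRU-style reading: the placement of the reset gate inside the candidates and the pairing of gates in the final states are assumptions, since the patent's formula images are not reproduced here.

```python
import torch
import torch.nn as nn

class STAGRUCellSketch(nn.Module):
    """One attention-gated recurrent prediction unit (gate wiring assumed)."""
    def __init__(self, ch=64, k=5):
        super().__init__()
        conv = lambda: nn.Conv2d(ch, ch, k, padding=k // 2)
        self.w_su, self.w_tu = conv(), conv()  # S / T_att -> update gate
        self.w_sr, self.w_tr = conv(), conv()  # S / T_att -> reset gate
        self.w_st, self.w_tt = conv(), conv()  # S / T_att -> candidate T'
        self.w_ss, self.w_ts = conv(), conv()  # S / T_att -> candidate S'

    def forward(self, s_cur, t_att):
        u = torch.sigmoid(self.w_su(s_cur) + self.w_tu(t_att))      # update gate U_t
        r = torch.sigmoid(self.w_sr(s_cur) + self.w_tr(t_att))      # reset gate R_t
        t_cand = torch.tanh(self.w_st(s_cur) + self.w_tt(r * t_att))
        s_cand = torch.tanh(self.w_ss(s_cur) + self.w_ts(r * t_att))
        t_new = u * t_att + (1 - u) * t_cand                        # T_t^l
        s_new = u * s_cur + (1 - u) * s_cand                        # S_t^l
        return t_new, s_new
```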
Optionally, the spatio-temporal dual-branch codec structure further comprises a multi-scale channel attention module, which extracts detail features from each layer of encoded information in the encoder and fuses them with the corresponding layer of decoded information in the decoder.
Optionally, sending the prediction-network outputs T̂_t and Ŝ_t into the spatio-temporal dual-branch decoder to obtain the decoder outputs T_t^dec and S_t^dec comprises:
extracting the global attention G and local attention L of T_t and S_t with the multi-scale channel attention module:
L(X) = PWConv_2(δ(PWConv_1(X))),
G(X) = PWConv_2(δ(PWConv_1(GAP(X)))),
where L(·) denotes local attention, G(·) global attention, PWConv a 1×1 point-wise convolution with PWConv_1 the first and PWConv_2 the second point-wise convolution layer, δ the ReLU activation function, and GAP the global average pooling operation;
obtaining the module outputs MSCAM(T_t) and MSCAM(S_t) from the global attention G and local attention L of T_t and S_t:
MSCAM(X) = X ⊙ σ(L(X) ⊕ G(X)),
where ⊕ denotes broadcast addition and σ the Sigmoid activation function;
obtaining the decoder outputs T_t^dec and S_t^dec from MSCAM(T_t), MSCAM(S_t) and the prediction-network outputs T̂_t and Ŝ_t:
T_t^dec = D_T(T̂_t, MSCAM(T_t)), S_t^dec = D_S(Ŝ_t, MSCAM(S_t)),
where D_T and D_S denote the temporal and spatial decoders, respectively, which map prediction features back to frames.
Optionally, fusing the decoder outputs T_t^dec and S_t^dec to obtain the prediction result X′_{t+1} of the image sequence to be predicted comprises: the spatio-temporal dual-branch codec structure concatenates T_t^dec and S_t^dec along the channel dimension and applies a 1×1 convolution to output the prediction result:
X′_{t+1} = W_{1×1} ∗ Concat(T_t^dec, S_t^dec),
where W_{1×1} denotes the 1×1 convolution kernel used to restore the channel count to its initial value and Concat denotes channel concatenation.
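The channel-attention formulas above follow the common MS-CAM layout, which can be sketched as below; the channel-reduction ratio r is an assumption (the patent does not state one).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSCAM(nn.Module):
    """Multi-scale channel attention: a local branch (two point-wise convolutions)
    plus a global branch on globally average-pooled features; the input is
    re-weighted by sigmoid(L(X) + G(X))."""
    def __init__(self, ch=64, r=4):
        super().__init__()
        mid = max(ch // r, 1)
        def branch():
            return nn.Sequential(
                nn.Conv2d(ch, mid, 1), nn.ReLU(inplace=True),  # PWConv_1 + ReLU
                nn.Conv2d(mid, ch, 1))                         # PWConv_2
        self.local, self.glob = branch(), branch()

    def forward(self, x):
        att_l = self.local(x)                           # local attention L(X)
        att_g = self.glob(F.adaptive_avg_pool2d(x, 1))  # global attention G(GAP(X))
        return x * torch.sigmoid(att_l + att_g)         # broadcast add, then re-weight
```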
Optionally, preprocessing the radar echo image sequence comprises interpolating and normalizing the sequence and removing invalid radar data to obtain an effective radar data set.
Optionally, inputting the effective sample data set into the pre-constructed prediction model to obtain a trained prediction model comprises:
initializing the parameters of the pre-constructed prediction model to obtain an initialized prediction model;
partitioning the effective sample data set into a training sample set and a test sample set;
feeding the training sample set into the initialized prediction model to update its parameters, obtaining a prediction model after each round of learning, and feeding the test sample set into the learned model at certain rounds to measure its prediction performance at those rounds;
evaluating and optimizing the prediction model at those rounds based on the convergence condition and the prediction performance, yielding the trained prediction model combining the dual-branch encoder-decoder with the gated recurrent network.
In a second aspect, the invention provides a radar echo extrapolation prediction system, comprising:
a data acquisition module for acquiring a radar echo image sequence to be predicted;
a preprocessing module for preprocessing the radar echo image sequence to obtain a data set to be predicted; and
a prediction module for inputting the data set to be predicted into a pre-trained prediction model combining a dual-branch encoder-decoder with a gated recurrent network to obtain a radar echo extrapolation prediction image;
wherein the prediction model is trained by:
preprocessing acquired radar echo image sequence samples to obtain an effective sample data set; and
inputting the effective sample data set into the pre-constructed prediction model to obtain the trained prediction model.
In a third aspect, the invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the radar echo extrapolation prediction method of any one of the first aspect.
Compared with the prior art, the invention is beneficial in that: the dual-branch codec structure encodes radar echo images independently in the temporal and spatial domains, avoiding interference between spatio-temporal information; a multi-scale channel attention module (MSCAM) learns the global and local feature information of each encoder layer, strengthening attention to radar image details; and the attention-based spatio-temporal gated recurrent unit (STAGRU) handles the temporal evolution and spatial relationships in radar data, extracting spatio-temporal information from a wider receptive field. Together these achieve more accurate radar echo extrapolation and greatly improve precipitation forecasting precision.
Drawings
FIG. 1 is a flow chart of a method for radar echo extrapolation prediction in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram showing the overall structure of a prediction model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a multi-scale channel attention module workflow in accordance with one embodiment of the invention;
FIG. 4 is a flow diagram of obtaining the enhanced time state T_att with multiple time-step information in one embodiment of the invention;
FIG. 5 is a flow diagram of a prediction unit obtaining its output time state T_t^l and spatial state S_t^l in one embodiment of the invention;
FIG. 6 is a schematic diagram of a prediction network in a prediction model according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Example 1
As shown in FIG. 1, this embodiment provides a radar echo extrapolation prediction method, comprising:
acquiring a radar echo image sequence to be predicted;
preprocessing the radar echo image sequence to obtain a data set to be predicted;
inputting the data set to be predicted into a pre-trained prediction model combining a dual-branch encoder-decoder with a gated recurrent network to obtain a radar echo extrapolation prediction image;
wherein the prediction model is trained by:
preprocessing acquired radar echo image sequence samples to obtain an effective sample data set; and
inputting the effective sample data set into the pre-constructed prediction model to obtain a trained prediction model.
By constructing a prediction model that combines a dual-branch encoder-decoder with a gated recurrent network, training it on historically observed radar data, and then feeding it the radar data to be predicted, radar echo extrapolation of higher accuracy is achieved and precipitation forecasting precision is greatly improved.
Example 2
On the basis of Embodiment 1, this embodiment makes the following further design.
The prediction model is constructed and trained as follows.
1. Acquire a historically observed radar echo image sequence and preprocess it (e.g., interpolation and normalization), removing invalid data with no or negligible rainfall.
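A minimal preprocessing sketch under assumed conventions (reflectivity frames in dBZ, NaN gaps, an echo-coverage threshold standing in for "little rainfall"); all thresholds and helper names here are illustrative, not values from the patent.

```python
import numpy as np

def preprocess(frames, max_dbz=70.0, min_echo_ratio=0.05):
    """Interpolate gaps, min-max normalise to [0, 1], and drop invalid frames."""
    kept = []
    for f in frames:
        f = np.nan_to_num(f, nan=0.0)           # crude gap fill; real data may need 2-D interpolation
        f = np.clip(f, 0.0, max_dbz) / max_dbz  # normalise reflectivity
        if (f > 0).mean() >= min_echo_ratio:    # discard frames with no/little echo
            kept.append(f.astype(np.float32))
    return kept
```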
2. Further divide the effective data set: the total sequence length is set to 20, i.e., every 20 consecutive frames form one sequence, with the first 10 frames as the input sequence and the last 10 as the reference (target) sequence. All sequences of each month in the data set are randomly split 3:1 into a training-sequence subset and a test-sequence subset, and the monthly subsets are merged into the training sample set train_data and the test sample set test_data, as shown in the sketch below.
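The sequence construction and 3:1 split can be expressed directly; this sketch assumes one month's frames arrive as a chronological list (function names are illustrative).

```python
import random

def make_sequences(frames, total_len=20, in_len=10):
    """Every 20 consecutive frames form one sequence: first 10 input, last 10 target."""
    seqs = [frames[i:i + total_len]
            for i in range(0, len(frames) - total_len + 1, total_len)]
    return [(s[:in_len], s[in_len:]) for s in seqs]

def split_month(seqs, seed=0):
    """Random 3:1 split of one month's sequences into train/test subsets."""
    rng = random.Random(seed)
    idx = list(range(len(seqs)))
    rng.shuffle(idx)
    cut = len(idx) * 3 // 4
    return [seqs[i] for i in idx[:cut]], [seqs[i] for i in idx[cut:]]
```

Applying split_month to each month and concatenating the per-month subsets yields train_data and test_data.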
3. As shown in FIG. 2, the overall architecture of the prediction model comprises a spatio-temporal dual-branch codec structure and a prediction network. The codec structure comprises a spatio-temporal dual-branch encoder, a multi-scale channel attention module and a spatio-temporal dual-branch decoder; the prediction network comprises several sequentially connected STAGRU prediction units, each a spatio-temporal gated recurrent neural unit based on an attention mechanism.
In the prediction model, the spatio-temporal dual-branch codec structure encodes radar echo images independently in the temporal and spatial domains, avoiding interference between spatio-temporal information. The multi-scale channel attention module (MSCAM) learns global and local feature information of each encoder layer, strengthening attention to radar image details. In addition, the attention-based spatio-temporal gated recurrent unit (STAGRU) handles temporal evolution and spatial relationships in the radar data, extracting spatio-temporal information from a wider receptive field. More accurate radar echo extrapolation is thus achieved, greatly improving precipitation forecasting precision.
4. First initialize the training parameters of the prediction model, specifically the height, width and channel count of the input image, the convolution kernel size filter_size, the convolution stride, the number of stacked prediction-unit layers num_layers, the number of convolution kernels num_hidden, the number of samples per training step batch_size, the maximum number of training rounds max_epoch, the learning rate λ, and the input and output sequence lengths.
Specifically, the settings may be: input image height = 256, width = 256, channel = 1; prediction-unit stacking layers num_layers = 4; convolution kernel size filter_size = 5 with stride = 1; number of convolution kernels num_hidden = 64; learning rate λ = 0.001; input sequence length input_length = 10; extrapolation sequence length output_length = 10; samples per training step batch_size = 4; and maximum training rounds max_epoch = 80.
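Collected into one configuration object, the stated settings read as follows (the dictionary itself is only an illustrative container):

```python
config = dict(
    height=256, width=256, channel=1,    # input image geometry
    num_layers=4,                        # stacked STAGRU prediction units
    filter_size=5, stride=1,             # prediction-unit convolutions
    num_hidden=64,                       # convolution kernels per unit
    lr=0.001,                            # learning rate lambda
    input_length=10, output_length=10,   # 10 frames in, 10 frames extrapolated
    batch_size=4, max_epoch=80,
)
```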
5. When constructing the prediction model, the spatio-temporal dual-branch codec structure is built first; it mainly comprises a temporal encoder (TE), a spatial encoder (SE), a temporal decoder (TD), a spatial decoder (SD) and a multi-scale channel attention module (MSCAM).
Spatio-temporal dual-branch encoder: as shown in the left half of FIG. 2, it is divided into a temporal encoder and a spatial encoder. Each encoder consists of 3 consecutive modules, each mainly comprising a 3×3 convolutional layer and a LeakyReLU activation layer, with downsampling performed by the strided convolution. The convolution-layer parameters of each module are: layer 1 — input channels 1, output channels 64, kernel 3, stride 2, padding 1; layer 2 — input channels 64, output channels 64, kernel 3, stride 2, padding 1; layer 3 — input channels 64, output channels 64, kernel 3, stride 2, padding 1. As the network deepens, the spatial dimension of the original input is halved after each encoding module, capturing feature information of the feature map at different scales.
Spatio-temporal dual-branch decoder: as shown in the right half of FIG. 2, it is divided into a temporal decoder and a spatial decoder. Starting from the output of the STAGRU units, it contains 3 consecutive modules, each mainly comprising a 3×3 deconvolution layer and a LeakyReLU activation layer. The deconvolution-layer parameters of each module are: layer 1 — input channels 64, output channels 64, kernel 3, stride 2, padding 1, output_padding 1; layer 2 — input channels 64, output channels 64, kernel 3, stride 2, padding 1, output_padding 1; layer 3 — input channels 64, output channels 64, kernel 3, stride 2, padding 1, output_padding 1. Upsampling is performed by deconvolution. Finally, the temporal-decoder and spatial-decoder information is fused and a 1×1 convolution restores the channel count to output an image.
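Both branches can be assembled directly from the layer tables above; this sketch assumes a LeakyReLU negative slope of 0.2, which the patent does not state.

```python
import torch.nn as nn

def make_encoder(in_ch=1, hid=64):
    """Three stride-2 3x3 conv blocks: 256 -> 128 -> 64 -> 32 spatially."""
    layers, c = [], in_ch
    for _ in range(3):
        layers += [nn.Conv2d(c, hid, 3, stride=2, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
        c = hid
    return nn.Sequential(*layers)

def make_decoder(hid=64):
    """Three stride-2 3x3 deconv blocks mirroring the encoder: 32 -> 64 -> 128 -> 256."""
    layers = []
    for _ in range(3):
        layers += [nn.ConvTranspose2d(hid, hid, 3, stride=2,
                                      padding=1, output_padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
    return nn.Sequential(*layers)
```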
Skip connection: the multi-scale channel attention module consists mainly of local attention and global attention; it further extracts detail features from each layer of encoded information in the spatio-temporal dual-branch encoder and fuses them with the corresponding spatio-temporal dual-branch decoder information.
Next, the 4-layer STAGRU prediction network shown in FIG. 6 is constructed according to the configured number of stacked prediction-unit layers, convolution kernel size, stride and number of kernels, and is stacked after the spatio-temporal dual-branch encoder.
Initially, the time state in the prediction model is an all-zero tensor of size (4, 64, 32, 32); the sets of time and spatial states for the previous τ moments are likewise initialized as all-zero tensors of size (τ, 4, 64, 32, 32), with τ = 5. The outputs of every layer are updated after each time step.
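The stated initialization is just zero tensors of the given shapes (variable names here are illustrative):

```python
import torch

tau, batch, ch, h, w = 5, 4, 64, 32, 32
num_layers = 4
# per-layer time states at t = 0
t_states = [torch.zeros(batch, ch, h, w) for _ in range(num_layers)]
# sliding windows of the previous tau time / spatial states, one pair per layer
t_hist = [torch.zeros(tau, batch, ch, h, w) for _ in range(num_layers)]
s_hist = [torch.zeros(tau, batch, ch, h, w) for _ in range(num_layers)]
```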
The working process of the prediction network is as follows:
1) See FIG. 4: based on the attention mechanism, the enhanced time state T_att is obtained from S_t^{l−1}, T_{t−1}^l, the set {T_{t−j}^l, j = 1…τ} and the set {S_{t−j}^{l−1}, j = 1…τ}, where S_t^{l−1} is the spatial state output by the (l−1)-th layer at the current moment, T_{t−1}^l is the time state output by the l-th layer at the previous moment, {T_{t−j}^l} is the set of time states output by the l-th layer over the previous τ moments, {S_{t−j}^{l−1}} is the set of spatial states output by the (l−1)-th layer over the previous τ moments, and T_att is an enhanced time state with multiple time-step information.
The specific process is as follows:
11) compute the dot product of S_t^{l−1} with each element of {S_{t−j}^{l−1}} and apply the softmax function to obtain the attention scores α_j;
12) multiply the scores with the corresponding elements of {T_{t−j}^l} and sum them to obtain the long-term motion information T_long;
13) construct the fusion gate U_f from T_{t−1}^l, and obtain the enhanced time state T_att from T_long, the fusion gate U_f and T_{t−1}^l.
In FIG. 4, softmax denotes the nonlinear activation function and Dot Product denotes the dot-product operation. The process of obtaining T_att can be expressed as:
α_j = softmax_j(S_t^{l−1} · S_{t−j}^{l−1}),
T_long = Σ_{j=1}^{τ} α_j · T_{t−j}^l,
U_f = σ(W_uf ∗ T_{t−1}^l),
T_att = U_f ⊙ T_long + (1 − U_f) ⊙ T_{t−1}^l,
where ∗ denotes two-dimensional convolution, ⊙ the Hadamard product of matrices, T_{t−j}^l the time state output by the l-th layer prediction unit j moments earlier, W_uf the convolution kernel acting on T_{t−1}^l, and σ the Sigmoid activation function.
2) See FIG. 5: the time state T_t^l and spatial state S_t^l output by the prediction unit at the current moment are obtained from S_t^{l−1} and T_att; the time and spatial states output by the last-layer prediction unit are the inputs to the decoder.
The specific process is as follows:
21) the current spatial state S_t^{l−1} and the attention time state T_att with multiple time-step information pass through the update gate U_t and reset gate R_t, formulated as:
U_t = σ(W_su ∗ S_t^{l−1} + W_tu ∗ T_att),
R_t = σ(W_sr ∗ S_t^{l−1} + W_tr ∗ T_att),
where σ is the Sigmoid activation function, W_su and W_sr are the update-gate and reset-gate convolution kernels acting on S_t^{l−1}, and W_tu and W_tr are the update-gate and reset-gate convolution kernels acting on T_att.
22) the time state T_t^l and spatial state S_t^l of the prediction network at the current moment are obtained as:
T′_t = tanh(W_st′ ∗ S_t^{l−1} + W_tt′ ∗ (R_t ⊙ T_att)),
S′_t = tanh(W_ss′ ∗ S_t^{l−1} + W_ts′ ∗ (R_t ⊙ T_att)),
T_t^l = U_t ⊙ T_att + (1 − U_t) ⊙ T′_t,
S_t^l = U_t ⊙ S_t^{l−1} + (1 − U_t) ⊙ S′_t,
where tanh denotes the hyperbolic tangent activation function, T′_t the candidate temporal trend information, S′_t the candidate spatial trend information, W_tt′ and W_ts′ the convolution kernels of T_att used to generate T′_t and S′_t, and W_st′ and W_ss′ the convolution kernels of S_t^{l−1} used to generate T′_t and S′_t.
The training process of the prediction model is as follows:
S1) Training-sample reading: at each training step, batch_size = 4 sequence samples are read from the training sample set train_data as the prediction-model input X_t.
S2) Assume an input X_t at a certain moment of size (4, 1, 256, 256). X_t is fed into the encoders to extract the deep features of the samples; after the three feature-extraction blocks of the temporal and spatial encoders, the outputs T_t and S_t of size (4, 64, 32, 32) are obtained:
T_t = E_T(X_t), S_t = E_S(X_t),
where E_T and E_S are the temporal and spatial encoders used to extract deep features from the input.
S3) The time state T_t^L and spatial state S_t^L output by the last layer of the prediction network are obtained through processes 1) to 2) above, and T_t^L and S_t^L are fed into the temporal and spatial decoders. To help the decoder better recall the detail information in the encoder, the multi-scale channel attention module MSCAM is used in the dual-branch codec structure, as shown in FIG. 3:
L(X) = PWConv_2(δ(PWConv_1(X))),
G(X) = PWConv_2(δ(PWConv_1(GAP(X)))),
MSCAM(X) = X ⊙ σ(L(X) ⊕ G(X)),
where L(·) denotes local attention, G(·) global attention, PWConv a 1×1 point-wise convolution with PWConv_1 the first and PWConv_2 the second point-wise convolution layer, δ the ReLU activation function, and GAP the global average pooling operation.
The decoded time and spatial states are then fused by a 1×1 convolution to output the predicted image X′_{t+1} of the next moment, of size (4, 1, 256, 256), completing the radar echo extrapolation from input X_t to X′_{t+1}:
T_t^dec = D_T(T_t^L, MSCAM(T_t)), S_t^dec = D_S(S_t^L, MSCAM(S_t)),
X′_{t+1} = W_{1×1} ∗ Concat(T_t^dec, S_t^dec),
where D_T and D_S denote the temporal and spatial decoders that map prediction features to frames, W_{1×1} the 1×1 convolution kernel used to restore the channel count to its initial value, and Concat channel concatenation.
S4) When t ≥ 10, the predicted image X′_t obtained at the previous moment is used as the prediction-model input, and steps S2) to S3) are repeated until t = 19, yielding in turn the predicted image sequence {X′_11, …, X′_20} for future moments and completing the radar echo sequence extrapolation.
S5) Loss-function calculation: the mean squared error between the predicted sequence {X′_11, …, X′_20} obtained by the forward propagation of steps S2) to S4) and the extrapolation reference sequence {X_11, …, X_20} is taken as the loss function; the network-parameter gradients are computed from the loss value and the parameters are updated, completing back-propagation.
S6) One pass over all data in the training set constitutes one round. Steps S2) to S5) are executed repeatedly; while the model is trained with train_data, the test sample set test_data is fed into the multi-round-trained model at preset rounds to evaluate its performance, until the maximum number of training rounds is reached or the convergence condition is met, completing the training of the prediction model.
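Steps S1) to S5) amount to the following training-loop sketch. Here model is assumed to perform one extrapolation step per call, and reset_states() is an assumed helper that zeroes the recurrent states; neither name comes from the patent.

```python
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, in_len=10, out_len=10):
    model.train()
    for inputs, targets in loader:        # inputs/targets: (B, 10, 1, H, W)
        model.reset_states()              # assumed helper: zero the recurrent states
        preds, frame = [], None
        for t in range(in_len + out_len - 1):
            x = inputs[:, t] if t < in_len else frame   # feed predictions back once t >= 10
            frame = model(x)                            # one-step extrapolation
            if t >= in_len - 1:                         # collect X'_11 ... X'_20
                preds.append(frame)
        loss = F.mse_loss(torch.stack(preds, dim=1), targets)  # MSE against reference
        optimizer.zero_grad()
        loss.backward()                   # back-propagate and update parameters
        optimizer.step()
```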
Taking FIG. 2 as an example, the training process of S2) to S4) may be:
A1) Feed sample X_1 into the spatio-temporal dual-branch encoder and extract its depth features T_1 and S_1.
A2) Feed the time state (initialized from T_1), the spatial state (initialized from S_1), and the sets of spatial and time states of the previous τ moments into the layer-1 gated recurrent unit; after forward propagation it outputs the time state T_1^1 and spatial state S_1^1.
A3) As in step A2), feed T_1^1, S_1^1 and the corresponding state sets of the previous τ moments into the layer-2 gated recurrent unit, obtaining the layer outputs T_1^2 and S_1^2 after forward propagation.
A4) As in step A3), feed T_1^2, S_1^2 and the corresponding state sets into the layer-3 gated recurrent unit, obtaining T_1^3 and S_1^3.
A5) As in step A3), feed T_1^3, S_1^3 and the corresponding state sets into the layer-4 gated recurrent unit, obtaining T_1^4 and S_1^4.
A6) Feed the final time state T_1^4 and spatial state S_1^4 into the spatio-temporal dual-branch decoder to obtain the predicted image X′_2 of the next moment, completing the radar echo extrapolation from input X_1 to X′_2.
A7) Feed sample X_2 into the spatio-temporal dual-branch encoder and extract its depth features T_2 and S_2.
A8) Feed the time state (from T_2), the spatial state (from S_2) and the state sets of the previous τ moments into the layer-1 gated recurrent unit, obtaining T_2^1 and S_2^1 after forward propagation.
A9) Feed T_2^1, S_2^1 and the corresponding state sets into the layer-2 gated recurrent unit, obtaining T_2^2 and S_2^2.
A10) Feed T_2^2, S_2^2 and the corresponding state sets into the layer-3 gated recurrent unit, obtaining T_2^3 and S_2^3.
A11) Feed T_2^3, S_2^3 and the corresponding state sets into the layer-4 gated recurrent unit, obtaining T_2^4 and S_2^4.
A12) Feed T_2^4 and S_2^4 into the spatio-temporal dual-branch decoder to obtain the predicted image X′_3 of the next moment, completing the radar echo extrapolation from input X_2 to X′_3.
A13) For t = 11, 12, …, 19, use the prediction X′_t output at the previous moment as the model input and execute A7) to A12) repeatedly until t = 19, obtaining in turn the predicted image sequence {X′_11, …, X′_20} and completing the radar echo sequence extrapolation.
When the trained prediction model is used for radar echo extrapolation prediction, a radar echo image sequence is first acquired and may be preprocessed (e.g., interpolation and normalization); the sequence to be predicted is then input into the prediction model, which carries out the same steps S2) to S4) as in model training to produce the radar echo extrapolation prediction image.
Example 3
This embodiment provides a radar echo extrapolation prediction system, comprising:
a data acquisition module for acquiring a radar echo image sequence to be predicted;
a preprocessing module for preprocessing the radar echo image sequence to obtain a data set to be predicted; and
a prediction module for inputting the data set to be predicted into a pre-trained prediction model combining a dual-branch encoder-decoder with a gated recurrent network to obtain a radar echo extrapolation prediction image;
wherein the prediction model is trained by:
preprocessing acquired radar echo image sequence samples to obtain an effective sample data set; and
inputting the effective sample data set into the pre-constructed prediction model to obtain the trained prediction model.
Example 4
This embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the radar echo extrapolation prediction method described in Embodiment 2.
The embodiments of the invention have been described above with reference to the accompanying drawings, but the invention is not limited to them; the embodiments are illustrative rather than restrictive, and those of ordinary skill in the art may derive many further forms under the teaching of the invention without departing from its spirit or the scope of the claims, all of which fall within the protection of the invention.

Claims (9)

1. A radar echo extrapolation prediction method, comprising:
acquiring a radar echo image sequence to be predicted;
preprocessing the radar echo image sequence to obtain a data set to be predicted;
inputting the data set to be predicted into a pre-trained prediction model combining a dual-branch encoder-decoder with a gated recurrent network to obtain a radar echo extrapolation prediction image;
wherein the prediction model is trained by:
preprocessing acquired radar echo image sequence samples to obtain an effective sample data set; and
inputting the effective sample data set into the pre-constructed prediction model to obtain a trained prediction model;
the pre-constructed prediction model comprises a spatio-temporal dual-branch codec structure and a prediction network, the codec structure comprising a spatio-temporal dual-branch encoder and a spatio-temporal dual-branch decoder, and the prediction network comprising several layers of sequentially connected prediction units, each being a gated recurrent neural unit based on an attention mechanism;
inputting the data set to be predicted into the pre-trained prediction model to obtain a radar echo extrapolation prediction image comprises:
sequentially reading image sequences with a batch size of 4 from the data set to be predicted as the input X_t of the prediction model at the current time t;
extracting features from the input X_t with the spatio-temporal dual-branch encoder to obtain the encoder outputs T_t and S_t, calculated as:
T_t = E_T(X_t), S_t = E_S(X_t),
where T_t denotes temporally encoded information, S_t spatially encoded information, and E_T and E_S the temporal and spatial encoders, respectively, which extract deep features from the input;
sending the encoder outputs T_t and S_t into the prediction network to obtain its outputs T̂_t and Ŝ_t, where T̂_t denotes temporal prediction information and Ŝ_t spatial prediction information;
sending the prediction-network outputs T̂_t and Ŝ_t into the spatio-temporal dual-branch decoder to obtain the decoder outputs T_t^dec and S_t^dec, where T_t^dec denotes temporally decoded information and S_t^dec spatially decoded information;
fusing the decoder outputs T_t^dec and S_t^dec to obtain the prediction result X′_{t+1} of the image sequence to be predicted;
the spatio-temporal dual-branch codec structure further comprises a multi-scale channel attention module, which extracts detail features from each layer of encoded information in the encoder and fuses them with the corresponding layer of decoded information in the decoder;
sending the prediction-network outputs T̂_t and Ŝ_t into the spatio-temporal dual-branch decoder to obtain the decoder outputs comprises:
extracting the global attention G and local attention L of T_t and S_t with the multi-scale channel attention module:
L(X) = PWConv_2(δ(PWConv_1(X))),
G(X) = PWConv_2(δ(PWConv_1(GAP(X)))),
where L(·) denotes local attention, G(·) global attention, PWConv a 1×1 point-wise convolution with PWConv_1 the first and PWConv_2 the second point-wise convolution layer, δ the ReLU activation function, and GAP the global average pooling operation;
obtaining the module outputs MSCAM(T_t) and MSCAM(S_t) from the global attention G and local attention L of T_t and S_t:
MSCAM(X) = X ⊙ σ(L(X) ⊕ G(X)),
where ⊕ denotes broadcast addition and σ the Sigmoid activation function;
obtaining the decoder outputs from MSCAM(T_t), MSCAM(S_t) and the prediction-network outputs T̂_t and Ŝ_t:
T_t^dec = D_T(T̂_t, MSCAM(T_t)), S_t^dec = D_S(Ŝ_t, MSCAM(S_t)),
where D_T and D_S denote the temporal and spatial decoders, respectively, which map prediction features to frames.
2. The radar echo extrapolation prediction method according to claim 1, wherein sending the encoder outputs T_t and S_t into the prediction network to obtain its outputs T̂_t and Ŝ_t comprises:
based on the attention mechanism of each layer of prediction units in the prediction network, obtaining the enhanced time state T_att from S_t^{l−1}, T_{t−1}^l, the set {T_{t−j}^l, j = 1…τ} and the set {S_{t−j}^{l−1}, j = 1…τ}, where S_t^{l−1} is the spatial state output by the (l−1)-th layer at the current time t, T_{t−1}^l is the time state output by the l-th layer at the previous time t−1, {T_{t−j}^l} is the set of time states output by the l-th layer over the previous τ moments, {S_{t−j}^{l−1}} is the set of spatial states output by the (l−1)-th layer over the previous τ moments, and T_att is an enhanced time state with multiple time-step information;
obtaining the time state T_t^l and spatial state S_t^l output by the prediction unit at the current time from S_t^{l−1} and T_att;
obtaining the time state and spatial state output by the last-layer prediction unit from the per-layer states.
3. The radar echo extrapolation prediction method according to claim 2, wherein obtaining T_att from S_t^{l−1}, T_{t−1}^l, {T_{t−j}^l} and {S_{t−j}^{l−1}} based on the attention mechanism of each layer of prediction units comprises:
computing the dot product of S_t^{l−1} with each element of {S_{t−j}^{l−1}} and applying the softmax function to obtain the attention scores α_j:
α_j = softmax_j(S_t^{l−1} · S_{t−j}^{l−1}), j = 1, …, τ;
multiplying the attention scores with the corresponding elements of {T_{t−j}^l} and summing to obtain the long-term motion information T_long:
T_long = Σ_{j=1}^{τ} α_j · T_{t−j}^l,
where T_{t−j}^l is the time state output by the l-th layer prediction unit j moments earlier;
constructing the fusion gate U_f from T_{t−1}^l:
U_f = σ(W_uf ∗ T_{t−1}^l),
where ∗ denotes two-dimensional convolution, W_uf is the convolution kernel acting on T_{t−1}^l, and σ is the Sigmoid activation function;
obtaining the enhanced time state T_att with multiple time-step information from T_{t−1}^l, the fusion gate U_f and the long-term motion information T_long:
T_att = U_f ⊙ T_long + (1 − U_f) ⊙ T_{t−1}^l,
where ⊙ denotes the Hadamard product of matrices.
4. The radar echo extrapolation prediction method according to claim 3, wherein obtaining the time state T_t^l and spatial state S_t^l output by the prediction unit at the current time from S_t^{l−1} and T_att comprises:
obtaining the update gate U_t and reset gate R_t from S_t^{l−1} and T_att:
U_t = σ(W_su ∗ S_t^{l−1} + W_tu ∗ T_att),
R_t = σ(W_sr ∗ S_t^{l−1} + W_tr ∗ T_att),
where W_su and W_sr are the update-gate and reset-gate convolution kernels acting on S_t^{l−1}, and W_tu and W_tr are the update-gate and reset-gate convolution kernels acting on T_att;
obtaining the candidate temporal trend information T′_t and candidate spatial trend information S′_t from S_t^{l−1}, T_att, the update gate U_t and the reset gate R_t:
T′_t = tanh(W_st′ ∗ S_t^{l−1} + W_tt′ ∗ (R_t ⊙ T_att)),
S′_t = tanh(W_ss′ ∗ S_t^{l−1} + W_ts′ ∗ (R_t ⊙ T_att)),
where tanh denotes the hyperbolic tangent activation function, W_tt′ and W_ts′ are the convolution kernels of T_att used to generate T′_t and S′_t, and W_st′ and W_ss′ are the convolution kernels of S_t^{l−1} used to generate T′_t and S′_t;
obtaining the current time state T_t^l and spatial state S_t^l of the prediction network:
T_t^l = U_t ⊙ T_att + (1 − U_t) ⊙ T′_t,
S_t^l = U_t ⊙ S_t^{l−1} + (1 − U_t) ⊙ S′_t.
5. The radar echo extrapolation prediction method according to claim 1, wherein fusing the decoder outputs T_t^dec and S_t^dec to obtain the prediction result X′_{t+1} comprises: the spatio-temporal dual-branch codec structure concatenates T_t^dec and S_t^dec along the channel dimension and applies a 1×1 convolution to output the prediction result:
X′_{t+1} = W_{1×1} ∗ Concat(T_t^dec, S_t^dec),
where W_{1×1} denotes the 1×1 convolution kernel used to restore the channel count to its initial value and Concat denotes channel concatenation.
6. The radar echo extrapolation prediction method according to claim 1, wherein preprocessing the radar echo image sequence comprises: interpolating and normalizing the radar echo image sequence and removing invalid radar data to obtain an effective radar data set.
7. The radar echo extrapolation prediction method according to claim 6, wherein inputting the effective sample data set into the pre-constructed prediction model to obtain a trained prediction model comprises:
initializing the parameters of the pre-constructed prediction model to obtain an initialized prediction model;
partitioning the effective sample data set into a training sample set and a test sample set;
feeding the training sample set into the initialized prediction model to update its parameters, obtaining a prediction model after each round of learning, and feeding the test sample set into the learned model at preset rounds to measure its prediction performance at those rounds;
evaluating and optimizing the prediction model at the preset rounds based on the convergence condition and the prediction performance, yielding the trained prediction model combining the dual-branch encoder-decoder with the gated recurrent network.
8. A radar echo extrapolation prediction system, comprising:
a data acquisition module for acquiring a radar echo image sequence to be predicted;
a preprocessing module for preprocessing the radar echo image sequence to obtain a data set to be predicted; and
a prediction module for inputting the data set to be predicted into a pre-trained prediction model combining a dual-branch encoder-decoder with a gated recurrent network to obtain a radar echo extrapolation prediction image;
wherein the prediction model is trained by:
preprocessing acquired radar echo image sequence samples to obtain an effective sample data set; and
inputting the effective sample data set into the pre-constructed prediction model to obtain a trained prediction model;
the pre-constructed prediction model comprises a spatio-temporal dual-branch codec structure and a prediction network, the codec structure comprising a spatio-temporal dual-branch encoder and a spatio-temporal dual-branch decoder, and the prediction network comprising several layers of sequentially connected prediction units, each being a gated recurrent neural unit based on an attention mechanism;
inputting the data set to be predicted into the pre-trained prediction model to obtain a radar echo extrapolation prediction image comprises:
sequentially reading image sequences with a batch size of 4 from the data set to be predicted as the input X_t of the prediction model at the current time t;
extracting features from the input X_t with the spatio-temporal dual-branch encoder to obtain the encoder outputs T_t and S_t, calculated as:
T_t = E_T(X_t), S_t = E_S(X_t),
where T_t denotes temporally encoded information, S_t spatially encoded information, and E_T and E_S the temporal and spatial encoders, respectively, which extract deep features from the input;
sending the encoder outputs T_t and S_t into the prediction network to obtain its outputs T̂_t and Ŝ_t, where T̂_t denotes temporal prediction information and Ŝ_t spatial prediction information;
sending the prediction-network outputs T̂_t and Ŝ_t into the spatio-temporal dual-branch decoder to obtain the decoder outputs T_t^dec and S_t^dec, where T_t^dec denotes temporally decoded information and S_t^dec spatially decoded information;
fusing the decoder outputs T_t^dec and S_t^dec to obtain the prediction result X′_{t+1} of the image sequence to be predicted;
the spatio-temporal dual-branch codec structure further comprises a multi-scale channel attention module, which extracts detail features from each layer of encoded information in the encoder and fuses them with the corresponding layer of decoded information in the decoder;
sending the prediction-network outputs T̂_t and Ŝ_t into the spatio-temporal dual-branch decoder to obtain the decoder outputs comprises:
extracting the global attention G and local attention L of T_t and S_t with the multi-scale channel attention module:
L(X) = PWConv_2(δ(PWConv_1(X))),
G(X) = PWConv_2(δ(PWConv_1(GAP(X)))),
where L(·) denotes local attention, G(·) global attention, PWConv a 1×1 point-wise convolution with PWConv_1 the first and PWConv_2 the second point-wise convolution layer, δ the ReLU activation function, and GAP the global average pooling operation;
obtaining the module outputs MSCAM(T_t) and MSCAM(S_t) from the global attention G and local attention L of T_t and S_t:
MSCAM(X) = X ⊙ σ(L(X) ⊕ G(X)),
where ⊕ denotes broadcast addition and σ the Sigmoid activation function;
obtaining the decoder outputs from MSCAM(T_t), MSCAM(S_t) and the prediction-network outputs T̂_t and Ŝ_t:
T_t^dec = D_T(T̂_t, MSCAM(T_t)), S_t^dec = D_S(Ŝ_t, MSCAM(S_t)),
where D_T and D_S denote the temporal and spatial decoders, respectively, which map prediction features to frames.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the radar echo extrapolation prediction method of any one of claims 1-7.
CN202410131969.6A 2024-01-31 2024-01-31 Radar echo extrapolation prediction method, system and storage medium Active CN117665825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410131969.6A CN117665825B (en) 2024-01-31 2024-01-31 Radar echo extrapolation prediction method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410131969.6A CN117665825B (en) 2024-01-31 2024-01-31 Radar echo extrapolation prediction method, system and storage medium

Publications (2)

Publication Number Publication Date
CN117665825A (en) 2024-03-08
CN117665825B (en) 2024-05-14

Family

ID=90082881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410131969.6A Active CN117665825B (en) 2024-01-31 2024-01-31 Radar echo extrapolation prediction method, system and storage medium

Country Status (1)

Country Link
CN (1) CN117665825B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117808650B (en) * 2024-02-29 2024-05-14 南京信息工程大学 Precipitation prediction method based on Transform-Flownet and R-FPN

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115390164A (en) * 2022-10-27 2022-11-25 南京信息工程大学 Radar echo extrapolation forecasting method and system
CN115933010A (en) * 2022-12-28 2023-04-07 南京信息工程大学 Radar echo extrapolation near weather prediction method
CN116842472A (en) * 2023-03-01 2023-10-03 电子科技大学 Land evapotranspiration remote sensing estimation method based on depth space-time coding and decoding network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115390164A (en) * 2022-10-27 2022-11-25 南京信息工程大学 Radar echo extrapolation forecasting method and system
CN115933010A (en) * 2022-12-28 2023-04-07 南京信息工程大学 Radar echo extrapolation near weather prediction method
CN116842472A (en) * 2023-03-01 2023-10-03 电子科技大学 Land evapotranspiration remote sensing estimation method based on depth space-time coding and decoding network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Radar Echo Extrapolation Model Based on a Dual-Branch Encoder–Decoder and Spatiotemporal GRU; Yong Cheng et al.; Atmosphere; 2024-01-14; Vol. 15 (No. 11); pp. 1-15 *

Also Published As

Publication number Publication date
CN117665825A (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN112446419B (en) Attention mechanism-based space-time neural network radar echo extrapolation prediction method
CN111612066B (en) Remote sensing image classification method based on depth fusion convolutional neural network
CN109001736B (en) Radar echo extrapolation method based on deep space-time prediction neural network
CN112418409B (en) Improved convolution long-short-term memory network space-time sequence prediction method by using attention mechanism
CN117665825B (en) Radar echo extrapolation prediction method, system and storage medium
CN115240425B (en) Traffic prediction method based on multi-scale space-time fusion graph network
CN112071065A (en) Traffic flow prediction method based on global diffusion convolution residual error network
CN111612243A (en) Traffic speed prediction method, system and storage medium
Sun et al. Prediction of Short‐Time Rainfall Based on Deep Learning
CN112415521A (en) CGRU (China-swarm optimization and RU-based radar echo nowcasting) method with strong space-time characteristics
CN113780149A (en) Method for efficiently extracting building target of remote sensing image based on attention mechanism
CN112183886B (en) Short-time adjacent rainfall prediction method based on convolution network and attention mechanism
CN110570035B (en) People flow prediction system for simultaneously modeling space-time dependency and daily flow dependency
CN109829495A (en) Timing image prediction method based on LSTM and DCGAN
CN115390164B (en) Radar echo extrapolation forecasting method and system
CN111047078B (en) Traffic characteristic prediction method, system and storage medium
Xiong et al. Contextual sa-attention convolutional LSTM for precipitation nowcasting: A spatiotemporal sequence forecasting view
CN115933010A (en) Radar echo extrapolation near weather prediction method
CN115902806A (en) Multi-mode-based radar echo extrapolation method
CN116844041A (en) Cultivated land extraction method based on bidirectional convolution time self-attention mechanism
CN114550014A (en) Road segmentation method and computer device
CN116148864A (en) Radar echo extrapolation method based on DyConvGRU and Unet prediction refinement structure
Yao et al. A Forecast-Refinement Neural Network Based on DyConvGRU and U-Net for Radar Echo Extrapolation
CN113341419A (en) Weather extrapolation method and system based on VAN-ConvLSTM
CN117634930B (en) Typhoon cloud picture prediction method, typhoon cloud picture prediction system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant