CN117368881B - Multi-source data fusion long-sequence radar image prediction method and system


Info

Publication number
CN117368881B
Authority
CN
China
Prior art keywords
data
radar
memory
long
context
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311677654.3A
Other languages
Chinese (zh)
Other versions
CN117368881A (en)
Inventor
叶允明
魏衍乐
李旭涛
陈训来
王蕊
王书欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Meteorological Bureau (Shenzhen Meteorological Station)
Harbin Institute of Technology (Shenzhen); Shenzhen Institute of Science and Technology Innovation, Harbin Institute of Technology
Original Assignee
Shenzhen Meteorological Bureau (Shenzhen Meteorological Station)
Harbin Institute of Technology (Shenzhen); Shenzhen Institute of Science and Technology Innovation, Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Meteorological Bureau (Shenzhen Meteorological Station) and Harbin Institute of Technology (Shenzhen) (Shenzhen Institute of Science and Technology Innovation, Harbin Institute of Technology)
Priority to CN202311677654.3A priority Critical patent/CN117368881B/en
Publication of CN117368881A publication Critical patent/CN117368881A/en
Application granted granted Critical
Publication of CN117368881B publication Critical patent/CN117368881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/95 Radar or analogous systems specially adapted for specific applications for meteorological use
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01W METEOROLOGY
    • G01W1/00 Meteorology
    • G01W1/10 Devices for predicting weather conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Environmental & Geological Engineering (AREA)
  • Medical Informatics (AREA)
  • Electromagnetism (AREA)
  • Atmospheric Sciences (AREA)
  • Environmental Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a long-sequence radar image prediction method and system fusing multi-source data, belonging to the technical field of weather prediction. A radar echo prediction training model is first modified so that it pays attention to long-term motion trends, and three encoders are used to encode the radar image data. On the basis of the radar image data, satellite cloud image data is introduced, encoded using a convolution gating memory unit ConvGRU, and fused with the radar data through a gating structure. Terrain elevation data is then introduced; a residual convolutional neural network ResNet extracts feature information from the terrain data, which is fused with the radar image data in the channel dimension. The features are integrated through a feature fusion module and finally fed into a time-window decoder to obtain future radar echo images. The method achieves good accuracy for long-horizon precipitation prediction.

Description

Multi-source data fusion long-sequence radar image prediction method and system
Technical Field
The invention belongs to the technical field of weather forecast, and particularly relates to a long-sequence radar image prediction method and system for fusing multi-source data.
Background
Rainfall forecasting has long been one of the most actively studied areas of weather prediction. Floods and geological disasters caused by severe convective weather pose great danger to people's lives and property, so nowcasting and early warning are of great significance. Rainfall is a complex nonlinear process influenced by many factors, and weather conditions are highly variable, which makes precipitation very difficult to forecast; the accuracy of rainfall forecasts thus remains a focus of public attention. At present, radar echo extrapolation is a widely applied short-term weather forecasting method. Using principles of atmospheric dynamics, it supports accurate extrapolation of radar echo images with little computation and enables real-time precipitation prediction. Obtaining high-accuracy short-term weather forecasts is therefore a key requirement.
Conventionally, echo motion between adjacent moments is estimated from the radar images captured at those moments using optical flow or cross-correlation methods, and this motion is then used to infer the radar echo at the next moment; radar reflectivity data can in turn be used to infer the rainfall at that time. However, this approach relies on an assumption of constant motion, whereas atmospheric changes tend to be complex and variable, so the assumption is often unrealistic in practice. Although these conventional methods can predict the future position of a precipitation system, they cannot accurately predict its evolution. As a result, the failure rate when tracking strong precipitation echoes increases significantly, and prediction accuracy decays rapidly over time.
With the rapid development of deep learning, various deep learning methods have been proposed and applied in the meteorological field. Radar nowcasting is essentially a time-series prediction problem: a neural network must predict the future positions of reflectivity factors from the distribution patterns observed in radar reflectivity data at different points in time. A key aspect of this process is enabling the network to retain part of the memory of previous time steps. With continued progress in image time-series prediction, current models have greatly improved their ability to capture spatio-temporal correlations and can effectively capture changes between adjacent frames, which is of great value for short-term rainfall prediction. Nevertheless, existing deep learning methods still face challenges when predicting radar echo data.
Although existing deep learning techniques outperform conventional meteorological methods in predicting rainfall distribution, the following problems remain. On the one hand, the predicted images tend to be more blurred than those generated by physical methods, differing significantly in detail from real radar images. On the other hand, when the echo data fluctuates severely, these methods struggle to make accurate predictions; for heavy rainfall regions, high-quality echo data cannot be generated consistently as the prediction horizon is extended.
Disclosure of Invention
Because a radar echo image is a single-channel grayscale image, it contains little information. Beyond historical radar images, the scan data of remote-sensing meteorological satellites is strongly correlated with the formation and dissipation of clouds composed of water molecules, and the topographic features of the forecast region considerably influence water vapor movement; such data can help predict the evolution of radar images. The invention therefore provides a long-sequence radar image prediction method and system fusing multi-source data, which extrapolate radar images using multi-modal meteorological data, improve the accuracy of the predicted images, extend the prediction horizon, and address the tasks and challenges faced by existing short-term rainfall prediction.
The invention is realized by the following technical scheme:
a long-sequence radar image prediction method integrating multi-source data specifically comprises the following steps:
step 1, modifying a radar echo prediction training model to pay attention to long-term motion trend; and three encoders are used for encoding radar image data;
step 2: on the basis of radar image data, satellite cloud image data is introduced to be encoded by using a convolution gating memory unit ConvGRU, and the satellite cloud image data and the radar data are fused through a gating structure;
step 3: further introducing terrain elevation data, extracting feature information of the terrain data by using ResNet, then fusing the features with radar image data in the channel dimension, and combining the space information with the terrain-related data;
step 4: and integrating the extracted features through a feature fusion module. The process fuses the related information of the radar image and the topographic data with each other to form a richer representation. Finally, the fused features are passed to a time window decoder to generate future radar echo images.
Further, step 1 specifically includes:
The Memory alignment module Memory is trained on all available training data, capturing and recording rich motion context information. These memory vectors can be viewed as the model's efficient distillation of global historical information, helping the system better understand long-term dependencies in the input sequence. This mechanism enables the model to integrate historical context effectively and improves its predictive capability;
In the training phase, all frames are used to compute a differential sequence, the motion context of the long-term sequence is extracted by another long-term motion context encoder, and this context is stored during training. A second round of training is then performed on the short sequences formed from the input historical data; at this stage gradients are no longer back-propagated into the Memory module, which is only queried for prediction. The Memory alignment module Memory can thus extract the motion context for use in subsequent sequence prediction.
Further, in step 1,
the input of the radar echo prediction training model is a series of radar echo data frames, which are fed to a space encoder, a dynamic matching encoder and a dynamic environment encoder respectively.
Step 1.1: process the radar's historical data, select the previous 5 frames, and input them into the space encoder in sequence; the convolution long-short-time memory module ConvLSTM receives the extracted spatial features and obtains, according to time steps, the output state H_t and the cell state C_t;
Step 1.2: simultaneously compute the differential sequence of the radar echo images and input it into the dynamic matching encoder, obtaining through the Memory alignment module a memory vector F_m covering the global motion context;
Step 1.3: because the cell state C_t covers the historical information of the input sequence from its start, the dynamic environment encoder uses a channel attention mechanism together with the cell state C_t output by ConvLSTM to refine the global motion context memory vector F_m, so as to embed the required motion context at the current step; the refined memory F̃_m is concatenated with the output state H_t to construct a feature vector embedded with long-term context, which is accepted by the decoder for output. Each output frame is then used as a new input to obtain still longer outputs;
F̃_m = Attention(Q, K, V) = softmax(QK^T/√d_k)·V, with Q = C_t, K = F_m, V = F_m,
where Q is the query vector, K is the key vector, d_k is the dimension of K, and V is the value vector.
Further, in step 2,
Step 2.1: process the satellite cloud image data. An independent convolutional recurrent neural network ConvRNN encodes the satellite cloud image data to obtain its hidden state; a lighter-to-train convolution gating memory unit ConvGRU performs the encoding, yielding at step t the satellite hidden state h_s(t), which is then fused with the radar hidden state h_r(t) to obtain the fused hidden state h_f(t);
Step 2.2: fuse the radar and satellite cloud image data using a gating structure, with the specific calculation:
i_t = σ(W_ri * h_r(t) + W_si * h_s(t) + b_i)
g_t = σ(W_rg * h_r(t) + W_sg * h_s(t) + b_g)
h̃_t = tanh(W_rx * h_r(t) + g_t ⊙ (W_sx * h_s(t)) + b_x)
h_f(t) = (1 − i_t) ⊙ h_r(t) + i_t ⊙ h̃_t
where i_t is the update matrix, W_ri the radar update weight matrix, h_r(t) the radar hidden state, W_si the satellite update weight matrix, h_s(t) the satellite hidden state, b_i the update offset matrix, g_t the reset matrix, W_rg the radar reset weight matrix, W_sg the satellite reset weight matrix, b_g the reset offset matrix, h̃_t the candidate hidden state, W_rx the radar hidden-state weight matrix, W_sx the satellite hidden-state weight matrix, and b_x the candidate hidden-state offset matrix; * denotes convolution, ⊙ element-wise multiplication, and σ the sigmoid function.
Further, in step 3,
Step 3.1: because the terrain features remain unchanged over time, a convolutional neural network with residual modules is used to extract feature information from the terrain data. After ResNet processing, the terrain data is converted into three feature maps of sizes 128×16×16, 64×64×32 and 32×32×64 respectively;
Step 3.2: fuse the radar information with the terrain information d(t) in the channel dimension; the two come respectively from the motion-state memory extraction and the ResNet extraction of the previous steps. A 1×1 convolution layer restores the channels to keep the data consistent;
Step 3.3: fuse using a spatial attention mechanism. The terrain fusion adopts a dual attention network: first, the input is dimension-constrained and split into three parts A, B and C, obtained by 1×1 convolutional neural networks and bilinear pooling. Next, A and B are multiplied point-wise to obtain a feature map G, which aggregates the global attention features. To improve the fusion effect while taking both global and local feature attention into account, a second-order attention feature is further computed by point-wise multiplying the feature map G, after feature dispersion and dimension expansion, with the vector C.
A long-sequence radar image prediction system integrating multi-source data comprises:
the system comprises a radar data encoding module, a satellite data fusion module, a terrain data fusion module and a time window prediction module;
the radar data encoding module likewise modifies the radar echo prediction training model so that it pays attention to long-term motion trends, and uses three encoders to encode the radar image data;
the satellite data fusion module takes radar echo images as original basic data, introduces satellite cloud image data, encodes the satellite cloud image data by using a convolution gating memory unit ConvGRU, and fuses the satellite cloud image data and the radar data through a gating structure;
the terrain data fusion module introduces terrain elevation data, extracts feature vectors from the terrain data by using ResNet, fuses the features with radar image data in the channel dimension, and combines spatial information with terrain related data;
the time window prediction module fuses the related information of the radar image and the topographic data with each other to form a richer representation; the fused features are passed to a time window decoder to predict future radar echo images.
An electronic device comprising a memory storing a computer program and a processor implementing the steps of the above method when the processor executes the computer program.
A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the above method.
The invention has the following beneficial effects:
The invention uses a time-series prediction module with motion-state memory, adding context information to the model and enhancing its ability to predict long time series;
The invention uses satellite data to address the limited information in radar echo images; a dedicated gated temporal fusion module provides larger-scale meteorological guidance factors and improves the accuracy of model prediction;
The invention adopts static terrain elevation data and a dual-attention mechanism, enabling the model to perceive the influence of particular terrain on precipitation factors and further improving its representation capability.
Drawings
FIG. 1 is a diagram showing the overall architecture of a long-sequence radar image prediction model fused with multi-source data according to the present invention;
FIG. 2 is a schematic diagram of a timing prediction module for motion state memory according to the present invention;
FIG. 3 is a schematic diagram of satellite feature fusion using a fusion update gate;
FIG. 4 is a schematic diagram of a terrain elevation fusion based on a dual-attention mechanism.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
With reference to FIG. 1 to FIG. 4:
The long-sequence radar image prediction method fusing multi-source data is shown in FIG. 1.
To reduce failures and improve the accuracy of long-term precipitation prediction, it is important to enrich the information obtained from radar echoes. To address the limited information carried by radar maps, the radar echo prediction training model must first be modified so that it pays attention to long-term motion trends. Meanwhile, to supplement key information for modeling atmospheric motion, heterogeneous data including satellite cloud images and terrain data are introduced alongside the primary radar image data, greatly enhancing the model's ability to understand atmospheric motion patterns. The invention therefore designs a long-sequence radar image prediction model fusing multi-source data: three encoders, each tailored to a differently structured information source, effectively encode the three heterogeneous data types; the features are integrated through a feature fusion module; and the result is fed to a time-window decoder to generate forecast radar echo data. This comprehensive approach yields a more robust and reliable prediction model.
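For orientation, the data flow just described can be summarized in the following minimal PyTorch-style sketch. It is a structural illustration under assumed interfaces, not the patent's implementation; every module name, tensor layout, and the (B, T, C, H, W) frame format are hypothetical.

```python
import torch
import torch.nn as nn

class LongSeqRadarPredictor(nn.Module):
    """Structural sketch of the fused multi-source pipeline (names are illustrative)."""
    def __init__(self, spatial_enc, match_enc, env_enc, sat_enc, terrain_enc, fusion, decoder):
        super().__init__()
        self.spatial_enc = spatial_enc    # ConvLSTM spatial encoder for radar frames
        self.match_enc = match_enc        # dynamic matching encoder + memory bank
        self.env_enc = env_enc            # dynamic environment encoder (channel attention)
        self.sat_enc = sat_enc            # ConvGRU encoder for satellite imagery
        self.terrain_enc = terrain_enc    # residual CNN for static elevation data
        self.fusion = fusion              # gated / dual-attention feature fusion
        self.decoder = decoder            # time-window decoder producing future frames

    def forward(self, radar, satellite, terrain):
        h_t, c_t = self.spatial_enc(radar)        # output state H_t, cell state C_t
        diff = radar[:, 1:] - radar[:, :-1]       # differential sequence (time on dim 1)
        f_m = self.match_enc(diff)                # global motion context memory F_m
        f_m_ref = self.env_enc(f_m, c_t)          # refined memory, queried by C_t
        h_sat = self.sat_enc(satellite)           # satellite hidden state
        d = self.terrain_enc(terrain)             # terrain feature maps
        fused = self.fusion(torch.cat([f_m_ref, h_t], dim=1), h_sat, d)
        return self.decoder(fused)                # predicted radar echo frames
```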
A long-sequence radar image prediction method fusing multi-source data specifically comprises the following steps:
Step 1: modify the radar echo prediction training model so that it pays attention to long-term motion trends; encode the different meteorological data with different, targeted encoders;
Step 2: on the basis of the radar image data, introduce satellite cloud image data, encode it using a convolution gating memory unit ConvGRU, and fuse it with the radar data through a gating structure;
Step 3: introduce terrain elevation data, extract feature vectors from the terrain data using ResNet, and combine the spatial information with the terrain-related data in the channel dimension;
Step 4: fuse the features using an attention mechanism, and finally decode with a deconvolutional recurrent neural network to obtain the predicted radar echo images.
In step 1, the method specifically includes:
The Memory alignment module Memory is trained on all available training data, capturing and recording rich motion context information. These memory vectors can be viewed as the model's efficient distillation of global historical information, helping the system better understand long-term dependencies in the input sequence. This mechanism enables the model to integrate historical context effectively and improves its predictive capability.
In the training phase, all frames are used to compute a differential sequence, the motion context of the long-term sequence is extracted by another long-term motion context encoder, and this context is stored during training. Then, as described above, a second round of training is performed on the short sequences formed from the input historical data, during which gradients are no longer back-propagated into the Memory module; the module is only queried for prediction. The Memory alignment module Memory can thus extract the motion context for use in subsequent sequence prediction, as sketched below.
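A minimal sketch of the second training phase, assuming the memory bank is an nn.Module whose parameters were learned in the first phase; freezing it means it is only queried, never updated by back-propagation (module paths here are hypothetical):

```python
import torch.nn as nn

def freeze_memory(memory_module: nn.Module) -> None:
    """Stop gradient back-propagation into the Memory alignment module."""
    for p in memory_module.parameters():
        p.requires_grad_(False)

# Phase 1 (assumed): train end-to-end on differential sequences of all frames,
# writing long-term motion contexts into the memory bank.
# Phase 2: train on short input histories with the memory frozen, e.g.:
# freeze_memory(model.match_enc.memory)
```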
In step 1,
the input of the radar echo prediction training model is a series of radar echo data frames, which are fed to a space encoder, a dynamic matching encoder and a dynamic environment encoder respectively, as shown in FIG. 2.
Step 1.1: process the radar's historical data, select the previous 5 frames, and input them into the space encoder in sequence; the convolution long-short-time memory module ConvLSTM receives the extracted spatial features and obtains, according to time steps, the output state H_t and the cell state C_t;
Step 1.2: simultaneously compute the differential sequence of the radar echo images and input it into the dynamic matching encoder, obtaining through the Memory alignment module a memory vector F_m covering the global motion context;
Step 1.3: because the cell state C_t covers the historical information of the input sequence from its start, the dynamic environment encoder uses a channel attention mechanism together with the cell state C_t obtained from ConvLSTM to refine the global motion context memory vector F_m, so as to embed the required motion context at the current step; the refined memory F̃_m is concatenated with the output state H_t to construct a feature vector embedded with long-term context, which is accepted by the decoder for output. Each output frame is then used as a new input to obtain still longer outputs;
F̃_m = Attention(Q, K, V) = softmax(QK^T/√d_k)·V, with Q = C_t, K = F_m, V = F_m,
where Q is the query vector, K is the key vector, d_k is the dimension of K, and V is the value vector.
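In code, the refinement is ordinary scaled dot-product attention with Q = C_t and K = V = F_m. A minimal sketch, assuming the spatial feature maps have already been flattened into token sequences of width d_k:

```python
import torch

def refine_memory(c_t: torch.Tensor, f_m: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention with Q = C_t and K = V = F_m.

    c_t: (B, N_q, d_k) cell-state queries; f_m: (B, N_m, d_k) memory vectors.
    The token layout is an assumption: feature maps are flattened beforehand.
    """
    d_k = f_m.size(-1)
    attn = torch.softmax(c_t @ f_m.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    return attn @ f_m  # refined memory, one vector per query
```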
In step 2,
recurrent neural networks currently struggle to predict the direction of future frames, because the hidden state is derived only from the relational information extracted within the current sequence. To overcome this, the radar echo encoder is enhanced with the Memory alignment module Memory, which creates a mapping between the current input sequence and historical motion contexts, effectively recalling past motion information. With this memory unit, motion information can be separated into global motion and local motion, improving overall prediction accuracy and enhancing performance on longer sequence predictions.
To fuse the satellite cloud image data, challenges in both the fusion dimension and the fusion method must be overcome. Mishandling multi-source data features can introduce noise into the radar echo task and lead to poor predictions; in addition, the spatio-temporal dimensions of satellite cloud images and radar echo images cannot be aligned directly.
Step 2.1: process the satellite cloud image data by encoding it with an independent convolutional recurrent neural network ConvRNN and obtaining its hidden state. The satellite data serve to provide supplementary information that helps the model predict the radar image sequence; the satellite data themselves need not be predicted, so particular care is taken to capture the large-scale weather dynamics depicted by the satellite data to further refine the prediction. A lighter-to-train convolution gating memory unit ConvGRU therefore performs the encoding, yielding at step t the satellite hidden state h_s(t), which is then fused with the radar hidden state h_r(t) to obtain the fused hidden state h_f(t).
Step 2.2: fuse the satellite cloud image data with the radar data. Satellite cloud images provide a higher temporal dimension and more complex weather conditions, which supplement the understanding of radar echo image motion. However, because satellite data is heterogeneous with respect to radar data and carries redundancy of its own, improperly fused information can corrupt the motion and intensity of the radar echo images. To extract the information relevant to the radar echo images more effectively, a gating structure is used to fuse the radar and satellite cloud image data, as shown in FIG. 3 and sketched in code below. The specific calculation is:
i_t = σ(W_ri * h_r(t) + W_si * h_s(t) + b_i)
g_t = σ(W_rg * h_r(t) + W_sg * h_s(t) + b_g)
h̃_t = tanh(W_rx * h_r(t) + g_t ⊙ (W_sx * h_s(t)) + b_x)
h_f(t) = (1 − i_t) ⊙ h_r(t) + i_t ⊙ h̃_t
where i_t is the update matrix, W_ri the radar update weight matrix, h_r(t) the radar hidden state, W_si the satellite update weight matrix, h_s(t) the satellite hidden state, b_i the update offset matrix, g_t the reset matrix, W_rg the radar reset weight matrix, W_sg the satellite reset weight matrix, b_g the reset offset matrix, h̃_t the candidate hidden state, W_rx the radar hidden-state weight matrix, W_sx the satellite hidden-state weight matrix, and b_x the candidate hidden-state offset matrix; * denotes convolution, ⊙ element-wise multiplication, and σ the sigmoid function.
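A hedged PyTorch sketch of this gated fusion, following the update/reset/candidate structure of the formulas above; the 3×3 convolutional parameterisation, zero-initialised offsets, and the final convex combination are assumptions consistent with the standard ConvGRU form:

```python
import torch
import torch.nn as nn

def _conv(ch: int) -> nn.Conv2d:
    return nn.Conv2d(ch, ch, kernel_size=3, padding=1, bias=False)

class GatedFusion(nn.Module):
    """GRU-style gated fusion of radar hidden state h_r and satellite hidden state h_s."""
    def __init__(self, ch: int):
        super().__init__()
        self.w_ri, self.w_si = _conv(ch), _conv(ch)     # update-gate weights
        self.w_rg, self.w_sg = _conv(ch), _conv(ch)     # reset-gate weights
        self.w_rx, self.w_sx = _conv(ch), _conv(ch)     # candidate weights
        self.b_i = nn.Parameter(torch.zeros(ch, 1, 1))  # update offset
        self.b_g = nn.Parameter(torch.zeros(ch, 1, 1))  # reset offset
        self.b_x = nn.Parameter(torch.zeros(ch, 1, 1))  # candidate offset

    def forward(self, h_r: torch.Tensor, h_s: torch.Tensor) -> torch.Tensor:
        i_t = torch.sigmoid(self.w_ri(h_r) + self.w_si(h_s) + self.b_i)  # update matrix
        g_t = torch.sigmoid(self.w_rg(h_r) + self.w_sg(h_s) + self.b_g)  # reset matrix
        h_cand = torch.tanh(self.w_rx(h_r) + g_t * self.w_sx(h_s) + self.b_x)
        return (1 - i_t) * h_r + i_t * h_cand  # fused hidden state h_f
```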
In step 3,
Step 3.1: unlike the time-varying radar and satellite data, the terrain data is static and does not change over time, so it is passed to a convolutional network with residual modules to obtain information at different scales. After ResNet processing, the terrain data is converted into three feature maps of sizes 128×16×16, 64×64×32 and 32×32×64 respectively;
Step 3.2: fuse the radar information with the terrain information d(t) in the channel dimension; the two come respectively from the motion-state memory extraction and the ResNet extraction of the previous steps. A point-wise (1×1) convolution layer restores the channels to keep the data consistent;
Step 3.3: the fusion process adopts a spatial attention mechanism, which captures the influence of the terrain information on the radar data more accurately and comprehensively, especially over complex terrain with large elevation differences.
The terrain fusion adopts a dual attention network, sketched below: first, the input is dimension-constrained and split into three parts A, B and C, obtained by 1×1 convolutional neural networks and bilinear pooling. Next, A and B are multiplied point-wise to obtain a feature map G, which aggregates the global attention features. To improve the fusion effect while taking both global and local feature attention into account, a second-order attention feature is further computed by point-wise multiplying the feature map G, after feature dispersion and dimension expansion, with the vector C.
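A combined sketch of steps 3.2 and 3.3: channel concatenation with a 1×1 convolution restoring the channel count, followed by a dual-attention style second-order fusion. The projections, the softmax normalisation, and the residual connection are assumptions rather than the patent's exact configuration:

```python
import torch
import torch.nn as nn

class TerrainFusion(nn.Module):
    """Channel fusion (step 3.2) followed by dual attention (step 3.3). Sketch only."""
    def __init__(self, radar_ch: int, terrain_ch: int):
        super().__init__()
        self.restore = nn.Conv2d(radar_ch + terrain_ch, radar_ch, 1)  # 1x1 conv restores channels
        self.proj_a = nn.Conv2d(radar_ch, radar_ch, 1)  # produces part A
        self.proj_b = nn.Conv2d(radar_ch, radar_ch, 1)  # produces part B
        self.proj_c = nn.Conv2d(radar_ch, radar_ch, 1)  # produces part C

    def forward(self, radar_feat: torch.Tensor, terrain_feat: torch.Tensor) -> torch.Tensor:
        x = self.restore(torch.cat([radar_feat, terrain_feat], dim=1))  # step 3.2
        b, c, h, w = x.shape
        a = self.proj_a(x).flatten(2)   # (B, C, HW)
        bb = self.proj_b(x).flatten(2)  # (B, C, HW)
        cc = self.proj_c(x).flatten(2)  # (B, C, HW)
        g = torch.softmax(a @ bb.transpose(1, 2) / (h * w) ** 0.5, dim=-1)  # global map G, (B, C, C)
        out = (g @ cc).view(b, c, h, w)  # second-order attention features
        return out + x                   # residual connection (assumed)
```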
A long-sequence radar image prediction system integrating multi-source data comprises:
the system comprises a radar data encoding module, a satellite data fusion module, a terrain data fusion module and a time window prediction module;
the radar data encoding module likewise modifies the radar echo prediction training model so that it pays attention to long-term motion trends, and uses three encoders to encode the radar image data;
the satellite data fusion module takes radar echo images as original basic data, introduces satellite cloud image data, encodes the satellite cloud image data by using a convolution gating memory unit ConvGRU, and fuses the satellite cloud image data and the radar data through a gating structure;
the terrain data fusion module introduces terrain elevation data, extracts feature vectors from the terrain data by using ResNet, fuses the features with radar image data in the channel dimension, and combines spatial information with terrain related data;
the time window prediction module fuses the related information of the radar image and the topographic data with each other to form a richer representation; the fused features are passed to a time window decoder to predict future radar echo images.
An electronic device comprising a memory storing a computer program and a processor implementing the steps of the above method when the processor executes the computer program.
A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the above method.
The memory in embodiments of the present application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The long-sequence radar image prediction method and system fusing multi-source data provided by the invention have been described in detail above. The description of the embodiments is intended only to help understand the method of the invention and its core idea. Meanwhile, those skilled in the art may vary the specific embodiments and the application scope in accordance with the idea of the invention; in view of the above, the content of this description should not be construed as limiting the invention.

Claims (7)

1. A long-sequence radar image prediction method integrating multi-source data is characterized in that:
the method specifically comprises the following steps:
step 1, modifying a radar echo prediction training model to pay more attention to long-term motion trend; and three encoders are used for encoding radar image data;
in step 1,
the input of the radar echo prediction training model is a series of radar echo data frames, which are respectively input into a space encoder, a dynamic matching encoder and a dynamic environment encoder;
step 1.1, processing the historical data of the radar, selecting the data of the previous 5 frames, and sequentially inputting the data into the space encoder; the convolution long-short-time memory module ConvLSTM receives the extracted space features and obtains, according to time steps, the output state H_t and the cell state C_t;
step 1.2, simultaneously calculating the differential sequence of the radar echo images, inputting it into the dynamic matching encoder, and obtaining through the Memory alignment module a memory vector F_m covering the global motion context;
step 1.3, because the cell state C_t covers the historical information of the input sequence from its start, using, in the dynamic environment encoder, a channel attention mechanism together with the cell state C_t obtained from ConvLSTM to refine the global motion context memory vector F_m, so as to embed the required motion context at the current step; the refined memory F̃_m is connected with the output state H_t and constructed as a feature vector embedded with long-term context, which is accepted by the decoder for output; the output frame is used as a new input to further obtain longer outputs;
F̃_m = Attention(Q, K, V) = softmax(QK^T/√d_k)·V; Q = C_t; K = F_m; V = F_m;
Q is the query vector, K is the key vector, d_k is the dimension of K, and V is the value vector;
step 2: on the basis of radar image data, satellite cloud image data is introduced to be encoded by using a convolution gating memory unit ConvGRU, and the satellite cloud image data and the radar data are fused through a gating structure;
step 3: further introducing topographic data, extracting characteristic information of the topographic data by using ResNet, and then fusing the characteristic information of the topographic data with radar image data in the channel dimension;
step 4: integrating the extracted features through a feature fusion module; the process fuses the related information of the radar image and the topographic data with each other; finally, the fused features are passed to a time window decoder to generate future radar echo images.
2. The method according to claim 1, wherein: in step 1, specifically, the method includes:
the Memory alignment module Memory is trained on all available training data, covering various motion context modes, and captures and records the motion context information; the memory vectors corresponding to the various motion context modes are regarded as the model's effective extraction of the overall historical information, helping the system understand long-term dependencies in the input sequence;
in the training phase, all frames are used to calculate a differential sequence, the motion context of the long-term sequence is extracted by another long-term motion context encoder, and the context is stored during training; another round of training is performed on the short sequences formed from the input historical data, during which gradient back-propagation is no longer performed on the Memory alignment module Memory, which is only queried for prediction; the Memory alignment module Memory can finally extract the motion context for use in subsequent sequence prediction.
3. The method according to claim 2, characterized in that: in step 2,
step 2.1, processing the satellite cloud image data: an independent convolutional recurrent neural network ConvRNN encodes the satellite cloud image data to obtain its hidden state; a lighter-to-train convolution gating memory unit ConvGRU performs the encoding, obtaining at step t the satellite hidden state h_s(t), which is then fused with the radar hidden state h_r(t) to obtain the fused hidden state h_f(t);
step 2.2, fusing the radar and satellite cloud image data using a gating structure, with the specific calculation:
i_t = σ(W_ri * h_r(t) + W_si * h_s(t) + b_i)
g_t = σ(W_rg * h_r(t) + W_sg * h_s(t) + b_g)
h̃_t = tanh(W_rx * h_r(t) + g_t ⊙ (W_sx * h_s(t)) + b_x)
h_f(t) = (1 − i_t) ⊙ h_r(t) + i_t ⊙ h̃_t
where i_t is the update matrix, W_ri the radar update weight matrix, h_r(t) the radar hidden state, W_si the satellite update weight matrix, h_s(t) the satellite hidden state, b_i the update offset matrix, g_t the reset matrix, W_rg the radar reset weight matrix, W_sg the satellite reset weight matrix, b_g the reset offset matrix, h̃_t the candidate hidden state, W_rx the radar hidden-state weight matrix, W_sx the satellite hidden-state weight matrix, and b_x the candidate hidden-state offset matrix.
4. A method according to claim 3, characterized in that: in step 3,
step 3.1, because the terrain features remain unchanged over time, outputting the terrain data to a convolutional network with residual modules; after ResNet processing, the terrain data is converted into three feature maps of sizes 128×16×16, 64×64×32 and 32×32×64 respectively;
step 3.2, fusing the radar information with the terrain information d(t) in the channel dimension, the two coming respectively from the motion-state memory extraction and the ResNet extraction of the previous steps; a 1×1 convolution layer restores the channels to keep the data consistent;
step 3.3, fusing using a spatial attention mechanism, the terrain fusion adopting a dual attention network: first, the input is dimension-constrained and split into three parts A, B and C, obtained by 1×1 convolutional neural networks and bilinear pooling; A and B are multiplied point-wise to obtain a feature map G, which aggregates the global attention features; to improve the fusion effect while taking both global and local feature attention into account, a second-order attention feature is further calculated by point-wise multiplying the feature map G, after feature dispersion and dimension expansion, with the vector C.
5. A long-sequence radar image prediction system integrating multi-source data is characterized in that:
the system comprises a radar data encoding module, a satellite data fusion module, a terrain data fusion module and a time window prediction module;
the radar data encoding module modifies the radar echo prediction training model so that it pays more attention to long-term motion trends, and uses three encoders to encode the radar image data;
the input of the radar echo prediction training model is a series of radar echo data frames, which are respectively input into a space encoder, a dynamic matching encoder and a dynamic environment encoder;
the radar data encoding module first processes the historical data of the radar, selecting the data of the previous 5 frames and sequentially inputting the data into the space encoder; the convolution long-short-time memory module ConvLSTM receives the extracted space features and obtains, according to time steps, the output state H_t and the cell state C_t;
it then simultaneously calculates the differential sequence of the radar echo images, inputs it into the dynamic matching encoder, and obtains through the Memory alignment module a memory vector F_m covering the global motion context;
finally, because the cell state C_t covers the historical information of the input sequence from its start, the dynamic environment encoder uses a channel attention mechanism together with the cell state C_t obtained from ConvLSTM to refine the global motion context memory vector F_m, so as to embed the required motion context at the current step; the refined memory F̃_m is connected with the output state H_t and constructed as a feature vector embedded with long-term context, which is accepted by the decoder for output; the output frame is used as a new input to further obtain longer outputs;
F̃_m = Attention(Q, K, V) = softmax(QK^T/√d_k)·V; Q = C_t; K = F_m; V = F_m;
Q is the query vector, K is the key vector, d_k is the dimension of K, and V is the value vector;
the satellite data fusion module takes radar echo images as original basic data, introduces satellite cloud image data, encodes the satellite cloud image data by using a convolution gating memory unit ConvGRU, and fuses the satellite cloud image data and the radar data through a gating structure;
the topographic data fusion module introduces topographic data, extracts characteristic information of the topographic data by using ResNet, and then fuses the characteristic information of the topographic data with radar image data in the channel dimension;
the time window prediction module fuses the related information of the radar image and the topographic data with each other; the fused features are passed to a time window decoder to predict future radar echo images.
6. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 4 when the computer program is executed.
7. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 4.
CN202311677654.3A 2023-12-08 2023-12-08 Multi-source data fusion long-sequence radar image prediction method and system Active CN117368881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311677654.3A CN117368881B (en) 2023-12-08 2023-12-08 Multi-source data fusion long-sequence radar image prediction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311677654.3A CN117368881B (en) 2023-12-08 2023-12-08 Multi-source data fusion long-sequence radar image prediction method and system

Publications (2)

Publication Number Publication Date
CN117368881A CN117368881A (en) 2024-01-09
CN117368881B true CN117368881B (en) 2024-03-26

Family

ID=89395074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311677654.3A Active CN117368881B (en) 2023-12-08 2023-12-08 Multi-source data fusion long-sequence radar image prediction method and system

Country Status (1)

Country Link
CN (1) CN117368881B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117647855B (en) * 2024-01-29 2024-04-16 南京信息工程大学 Short-term precipitation prediction method, device and equipment based on sequence length

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998026306A1 (en) * 1996-12-09 1998-06-18 Miller Richard L 3-d weather display and weathercast system
CN111158068A (en) * 2019-12-31 2020-05-15 哈尔滨工业大学(深圳) Short-term prediction method and system based on simple convolutional recurrent neural network
US10996374B1 (en) * 2017-04-11 2021-05-04 DataInfoCom USA, Inc. Short-term weather forecasting using artificial intelligence and hybrid data
CN112764129A (en) * 2021-01-22 2021-05-07 易天气(北京)科技有限公司 Method, system and terminal for thunderstorm short-term forecasting
US11222217B1 (en) * 2020-08-14 2022-01-11 Tsinghua University Detection method using fusion network based on attention mechanism, and terminal device
CN114139690A (en) * 2021-12-09 2022-03-04 南京邮电大学 Short-term rainfall prediction method and device
CN114460555A (en) * 2022-04-08 2022-05-10 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar echo extrapolation method and device and storage medium
CN114764602A (en) * 2022-05-07 2022-07-19 东南大学 Short-term rainfall prediction method based on space-time attention and data fusion
CN114943365A (en) * 2022-04-11 2022-08-26 哈尔滨工业大学(深圳) Rainfall estimation model establishing method fusing multi-source data and rainfall estimation method
CN115016042A (en) * 2022-06-06 2022-09-06 湖南师范大学 Precipitation prediction method and system based on multi-encoder fusion radar and precipitation information
CN115902806A (en) * 2022-12-01 2023-04-04 中国人民解放军国防科技大学 Multi-mode-based radar echo extrapolation method
CN116148864A (en) * 2023-02-28 2023-05-23 杭州电子科技大学 Radar echo extrapolation method based on DyConvGRU and Unet prediction refinement structure
CN116227349A (en) * 2023-03-01 2023-06-06 哈尔滨工业大学 Short-term precipitation prediction method, device and equipment based on multi-mode RNN
CN116244662A (en) * 2023-02-24 2023-06-09 中山大学 Multisource elevation data fusion method, multisource elevation data fusion device, computer equipment and medium
CN116451881A (en) * 2023-06-16 2023-07-18 南京信息工程大学 Short-time precipitation prediction method based on MSF-Net network model
CN116720156A (en) * 2023-06-25 2023-09-08 北京邮电大学 Weather element forecasting method based on graph neural network multi-mode weather data fusion
CN116912711A (en) * 2023-07-31 2023-10-20 南京信息工程大学 Satellite cloud image prediction method based on space-time attention gate

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230368500A1 (en) * 2022-05-11 2023-11-16 Huaneng Lancang River Hydropower Inc Time-series image description method for dam defects based on local self-attention

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998026306A1 (en) * 1996-12-09 1998-06-18 Miller Richard L 3-d weather display and weathercast system
US10996374B1 (en) * 2017-04-11 2021-05-04 DataInfoCom USA, Inc. Short-term weather forecasting using artificial intelligence and hybrid data
CN111158068A (en) * 2019-12-31 2020-05-15 哈尔滨工业大学(深圳) Short-term prediction method and system based on simple convolutional recurrent neural network
US11222217B1 (en) * 2020-08-14 2022-01-11 Tsinghua University Detection method using fusion network based on attention mechanism, and terminal device
CN112764129A (en) * 2021-01-22 2021-05-07 易天气(北京)科技有限公司 Method, system and terminal for thunderstorm short-term forecasting
CN114139690A (en) * 2021-12-09 2022-03-04 南京邮电大学 Short-term rainfall prediction method and device
WO2023103587A1 (en) * 2021-12-09 2023-06-15 南京邮电大学 Imminent precipitation forecast method and apparatus
CN114460555A (en) * 2022-04-08 2022-05-10 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar echo extrapolation method and device and storage medium
CN114943365A (en) * 2022-04-11 2022-08-26 哈尔滨工业大学(深圳) Rainfall estimation model establishing method fusing multi-source data and rainfall estimation method
CN114764602A (en) * 2022-05-07 2022-07-19 东南大学 Short-term rainfall prediction method based on space-time attention and data fusion
CN115016042A (en) * 2022-06-06 2022-09-06 湖南师范大学 Precipitation prediction method and system based on multi-encoder fusion radar and precipitation information
CN115902806A (en) * 2022-12-01 2023-04-04 中国人民解放军国防科技大学 Multi-mode-based radar echo extrapolation method
CN116244662A (en) * 2023-02-24 2023-06-09 中山大学 Multisource elevation data fusion method, multisource elevation data fusion device, computer equipment and medium
CN116148864A (en) * 2023-02-28 2023-05-23 杭州电子科技大学 Radar echo extrapolation method based on DyConvGRU and Unet prediction refinement structure
CN116227349A (en) * 2023-03-01 2023-06-06 哈尔滨工业大学 Short-term precipitation prediction method, device and equipment based on multi-mode RNN
CN116451881A (en) * 2023-06-16 2023-07-18 南京信息工程大学 Short-time precipitation prediction method based on MSF-Net network model
CN116720156A (en) * 2023-06-25 2023-09-08 北京邮电大学 Weather element forecasting method based on graph neural network multi-mode weather data fusion
CN116912711A (en) * 2023-07-31 2023-10-20 南京信息工程大学 Satellite cloud image prediction method based on space-time attention gate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on long-sequence radar image extrapolation and prediction algorithms for nowcasting; 周盈利 (Zhou Yingli); Wanfang dissertation; Abstract, Sections 1.2, 1.4, 2.3-4.3, Figures 2-5, 2-6, 4-1, 4-2 *

Also Published As

Publication number Publication date
CN117368881A (en) 2024-01-09

Similar Documents

Publication Publication Date Title
CN117368881B (en) Multi-source data fusion long-sequence radar image prediction method and system
CN112418409B (en) Improved convolution long-short-term memory network space-time sequence prediction method by using attention mechanism
CN110929092B (en) Multi-event video description method based on dynamic attention mechanism
CN113034380A (en) Video space-time super-resolution method and device based on improved deformable convolution correction
CN114372116A (en) Vehicle track prediction method based on LSTM and space-time attention mechanism
CN114460555B (en) Radar echo extrapolation method and device and storage medium
CN107748942A (en) Radar Echo Extrapolation Forecasting Methodology and system based on velocity field sensing network
CN115002379B (en) Video frame inserting method, training device, electronic equipment and storage medium
CN117575111B (en) Agricultural remote sensing image space-time sequence prediction method based on transfer learning
CN114943365A (en) Rainfall estimation model establishing method fusing multi-source data and rainfall estimation method
Gao et al. Generalized image outpainting with U-transformer
CN116309705B (en) Satellite video single-target tracking method and system based on feature interaction
CN116758104B (en) Multi-instance portrait matting method based on improved GCNet
Zhan et al. DSNet: Joint learning for scene segmentation and disparity estimation
CN116229106A (en) Video significance prediction method based on double-U structure
CN116863347A (en) High-efficiency and high-precision remote sensing image semantic segmentation method and application
CN117665825A (en) Radar echo extrapolation prediction method, system and storage medium
CN113962460B (en) Urban fine granularity flow prediction method and system based on space-time comparison self-supervision
Pan et al. Taylor saves for later: Disentanglement for video prediction using Taylor representation
Yang et al. Video diffusion models with local-global context guidance
Song et al. Contextavo: Local context guided and refining poses for deep visual odometry
CN113947235A (en) Photovoltaic power generation prediction hybrid method based on deep all-day image learning
CN110532868B (en) Method for predicting free space semantic boundary
CN115034478B (en) Traffic flow prediction method based on field self-adaption and knowledge migration
CN116402874A (en) Spacecraft depth complementing method based on time sequence optical image and laser radar data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant