CN114460555B - Radar echo extrapolation method and device and storage medium - Google Patents


Info

Publication number
CN114460555B
Authority
CN (China)
Prior art keywords
image, radar echo, historical, map, satellite cloud
Legal status
Active
Application number
CN202210363210.1A
Other languages
Chinese (zh)
Other versions
CN114460555A (en)
Inventors
周盈利, 李旭涛, 姜昊, 叶允明
Assignee
Harbin Institute of Technology (Shenzhen); Shenzhen Institute of Science and Technology Innovation, Harbin Institute of Technology
Events
Application filed by Harbin Institute of Technology (Shenzhen), Shenzhen Institute of Science and Technology Innovation
Priority to CN202210363210.1A
Publication of CN114460555A
Application granted
Publication of CN114460555B
Anticipated expiration


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/95Radar or analogous systems specially adapted for specific applications for meteorological use
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/002Image coding using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a radar echo extrapolation method, device, and storage medium. The method comprises: acquiring a current radar echo map and a current satellite cloud map of a specified area at the current moment, together with historical radar echo maps and historical satellite cloud maps of a past period; encoding the current radar echo map and the historical radar echo maps with a first IIA-GRU encoder to obtain a first encoded image; encoding the current satellite cloud map and the historical satellite cloud maps with a second IIA-GRU encoder to obtain a second encoded image; and fusing the first encoded image and the second encoded image based on a fusion gating mechanism, then decoding the fusion result with an IIA-GRU decoder to obtain radar echo maps of the specified area for a future period. The technical scheme of the invention improves the prediction accuracy of long-sequence radar echoes.

Description

Radar echo extrapolation method and device and storage medium
Technical Field
The invention relates to the technical field of meteorological prediction, and in particular to a radar echo extrapolation method, device, and storage medium.
Background
Short-term rainfall prediction, that is, predicting the rainfall distribution of a specified area over a certain future period, is an important research topic in the field of meteorological prediction, and its accuracy is significant for agriculture, transportation, the military, and many other industries. A radar echo image is formed from the electromagnetic waves emitted by a radar and reflected by water particles in the air; the reflectivity received by the radar reflects, to a certain extent, the density of water particles in the area.
Currently, the following methods are commonly used for radar echo extrapolation. One method constructs the motion field of the echo between adjacent moments by an optical flow or cross-correlation method based on radar echo images, extrapolates the radar echo at the next moment from that motion field, and thereby predicts the rainfall at the next moment. This method predicts from radar echo images at adjacent moments and rests on the assumption that the motion of the radar echo is constant; however, meteorological conditions are complex and changeable, so its prediction accuracy is low.
The other method extrapolates radar echoes with machine learning models such as ConvLSTM. It predicts the radar echo at the current moment from the radar echo at the previous moment; because of error propagation, the predicted radar echo gradually blurs as time increases, so the accuracy of long-sequence radar echo prediction gradually decreases.
Disclosure of Invention
The invention addresses the problem of how to improve the prediction accuracy of long-sequence radar echoes.
To solve this problem, the present invention provides a radar echo extrapolation method, device, and storage medium.
In a first aspect, the present invention provides a method for radar echo extrapolation, including:
acquiring a current radar echo map and a current satellite cloud map of a specified area at the current moment, and a historical radar echo map and a historical satellite cloud map of a past period;
encoding the current radar echo map and the historical radar echo map by adopting a first IIA-GRU encoder to obtain a first encoded image; encoding the current satellite cloud map and the historical satellite cloud map by adopting a second IIA-GRU encoder to obtain a second encoded image; the encoding processing comprises: extracting features of a current image to obtain a hidden layer feature image; splicing the hidden layer feature image and the corresponding historical image based on an interaction attention mechanism to obtain a spliced image; and performing spatial information fusion on the spliced image based on a bidirectional attention information extraction mechanism to obtain the encoded image; the current image comprises the current radar echo map and the current satellite cloud map, and the historical image comprises the historical radar echo map and the historical satellite cloud map;
and fusing the first encoded image and the second encoded image based on a gating mechanism, and decoding the fusion result by adopting an IIA-GRU decoder to obtain radar echo maps of the specified area for a future period.
Optionally, before the feature extraction of the current image, the method further includes:
and carrying out down-sampling on the current image and the historical image to obtain the current image after down-sampling processing and the historical image after down-sampling processing.
Optionally, the stitching the hidden layer feature image and the corresponding historical image based on the interactive attention mechanism to obtain a stitched image includes:
based on an interaction attention mechanism, carrying out time dimension interaction on the hidden layer feature image at the current moment and the historical image in a past period of time to obtain a processed image;
and splicing the processed image and the hidden layer characteristic image to obtain the spliced image.
Optionally, the performing spatial information fusion on the stitched image based on a bidirectional attention information extraction mechanism to obtain the encoded image includes:
determining a first weight of each channel of the spliced image by adopting a maximum pooling layer and an average pooling layer, and carrying out weighted summation on each channel according to the first weight to obtain a first weight image;
determining second weights of different positions in the spliced image based on a space self-attention mechanism, and performing weighted summation on the positions according to the second weights to obtain a second weight image;
and performing information Fusion on the first weight image and the second weight image by using Sum Fusion operation to obtain the coded image.
Optionally, the fusing the first encoded image and the second encoded image based on a gating mechanism comprises:
based on a forgetting gate mechanism, fusing the first encoded image and the second encoded image by adopting a first formula to obtain the fusion result, wherein the first formula comprises:

$$z_t = \sigma\left(W_z * R_t + U_z * S_t\right)$$

$$r_t = \sigma\left(W_r * R_t + U_r * S_t\right)$$

$$\tilde{h}_t = \tanh\left(W_h * R_t + U_h * \left(r_t \odot S_t\right)\right)$$

$$H_t = z_t \odot \tilde{h}_t + \left(1 - z_t\right) \odot S_t$$

where $z_t$ denotes the output of the update gate, $t$ denotes the time step, $\sigma$ denotes the sigmoid activation function, $W_z$ and $U_z$ denote the weight parameters of the update gate, $R_t$ denotes the first encoded image, $S_t$ denotes the second encoded image, $r_t$ denotes the output of the forgetting gate, $W_r$ and $U_r$ denote the weight parameters of the forgetting gate, $\tilde{h}_t$ denotes the candidate hidden state, $\tanh$ denotes the tanh activation function, $W_h$ and $U_h$ denote the weight parameters of the candidate hidden state, $*$ denotes convolution, $\odot$ denotes the Hadamard product, and $H_t$ denotes the fusion result.
Optionally, before the encoding processing is performed on the current radar echo map and the historical radar echo map by using the first IIA-GRU encoder, the method further includes:
acquiring a historical radar echo map sequence and a historical satellite cloud map sequence of the specified area in a past period of time;
performing time matching and space matching on the historical radar echo map sequence and the historical satellite cloud map sequence to obtain a processed radar echo map sequence and a processed satellite cloud map sequence;
and training a model to be trained by adopting the processed radar echo diagram sequence and the processed satellite cloud diagram sequence to obtain a trained model, wherein the model comprises the first IIA-GRU encoder, the second IIA-GRU encoder and the IIA-GRU decoder.
Optionally, the time matching and the space matching the historical radar echo map sequence and the historical satellite cloud map sequence include:
unifying time zones of the historical radar echo diagram sequence and the historical satellite cloud diagram sequence to obtain a unified radar echo diagram sequence and a unified satellite cloud diagram sequence;
selecting a plurality of samples from the unified radar echo map sequence and the unified satellite cloud map sequence, wherein the samples comprise a plurality of radar echo maps at different moments and corresponding satellite cloud maps;
and carrying out interpolation processing on the satellite cloud pictures in the samples, and unifying the space areas of the satellite cloud pictures and the radar echo pictures.
Optionally, the interpolating the satellite cloud images in the sample, and unifying the spatial areas of the satellite cloud images and the radar echo image includes:
for each pixel point in the radar echo map, calculating the pixel value of the pixel point according to the longitude and latitude of a plurality of nearest neighbor pixel points of the pixel point based on an area weighting calculation method;
and taking the pixel value as satellite data of each pixel point to generate the satellite cloud picture matched with the space area of the radar echo picture.
Optionally, the area weighting calculation method is represented by a second formula, wherein the second formula comprises:

$$f(P) = \sum_{i=1}^{4} w_i\, f\left(S_i\right), \qquad w_i = \frac{\left|\mathrm{lat}\left(S_{i'}\right) - \mathrm{lat}(P)\right| \cdot \left|\mathrm{lon}\left(S_{i'}\right) - \mathrm{lon}(P)\right|}{\sum_{j=1}^{4} \left|\mathrm{lat}\left(S_{j'}\right) - \mathrm{lat}(P)\right| \cdot \left|\mathrm{lon}\left(S_{j'}\right) - \mathrm{lon}(P)\right|}$$

where $f(P)$ denotes the pixel value of the pixel point $P$, $S_i$ denotes the $i$-th nearest-neighbor pixel point of the pixel point $P$, $S_{i'}$ denotes the nearest neighbor diagonally opposite to $S_i$ with respect to $P$, $\mathrm{lat}(S_i)$ denotes the latitude of the nearest-neighbor pixel point $S_i$, $\mathrm{lon}(S_i)$ denotes the longitude of the nearest-neighbor pixel point $S_i$, $\mathrm{lat}(P)$ denotes the latitude of the pixel point $P$, and $\mathrm{lon}(P)$ denotes the longitude of the pixel point $P$; that is, each nearest neighbor is weighted by the normalized area of the sub-rectangle spanned by $P$ and the neighbor opposite to it.
In a second aspect, the present invention provides a radar echo extrapolation apparatus, including:
the acquisition module is used for acquiring a current radar echo map and a current satellite cloud map of a specified area at the current moment, and a historical radar echo map and a historical satellite cloud map of a past period;
the encoding module is used for encoding the current radar echo map and the historical radar echo map by adopting a first IIA-GRU encoder to obtain a first encoded image; coding the current satellite cloud picture and the historical satellite cloud picture by adopting a second IIA-GRU encoder to obtain a second coded image; the coding processing comprises the steps of extracting the characteristics of a current image to obtain a hidden layer characteristic image; based on an interaction attention mechanism, splicing the hidden layer characteristic image and the corresponding historical image to obtain a spliced image; based on a bidirectional attention information extraction mechanism, carrying out spatial information fusion on the spliced image to obtain the coded image; the current image comprises the current radar echo map and the current satellite cloud map, and the historical image comprises the historical radar echo map and the historical satellite cloud map;
and the prediction module is used for fusing the first encoded image and the second encoded image based on a gating mechanism, and decoding the fusion result by adopting an IIA-GRU decoder to obtain radar echo maps of the specified area for a future period.
In a third aspect, the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the radar echo extrapolation method according to any one of the first aspect.
The radar echo extrapolation method, device, and storage medium according to the present invention have the following beneficial effects: a current image and historical images of the specified area are acquired, where the current image comprises the current radar echo map and the current satellite cloud map and the historical images comprise the historical radar echo maps and the historical satellite cloud maps; the current image and the corresponding historical images are each encoded by a pre-trained encoder. Splicing the hidden layer feature image with the historical images based on the interaction attention mechanism extracts temporal features and learns the motion trend of the radar echo or satellite cloud over a long sequence; performing spatial information fusion on the spliced image based on the bidirectional attention information extraction mechanism extracts the spatial features in the image and fuses the spatio-temporal features. Compared with the prior art, which predicts from the radar echo image at a single moment, learning the motion trend of the radar echo or satellite cloud and combining it with the spatial features avoids error accumulation and improves the prediction accuracy of long-sequence radar echo maps. The first encoded image obtained by encoding the radar echo maps and the second encoded image obtained by encoding the satellite cloud maps are then fused based on the gating mechanism, and the fusion result is decoded by the trained decoder, with the radar echo maps serving as the main basis and the satellite cloud maps as the auxiliary basis of radar echo extrapolation, to predict the radar echo maps of a future period. Compared with the prior art, which performs radar echo extrapolation with the radar echo maps alone, the gating mechanism makes full use of the features in the radar echo maps and satellite cloud maps that are closely related to the meteorological data, improving the prediction accuracy of long-sequence radar echoes.
Drawings
FIG. 1 is a schematic flow chart of a method for extrapolating a radar echo according to an embodiment of the present invention;
FIG. 2 is a logic diagram illustrating stitching of a hidden layer feature image and a historical image based on an interaction attention mechanism according to an embodiment of the present invention;
FIG. 3 is a logic diagram of spatial information fusion for a spliced image based on a bidirectional attention information extraction mechanism according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a nearest neighbor pixel point according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a radar echo extrapolation apparatus according to another embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. While certain embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present invention. It should be understood that the drawings and the embodiments of the present invention are illustrative only and are not intended to limit the scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments"; the term "optionally" means "alternative embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present invention are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in the present invention are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that reference to "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present invention are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
As shown in fig. 1, a method for extrapolating a radar echo according to an embodiment of the present invention includes:
and step S110, acquiring a current radar echo map and a current satellite cloud map of the current time of the designated area, and a historical radar echo map and a historical satellite cloud map of a past period of time.
Specifically, radar echo maps and satellite cloud maps of the specified area over a certain period up to the current time are acquired; for example, the radar echo maps and satellite cloud maps of the specified area within the past 30 minutes are acquired, or the radar echo map of the current time is acquired and radar echo maps and satellite cloud maps at multiple times are arbitrarily selected from those of the past 30 minutes. The radar echo map can be a radar gray-scale map.
Step S120, a first IIA-GRU (a recurrent convolutional neural network model based on information interaction attention) encoder is adopted to encode the current radar echo map and the historical radar echo map to obtain a first encoded image; and a second IIA-GRU encoder is adopted to encode the current satellite cloud map and the historical satellite cloud map to obtain a second encoded image.
Optionally, the encoding the current radar echo map and the historical radar echo map by using a first IIA-GRU encoder includes: extracting features of a current radar echo map to obtain a first hidden layer feature image; splicing the first hidden layer characteristic image and the historical radar echo map based on an interaction attention mechanism to obtain a first spliced image; and performing spatial information fusion on the first spliced image based on a bidirectional attention information extraction mechanism to obtain a first coded image.
The encoding processing of the current satellite cloud picture and the historical satellite cloud picture by adopting a second IIA-GRU encoder comprises the following steps: extracting the features of the current satellite cloud picture to obtain a second hidden layer feature image; splicing the second hidden layer characteristic image and the historical satellite cloud image based on an interaction attention mechanism to obtain a second spliced image; and performing spatial information fusion on the second spliced image based on a bidirectional attention information extraction mechanism to obtain a second coded image.
Specifically, the current image may be input to a recurrent convolutional neural network, and hidden layer state information is extracted through a hidden layer to obtain the hidden layer feature image. Encoding the radar echo maps and the satellite cloud maps with the IIA-GRU encoders lets the model learn the dependencies within a long sequence of radar echo maps and within a long sequence of satellite cloud maps, so the motion trends of both can be learned accurately; this avoids the error accumulation caused by predicting from the radar echo map at a single moment and improves the prediction accuracy. Meanwhile, the bidirectional attention information extraction mechanism accurately extracts the spatial features in the image, which improves the sharpness and accuracy of the predicted radar echo maps.
And S130, fusing the first coded image and the second coded image based on a gating mechanism, and decoding a fusion result by adopting a IIA-GRU decoder to obtain a radar echo image in a future period of time of the specified area.
Specifically, the laws of atmospheric motion are very complex and difficult to model. A radar echo map is single-channel image data, and the information it provides alone cannot model atmospheric motion well. Unlike radar data, a satellite cloud map is multi-channel data: where the radar echo map displays two-dimensional information, the satellite cloud map displays three-dimensional atmospheric information, and each channel reflects specific atmospheric information; for example, infrared channel data can be used to calculate quantities such as cloud-top temperature and pressure. Introducing satellite cloud maps therefore models transient atmospheric changes better; fusing the first encoded image and the second encoded image at corresponding moments lets the satellite cloud maps assist the prediction of the radar echo maps of the specified area, which improves the accuracy of radar echo map prediction.
In this embodiment, a current image and historical images of the designated area are acquired, where the current image comprises the current radar echo map and the current satellite cloud map and the historical images comprise the historical radar echo maps and the historical satellite cloud maps; the current image and the corresponding historical images are each encoded by a pre-trained encoder. Splicing the hidden layer feature image with the historical images based on the interaction attention mechanism extracts temporal features and learns the motion trend of the radar echo or satellite cloud over a long sequence; performing spatial information fusion on the spliced image based on the bidirectional attention information extraction mechanism extracts the spatial features in the image and fuses the spatio-temporal features. Compared with the prior art, which predicts from the radar echo image at a single moment, learning the motion trend of the radar echo or satellite cloud and combining it with the spatial features avoids error accumulation and improves the prediction accuracy of long-sequence radar echo maps. The first encoded image obtained by encoding the radar echo maps and the second encoded image obtained by encoding the satellite cloud maps are then fused based on the gating mechanism, and the fusion result is decoded by the trained decoder, with the radar echo maps serving as the main basis and the satellite cloud maps as the auxiliary basis of radar echo extrapolation, to predict the radar echo maps of a future period. Compared with the prior art, which performs radar echo extrapolation with the radar echo maps alone, the gating mechanism makes full use of the features in the radar echo maps and satellite cloud maps that are closely related to the meteorological data, improving the prediction accuracy of long-sequence radar echoes.
Optionally, before the feature extraction of the current image, the method further includes:
and downsampling the current image and the historical image to obtain the downsampled current image and the downsampled historical image.
In this optional embodiment, down-sampling processing is performed on the current image and the historical image, and the processed current image and historical image are used as input data for encoding processing, so that the field of view of the image is enlarged, the size of the image can be reduced, and the image processing speed of the model is increased.
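As a minimal sketch of such a down-sampling stage (assuming a PyTorch implementation, which the patent does not specify; the channel counts are illustrative), a strided convolution halves the spatial size while enlarging the receptive field:

```python
import torch
import torch.nn as nn

# Hypothetical down-sampling stage: a stride-2 convolution halves the spatial
# size while expanding the channel dimension, enlarging the receptive field.
downsample = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, stride=2, padding=1),
    nn.LeakyReLU(0.2),
)

radar_frame = torch.randn(1, 1, 700, 900)   # single-channel radar echo map
print(downsample(radar_frame).shape)        # torch.Size([1, 16, 350, 450])
```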
Optionally, the stitching the hidden layer feature image and the corresponding historical image based on the interactive attention mechanism to obtain a stitched image includes:
and carrying out time dimension interaction on the hidden layer feature image at the current moment and the historical image in a past period of time based on an interaction attention mechanism to obtain a processed image.
Specifically, as shown in Fig. 2, the interaction process can be expressed by the following formula:

$$\hat{H}_{t-1} = \mathrm{DA}\left(H_{t-1},\, H_{0:t-2}\right)$$

where $\hat{H}_{t-1}$ denotes the processed image, $H_{t-1}$ denotes the hidden layer feature image at the current moment (time $t-1$), $H_{0:t-2}$ denotes the historical images of the past period (times 0 to $t-2$), and $\mathrm{DA}(\cdot)$ denotes the bidirectional attention interaction mechanism labeled DA in the figure.
And splicing the processed image and the hidden layer characteristic image to obtain the spliced image.
Specifically, the spliced image can also serve as the input of the recurrent convolutional neural network at the next time step; the recurrent convolutional neural network is shown in the solid box on the right side of Fig. 2.
In this optional embodiment, performing time-dimension interaction between the hidden layer feature image at the current moment and the historical images of a past period captures the influence of the motion trend of the past historical images on the current image, and splicing the processed image with the hidden layer feature image at the current moment merges that motion trend into the current hidden layer feature image. Compared with the prior art, which predicts from the radar echo map at a single moment, predicting the radar echo map in combination with the motion trend of the images avoids the loss of accuracy caused by error accumulation and improves the prediction accuracy of long-sequence radar echo maps.
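One way such a time-dimension interaction could be implemented is sketched below (a PyTorch reading of Fig. 2; the class and all names are illustrative assumptions, not the patent's verified implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractionAttention(nn.Module):
    """Sketch of the time-dimension interaction: the current hidden feature
    attends over the historical frames, and the attended result is
    concatenated back onto the hidden feature to form the spliced image."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, hidden, history):
        # hidden:  (B, C, H, W)    hidden-layer feature at time t-1
        # history: (B, T, C, H, W) frames at times 0 .. t-2
        B, T, C, H, W = history.shape
        q = self.q(hidden).flatten(2)                      # (B, C, HW)
        hist = history.reshape(B * T, C, H, W)
        k = self.k(hist).reshape(B, T, C, -1)              # (B, T, C, HW)
        v = self.v(hist).reshape(B, T, C, -1)
        # attention over the T historical time steps, per spatial location
        attn = torch.einsum('bcn,btcn->btn', q, k) / (C ** 0.5)
        attn = F.softmax(attn, dim=1)                      # weights over time
        out = torch.einsum('btn,btcn->bcn', attn, v).reshape(B, C, H, W)
        return torch.cat([hidden, out], dim=1)             # spliced image
```

The softmax over the historical time steps decides how strongly each past frame influences the current hidden feature before the concatenation, which matches the idea of merging the motion trend into the current state.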
Optionally, as shown in fig. 3, the performing spatial information fusion on the stitched image based on the bidirectional attention information extraction mechanism to obtain the encoded image includes:
determining a first weight of each channel of the spliced image by adopting a maximum pooling layer and an average pooling layer, and performing weighted summation on each channel according to the first weight to obtain a first weight image;
determining second weights of different positions in the spliced image based on a space self-attention mechanism, and performing weighted summation on the positions according to the second weights to obtain a second weight image;
and performing information Fusion on the first weight image and the second weight image by Sum Fusion operation to obtain the coded image.
Specifically, the Sum Fusion operation sequentially applies a convolution and layer regularization to the image and then an ELU activation function; this is a known process in the prior art and is not described herein again.
In this optional embodiment, the maximum pooling layer and the average pooling layer determine the first weight of each channel within a single time frame, and the channels are weighted and summed according to the first weights, which raises the attention paid to channels containing radar echoes. The second weights of different positions in the spliced image are determined based on a spatial self-attention mechanism, and the positions are weighted and summed according to the second weights, which raises the attention paid to positions containing radar echoes. The Sum Fusion operation then fuses the first weight image and the second weight image, achieving accurate extraction and fusion of spatial features and further improving the sharpness and accuracy of the predicted radar echo maps.
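A compact sketch of the two attention branches and the Sum Fusion described above (assuming PyTorch; the module layout, projection sizes, and the GroupNorm stand-in for layer regularization are all assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalAttention(nn.Module):
    """Sketch of the two-branch spatial-information fusion; channels is
    assumed divisible by 8."""
    def __init__(self, channels: int):
        super().__init__()
        # channel branch: shared MLP over max- and average-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels),
        )
        # spatial branch: 1x1 projections for self-attention
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        # Sum Fusion: convolution, layer regularization, then ELU
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.GroupNorm(1, channels)   # layer-norm-like regularization

    def forward(self, x):                         # x: (B, C, H, W) spliced image
        B, C, H, W = x.shape
        # --- first weights: one weight per channel ---
        w_max = self.mlp(F.adaptive_max_pool2d(x, 1).flatten(1))
        w_avg = self.mlp(F.adaptive_avg_pool2d(x, 1).flatten(1))
        w1 = torch.sigmoid(w_max + w_avg).view(B, C, 1, 1)
        first = x * w1                            # first weight image
        # --- second weights: one weight per spatial position ---
        q = self.q(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
        k = self.k(x).flatten(2)                  # (B, C//8, HW)
        w2 = F.softmax(torch.bmm(q, k) / (C // 8) ** 0.5, dim=-1)
        v = self.v(x).flatten(2)                  # (B, C, HW)
        second = torch.bmm(v, w2.transpose(1, 2)).view(B, C, H, W)
        # --- Sum Fusion of the two branches ---
        return F.elu(self.norm(self.fuse(first + second)))
```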
It should be noted that the process of decoding the fusion result with the IIA-GRU decoder mirrors the encoding process, the difference being that up-sampling is used in decoding where down-sampling is used in encoding. When decoding to predict the radar echo maps, no historical image is needed for predicting the first image; when the second image is predicted, the first image serves as the historical image; when the third image is predicted, the first and second images serve as the historical images, and so on, which is not repeated herein.
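The decoding loop itself can be sketched as follows (the `decoder` signature is hypothetical; it stands for one IIA-GRU decoder step taking the fused state and the stack of previously predicted frames):

```python
import torch

def decode_future(decoder, fused_state, steps: int = 10):
    """Sketch of the autoregressive decoding loop described above: the first
    frame is predicted without history; afterwards every predicted frame is
    appended to the history used by the interaction attention."""
    history, frames = [], []
    for _ in range(steps):
        hist = torch.stack(history, dim=1) if history else None
        frame = decoder(fused_state, hist)    # hypothetical signature
        frames.append(frame)
        history.append(frame)                 # predicted frame becomes history
    return torch.stack(frames, dim=1)         # (B, steps, C, H, W)
```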
Optionally, the fusing the first encoded image and the second encoded image based on a gating mechanism comprises:
based on a forgetting gate mechanism, fusing the first encoded image and the second encoded image by adopting a first formula to obtain the fusion result, wherein the first formula comprises:

$$z_t = \sigma\left(W_z * R_t + U_z * S_t\right)$$

$$r_t = \sigma\left(W_r * R_t + U_r * S_t\right)$$

$$\tilde{h}_t = \tanh\left(W_h * R_t + U_h * \left(r_t \odot S_t\right)\right)$$

$$H_t = z_t \odot \tilde{h}_t + \left(1 - z_t\right) \odot S_t$$

where $z_t$ denotes the output of the update gate, $t$ denotes the time step, $\sigma$ denotes the sigmoid activation function, $W_z$ and $U_z$ denote the weight parameters of the update gate, $R_t$ denotes the first encoded image, $S_t$ denotes the second encoded image, $r_t$ denotes the output of the forgetting gate, $W_r$ and $U_r$ denote the weight parameters of the forgetting gate, $\tilde{h}_t$ denotes the candidate hidden state, $\tanh$ denotes the tanh activation function, $W_h$ and $U_h$ denote the weight parameters of the candidate hidden state, $*$ denotes convolution, $\odot$ denotes the Hadamard product, and $H_t$ denotes the fusion result.
In this optional embodiment, when radar echo extrapolation is performed to predict the radar echo maps of a future period, the historical radar echo maps are the main basis of prediction and the satellite cloud maps are the auxiliary basis. If the historical radar echo maps and the satellite cloud maps were directly spliced, the features closely related to prediction accuracy could not be fully exploited: features in the radar echo maps closely related to the future radar echo maps might be omitted, while features in the satellite cloud maps weakly correlated with the future radar echo maps would be added, lowering the prediction accuracy. This embodiment therefore fuses the two encoded images linearly through a forgetting gate mechanism; the gating mechanism determines the proportion in which the satellite cloud map is used, fully mining the features closely associated with the future radar echo maps, so that the satellite cloud maps better assist and supplement the historical radar echo maps and produce a better prediction.
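Put as code, the gated fusion could look like this (a convolutional GRU-style sketch consistent with the equations reconstructed above; packing each pair of weights into a single convolution over concatenated inputs is our choice, not the patent's stated layout):

```python
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Sketch of the forgetting-gate fusion of the first formula."""
    def __init__(self, channels: int):
        super().__init__()
        def conv():  # W and U packed into one convolution over [R_t, S_t]
            return nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.update = conv()       # W_z, U_z
        self.forget = conv()       # W_r, U_r
        self.candidate = conv()    # W_h, U_h

    def forward(self, radar_enc, sat_enc):
        # radar_enc: first encoded image R_t; sat_enc: second encoded image S_t
        rs = torch.cat([radar_enc, sat_enc], dim=1)
        z = torch.sigmoid(self.update(rs))                  # update gate
        r = torch.sigmoid(self.forget(rs))                  # forgetting gate
        h_tilde = torch.tanh(
            self.candidate(torch.cat([radar_enc, r * sat_enc], dim=1)))
        # z sets how much of the satellite branch is kept in the fusion
        return z * h_tilde + (1.0 - z) * sat_enc            # fusion result H_t
```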
Optionally, before the encoding processing is performed on the current radar echo map and the historical radar echo map by using the first IIA-GRU encoder, the method further includes:
and acquiring a historical radar echo map sequence and a historical satellite cloud map sequence of the specified area in the past period.
Illustratively, taking Guangdong province as the designated area, all historical radar echo maps of Guangdong province in 2018, 2019 and 2020 are collected; each historical radar echo map is 700 × 900 in size, and the 700 km × 900 km area covers the whole Guangdong province. The corresponding FY-4A nationwide satellite cloud maps are also collected. All the historical radar echo maps form the historical radar echo map sequence, and all the satellite cloud maps form the historical satellite cloud map sequence.
And performing time matching and space matching on the historical radar echo map sequence and the historical satellite cloud map sequence to obtain a processed radar echo map sequence and a processed satellite cloud map sequence.
Specifically, since the historical radar echo map sequence and the historical satellite cloud map sequence are not matched in time and space, time matching and space matching are required to be performed on the two data for subsequent processing.
And training a model to be trained by adopting the processed radar echo diagram sequence and the processed satellite cloud diagram sequence to obtain a trained model, wherein the model comprises the first IIA-GRU encoder, the second IIA-GRU encoder and the IIA-GRU decoder.
In the prior art, models such as ConvLSTM, ConvGRU and TrajGRU are often applied to prediction tasks. When these models are applied to radar echo extrapolation, the predicted radar echo data becomes blurry as the forecast time grows, so they are unsuitable for long-sequence radar echo image prediction; in particular, in the prediction of high-echo (heavy rainfall) regions, they fail to predict high-echo data consistent with the actual conditions as the forecast time increases.
In this optional embodiment, the IIA-GRU model performs well at capturing long-range dependencies, so global information is not excessively forgotten as time increases and the overall motion information can be considered at every time step, making the image prediction more accurate; blending in satellite cloud map data where the information of a single radar echo map is insufficient further improves the expressive capability of the model and thus the prediction accuracy of the radar echo maps.
Optionally, the time matching and the space matching of the historical radar echo map sequence and the historical satellite cloud map sequence include:
unifying the time zones of the historical radar echo diagram sequence and the historical satellite cloud diagram sequence to obtain a unified radar echo diagram sequence and a unified satellite cloud diagram sequence.
Specifically, to resolve the time mismatch, the Greenwich Mean Time stamps of the satellite cloud maps are first converted into the same time zone as the historical radar echo maps, for example Beijing time (UTC+8).
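A minimal sketch of this time-zone alignment (the timestamp value is illustrative):

```python
from datetime import datetime, timezone, timedelta

# Greenwich Mean Time stamp parsed from a satellite file -> Beijing time (UTC+8)
utc_time = datetime(2020, 5, 1, 6, 30, tzinfo=timezone.utc)
beijing_time = utc_time.astimezone(timezone(timedelta(hours=8)))
print(beijing_time)  # 2020-05-01 14:30:00+08:00
```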
And selecting a plurality of samples from the unified radar echo map sequence and the unified satellite cloud map sequence, wherein the samples comprise a plurality of radar echo maps at different moments and corresponding satellite cloud maps.
Specifically, owing to the characteristics of the satellite cloud data, data may be missing for certain periods; for example, the satellite cloud data within minutes 0-15 of hour 19 or within minutes 0-8 of hour 18 may be missing, so satellite cloud maps randomly sampled from the unified satellite cloud map sequence would not share the time span of the sampled radar echo maps. Therefore, when sampling data, the satellite cloud maps within each half-hour period are sampled sequentially, moving toward earlier times from the time of the last radar echo map as the starting point. If fewer than a preset number of frames, for example 5 frames, are available, the sample is abandoned; if 5 or more frames are available, the 5 satellite cloud maps closest to the starting point are selected as a sample.
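A sketch of this sampling rule (the helper and its arguments are illustrative):

```python
from datetime import datetime, timedelta

def pick_satellite_sample(radar_end: datetime, cloud_times: list[datetime],
                          frames: int = 5, window_min: int = 30):
    """Starting from the time of the last radar echo map, look backwards over
    a half-hour window; discard the sample if fewer than `frames` satellite
    cloud maps fall inside it, otherwise keep the `frames` maps closest to
    the starting point."""
    window_start = radar_end - timedelta(minutes=window_min)
    in_window = sorted(t for t in cloud_times if window_start <= t <= radar_end)
    if len(in_window) < frames:
        return None                    # abandon this sample
    return in_window[-frames:]         # the frames closest to radar_end
```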
It should be noted that the time intervals between satellite cloud maps within a sample may differ; for example, two satellite cloud maps may be 4 minutes apart and another two 6 minutes apart. If the difference between the intervals is smaller than a preset threshold and the resolution of the satellite data is relatively coarse (for example, 4 km), the change is considered small overall and the error is within an acceptable range.
And carrying out interpolation processing on the satellite cloud pictures in the samples, and unifying the space areas of the satellite cloud pictures and the radar echo pictures.
Specifically, to resolve the spatial mismatch, the resolutions of the satellite cloud maps of all channels may first be unified, for example to 4 km, and the nationwide satellite images are then cropped from 730 × 1280 to 300 × 400, centered on the designated area according to the latitude and longitude range corresponding to the radar echo map. Then, for each pixel point in the cropped satellite image, a pixel value is calculated according to longitude and latitude, and the 300 × 400 satellite image is interpolated into a 700 × 900 image covering the same area as the historical radar echo map, completing the spatial matching of the satellite cloud map and the radar echo map.
Optionally, the interpolating the satellite cloud images in the sample, and unifying the spatial areas of the satellite cloud images and the radar echo image includes:
for each pixel point in the radar echo map, calculating the pixel value of the pixel point according to the longitude and latitude of a plurality of nearest neighbor pixel points of the pixel point based on an area weighting calculation method;
and taking the pixel value as satellite data of each pixel point to generate the satellite cloud picture matched with the space area of the radar echo picture.
Specifically, as shown in fig. 4, for a pixel point P in the radar echo map, four nearest neighbor pixel points S1, S2, S3, and S4 are found in the satellite cloud map according to the longitude and latitude of the pixel point P, a pixel value of the pixel point P is calculated by an area weighting calculation method, the pixel value is used as satellite data of the pixel point P, and the satellite data of all the pixel points form the satellite cloud map matched with the area of the radar echo map.
Optionally, the area weighting calculation method is represented by a second formula, wherein the second formula comprises:

$$f(P) = \sum_{i=1}^{4} w_i\, f\left(S_i\right), \qquad w_i = \frac{\left|\mathrm{lat}\left(S_{i'}\right) - \mathrm{lat}(P)\right| \cdot \left|\mathrm{lon}\left(S_{i'}\right) - \mathrm{lon}(P)\right|}{\sum_{j=1}^{4} \left|\mathrm{lat}\left(S_{j'}\right) - \mathrm{lat}(P)\right| \cdot \left|\mathrm{lon}\left(S_{j'}\right) - \mathrm{lon}(P)\right|}$$

where $f(P)$ denotes the pixel value of the pixel point $P$, $S_i$ denotes the $i$-th nearest-neighbor pixel point of the pixel point $P$, $S_{i'}$ denotes the nearest neighbor diagonally opposite to $S_i$ with respect to $P$, $\mathrm{lat}(S_i)$ denotes the latitude of the nearest-neighbor pixel point $S_i$, $\mathrm{lon}(S_i)$ denotes the longitude of the nearest-neighbor pixel point $S_i$, $\mathrm{lat}(P)$ denotes the latitude of the pixel point $P$, and $\mathrm{lon}(P)$ denotes the longitude of the pixel point $P$; that is, each nearest neighbor is weighted by the normalized area of the sub-rectangle spanned by $P$ and the neighbor opposite to it.
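A sketch of this calculation (NumPy; the bilinear, opposite-corner weighting is our reading of the second formula, and $P$ is assumed to lie strictly inside the cell formed by the four neighbors):

```python
import numpy as np

def area_weighted_value(lat_p, lon_p, neighbors):
    """`neighbors` holds the four nearest satellite pixels as
    (lat, lon, value) tuples forming a grid cell around P; each pixel is
    weighted by the normalized area of the sub-rectangle on the opposite
    side of P, which reduces to standard bilinear interpolation."""
    lats = np.array([n[0] for n in neighbors], dtype=float)
    lons = np.array([n[1] for n in neighbors], dtype=float)
    vals = np.array([n[2] for n in neighbors], dtype=float)
    weights = np.empty(4)
    for i in range(4):
        # find the neighbor on the opposite side of P in both coordinates
        opp = [j for j in range(4)
               if (lats[j] - lat_p) * (lats[i] - lat_p) < 0
               and (lons[j] - lon_p) * (lons[i] - lon_p) < 0][0]
        weights[i] = abs(lats[opp] - lat_p) * abs(lons[opp] - lon_p)
    weights /= weights.sum()
    return float(np.dot(weights, vals))

# e.g. P at (22.55, 114.05) inside a cell with corners S1..S4
cell = [(22.5, 114.0, 10.0), (22.5, 114.1, 14.0),
        (22.6, 114.0, 18.0), (22.6, 114.1, 22.0)]
print(area_weighted_value(22.55, 114.05, cell))  # 16.0 (P at cell centre)
```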
Specifically, the channel values are normalized according to the numerical range of each channel of the satellite image: the value range of the sixth channel is 0-65535 and that of the other channels is 0-4095, and all channel values are finally normalized to the range 0-255; the data of the 14 channels are stored in matrix form in npy format.
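A minimal sketch of this normalization (assuming a (14, H, W) array with the sixth channel at index 5):

```python
import numpy as np

def normalize_channels(sat: np.ndarray) -> np.ndarray:
    """Rescale each channel to 0-255: the sixth channel spans 0-65535,
    the remaining channels 0-4095."""
    out = np.empty_like(sat, dtype=np.float32)
    for c in range(sat.shape[0]):
        full_scale = 65535.0 if c == 5 else 4095.0
        out[c] = sat[c].astype(np.float32) / full_scale * 255.0
    return out

# the 14-channel matrix would then be stored in npy format, e.g.
# np.save("fy4a_sample.npy", normalize_channels(raw))
```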
Compared with the prior art, in which the scales of the two images are matched through convolution and deconvolution, this optional embodiment aligns the two data sets in time by adjusting the time zones; the satellite cloud maps carry satellite data of multiple channels whose resolutions differ, and adjusting the resolution keeps the satellite cloud map data consistent across channels, so the satellite cloud maps can assist the radar echo maps in radar echo extrapolation and improve the prediction accuracy of the radar echo maps.
As shown in fig. 5, another embodiment of the present invention provides a radar echo extrapolation apparatus, including:
the acquisition module is used for acquiring a current radar echo map and a current satellite cloud map of a specified area at the current moment, and a historical radar echo map and a historical satellite cloud map of a past period;
the encoding module is used for encoding the current radar echo map and the historical radar echo map by adopting a first IIA-GRU encoder to obtain a first encoded image; coding the current satellite cloud picture and the historical satellite cloud picture by adopting a second IIA-GRU encoder to obtain a second coded image; the coding processing comprises the steps of extracting the characteristics of a current image to obtain a hidden layer characteristic image; based on an interaction attention mechanism, splicing the hidden layer characteristic image and the corresponding historical image to obtain a spliced image; based on a bidirectional attention information extraction mechanism, carrying out spatial information fusion on the spliced image to obtain the coded image; the current image comprises the current radar echo map and the current satellite cloud map, and the historical image comprises the historical radar echo map and the historical satellite cloud map;
and the prediction module is used for fusing the first coded image and the second coded image based on a gating mechanism, and decoding a fusion result by adopting a IIA-GRU decoder to obtain a radar echo image in a future period of time of the specified area.
The radar echo extrapolation apparatus of this embodiment is used to implement the radar echo extrapolation method as described above, and the beneficial effects correspond to those of the radar echo extrapolation method, and are not described herein again.
Another embodiment of the present invention provides an electronic device, including a memory and a processor; the memory for storing a computer program; the processor is configured to implement the method for radar echo extrapolation as described above when the computer program is executed.
Yet another embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for radar echo extrapolation as described above.
An electronic device, which can be a server or a client of the present invention and is an example of a hardware device to which aspects of the present invention can be applied, will now be described. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the invention described and/or claimed herein.
The electronic device includes a computing unit that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) or a computer program loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device can also be stored. The computing unit, the ROM, and the RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. In this application, the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Although the present disclosure has been described above, the scope of the present disclosure is not limited thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present disclosure, and these changes and modifications are intended to be within the scope of the present disclosure.

Claims (9)

1. A method of radar echo extrapolation, comprising:
acquiring a current radar echo map and a current satellite cloud map of a specified area at the current moment, and a historical radar echo map and a historical satellite cloud map of a past period;
coding the current radar echo map and the historical radar echo map by adopting a first IIA-GRU encoder to obtain a first coded image; coding the current satellite cloud picture and the historical satellite cloud picture by adopting a second IIA-GRU encoder to obtain a second coded image; the coding processing comprises the steps of extracting the characteristics of a current image to obtain a hidden layer characteristic image; based on an interactive attention mechanism, splicing the hidden layer characteristic image and the corresponding historical image to obtain a spliced image; based on a bidirectional attention information extraction mechanism, carrying out spatial information fusion on the spliced image to obtain the coded image; the current image comprises the current radar echo map and the current satellite cloud map, and the historical image comprises the historical radar echo map and the historical satellite cloud map;
fusing the first encoded image and the second encoded image based on a gating mechanism, and decoding the fusion result by adopting an IIA-GRU decoder to obtain radar echo maps of the specified area for a future period; the first IIA-GRU encoder and the second IIA-GRU encoder represent recurrent convolutional neural networks based on information interaction attention used for encoding, and the IIA-GRU decoder represents a recurrent convolutional neural network based on information interaction attention used for decoding;
the spatial information fusion of the spliced image based on the bidirectional attention information extraction mechanism to obtain the encoded image comprises:
determining a first weight of each channel of the spliced image by adopting a maximum pooling layer and an average pooling layer, and carrying out weighted summation on each channel according to the first weight to obtain a first weight image;
determining second weights of different positions in the spliced image based on a space self-attention mechanism, and performing weighted summation on the positions according to the second weights to obtain a second weight image;
and performing information Fusion on the first weight image and the second weight image by Sum Fusion operation to obtain the coded image.
2. The method of claim 1, wherein prior to the performing feature extraction on the current image, the method further comprises:
down-sampling the current image and the historical image to obtain a down-sampled current image and a down-sampled historical image.
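A minimal sketch of the down-sampling step in claim 2 follows; the claim does not fix a particular operator, so average pooling with stride 2 is assumed here purely for illustration.

```python
# Illustrative down-sampling of a (B, C, H, W) image tensor; average pooling with
# stride 2 is an assumption here, as the claim does not prescribe the operator.
import torch.nn.functional as F

def downsample(x):
    return F.avg_pool2d(x, kernel_size=2, stride=2)  # halves H and W
```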
3. The radar echo extrapolation method of claim 1, wherein the stitching, based on the interaction attention mechanism, the hidden-layer feature image and the corresponding historical image to obtain the stitched image comprises:
performing, based on the interaction attention mechanism, a time-dimension interaction between the hidden-layer feature image at the current moment and the historical image over a past period of time to obtain a processed image;
and concatenating the processed image with the hidden-layer feature image to obtain the stitched image.
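One plausible reading of the time-dimension interaction in claim 3 is sketched below in PyTorch, under the assumption that the historical images have already been mapped to feature maps with the same channel count as the hidden-layer feature image; the module and tensor names are invented for illustration. Each historical step is scored against the current hidden feature, the steps are combined by attention weights, and the result is concatenated channel-wise with the hidden feature.

```python
# Hypothetical sketch of the time-dimension interaction of claim 3; all names are invented.
import torch
import torch.nn as nn

class InteractionAttentionStitch(nn.Module):
    """Attend from the current hidden feature over T historical frames, then concatenate."""

    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)
        self.k = nn.Conv2d(channels, channels, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, hidden: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # hidden:  (B, C, H, W)    hidden-layer feature image at the current moment
        # history: (B, T, C, H, W) features of the historical images over a past period
        b, t, c, h, w = history.shape
        q = self.q(hidden).flatten(2).transpose(1, 2)              # (B, HW, C)
        hist = history.reshape(b * t, c, h, w)
        k = self.k(hist).reshape(b, t, c, h * w)                   # (B, T, C, HW)
        v = self.v(hist).reshape(b, t, c, h * w)
        # Time-dimension interaction: score each historical step against the current query.
        scores = torch.einsum('bpc,btcp->bt', q, k) / (c * h * w) ** 0.5
        weights = torch.softmax(scores, dim=1)                     # (B, T)
        processed = torch.einsum('bt,btcp->bcp', weights, v).reshape(b, c, h, w)
        # Stitching: channel-wise concatenation of the processed image and the hidden feature.
        return torch.cat([processed, hidden], dim=1)               # (B, 2C, H, W)
```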
4. The radar echo extrapolation method according to any one of claims 1 to 3, wherein the fusing the first encoded image and the second encoded image based on the gating mechanism comprises:
fusing, based on a forgetting-gate mechanism, the first encoded image and the second encoded image according to a first formula to obtain the fusion result, wherein the first formula comprises:
$$z_t = \sigma\left(W_z * X_t + U_z * Y_t\right)$$
$$r_t = \sigma\left(W_r * X_t + U_r * Y_t\right)$$
$$\tilde{h}_t = \tanh\left(W_h * X_t + U_h * (r_t \odot Y_t)\right)$$
$$H_t = (1 - z_t) \odot Y_t + z_t \odot \tilde{h}_t$$
wherein $z_t$ denotes the output of the update gate, $t$ denotes the time step, $\sigma$ denotes the sigmoid activation function, $W_z$ and $U_z$ denote the weight parameters of the update gate, $X_t$ denotes the first encoded image, $Y_t$ denotes the second encoded image, $r_t$ denotes the output of the forgetting gate, $W_r$ and $U_r$ denote the weight parameters of the forgetting gate, $\tilde{h}_t$ denotes the candidate hidden state, $\tanh$ denotes the tanh activation function, $W_h$ and $U_h$ denote the weight parameters of the candidate hidden state, and $H_t$ denotes the fusion result.
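The first formula above follows the standard GRU gating form; a compact convolutional sketch of such a gated fusion is given below. The class name, the 3x3 convolutional parameterization of the weights, and the assignment of the radar encoding to $X_t$ and the satellite encoding to $Y_t$ are assumptions for illustration, not details fixed by the claim.

```python
# Compact convolutional sketch of the gated fusion of claim 4, in the standard GRU form
# given above; class name and 3x3 convolutional weights are illustrative assumptions.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        conv = lambda: nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.w_z, self.u_z = conv(), conv()  # update-gate weights W_z, U_z
        self.w_r, self.u_r = conv(), conv()  # forgetting-gate weights W_r, U_r
        self.w_h, self.u_h = conv(), conv()  # candidate-hidden-state weights W_h, U_h

    def forward(self, x_t: torch.Tensor, y_t: torch.Tensor) -> torch.Tensor:
        # x_t: first encoded image (radar); y_t: second encoded image (satellite).
        z_t = torch.sigmoid(self.w_z(x_t) + self.u_z(y_t))           # update gate
        r_t = torch.sigmoid(self.w_r(x_t) + self.u_r(y_t))           # forgetting gate
        h_cand = torch.tanh(self.w_h(x_t) + self.u_h(r_t * y_t))     # candidate hidden state
        return (1 - z_t) * y_t + z_t * h_cand                        # fusion result H_t
```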
5. The radar echo extrapolation method according to any one of claims 1 to 3, wherein prior to the encoding of the current radar echo map and the historical radar echo map with the first IIA-GRU encoder, the method further comprises:
acquiring a historical radar echo map sequence and a historical satellite cloud map sequence of the specified area over a past period of time;
performing time matching and space matching on the historical radar echo map sequence and the historical satellite cloud map sequence to obtain a processed radar echo map sequence and a processed satellite cloud map sequence;
and training a model with the processed radar echo map sequence and the processed satellite cloud map sequence to obtain a trained model, wherein the model comprises the first IIA-GRU encoder, the second IIA-GRU encoder, and the IIA-GRU decoder.
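A hypothetical training sketch for claim 5 follows; the Adam optimizer, learning rate, and MSE objective are assumptions, since the claim only states that the processed sequences are used to train the model comprising the two encoders and the decoder.

```python
# Hypothetical training sketch for claim 5; optimizer, learning rate, and MSE objective
# are assumptions, as the claim only states that the sequences train the model.
import torch
import torch.nn.functional as F

def train(model, loader, epochs=10, lr=1e-3, device="cuda"):
    """loader yields (radar_seq, cloud_seq, target_seq) tensor batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for radar_seq, cloud_seq, target_seq in loader:
            # Model maps the two input sequences to predicted future radar echo maps.
            pred = model(radar_seq.to(device), cloud_seq.to(device))
            loss = F.mse_loss(pred, target_seq.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
```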
6. The radar echo extrapolation method of claim 5, wherein the performing time matching and space matching on the historical radar echo map sequence and the historical satellite cloud map sequence comprises:
unifying the time zones of the historical radar echo map sequence and the historical satellite cloud map sequence to obtain a unified radar echo map sequence and a unified satellite cloud map sequence;
selecting a plurality of samples from the unified radar echo map sequence and the unified satellite cloud map sequence, wherein each sample comprises a plurality of radar echo maps at different moments and the corresponding satellite cloud maps;
and interpolating the satellite cloud maps in the samples to unify the spatial areas of the satellite cloud maps and the radar echo maps.
7. The method of claim 6, wherein the interpolating the satellite cloud maps in the samples to unify the spatial areas of the satellite cloud maps and the radar echo maps comprises:
for each pixel point in the radar echo map, calculating a pixel value for that pixel point from the longitudes and latitudes of a plurality of its nearest-neighbor pixel points, based on an area-weighted calculation method;
and taking the pixel value as the satellite data of each pixel point to generate a satellite cloud map matched to the spatial area of the radar echo map.
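The spatial matching in claim 7 can be pictured with the following NumPy sketch, which resamples satellite pixels onto the radar grid. The claim's area-weighted calculation is approximated here by inverse-distance weights over the four nearest satellite pixels; that substitution, and all names, are assumptions.

```python
# NumPy sketch of claim 7's spatial matching; inverse-distance weighting over the four
# nearest satellite pixels approximates the claim's area-weighted calculation (an assumption).
import numpy as np

def resample_to_radar_grid(sat_vals, sat_lat, sat_lon, radar_lat, radar_lon, k=4):
    """sat_vals/sat_lat/sat_lon: 1-D arrays of satellite pixels; radar_lat/lon: 2-D grids."""
    out = np.empty(radar_lat.shape)
    for idx in np.ndindex(radar_lat.shape):
        # Squared angular distance from this radar pixel to every satellite pixel.
        d2 = (sat_lat - radar_lat[idx]) ** 2 + (sat_lon - radar_lon[idx]) ** 2
        near = np.argpartition(d2, k)[:k]            # indices of the k nearest neighbours
        wgt = 1.0 / (d2[near] + 1e-12)               # inverse-distance weights
        out[idx] = np.sum(wgt * sat_vals[near]) / np.sum(wgt)
    return out
```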
8. A radar echo extrapolation apparatus, comprising:
an acquisition module, configured to acquire a current radar echo map and a current satellite cloud map of a specified area at the current moment, and a historical radar echo map and a historical satellite cloud map over a past period of time;
an encoding module, configured to encode the current radar echo map and the historical radar echo map with a first IIA-GRU encoder to obtain a first encoded image, and to encode the current satellite cloud map and the historical satellite cloud map with a second IIA-GRU encoder to obtain a second encoded image; wherein the encoding processing comprises: performing feature extraction on a current image to obtain a hidden-layer feature image; stitching, based on an interaction attention mechanism, the hidden-layer feature image and the corresponding historical image to obtain a stitched image; and performing, based on a bidirectional attention information extraction mechanism, spatial information fusion on the stitched image to obtain the encoded image; the current image comprises the current radar echo map and the current satellite cloud map, and the historical image comprises the historical radar echo map and the historical satellite cloud map;
and a prediction module, configured to fuse the first encoded image and the second encoded image based on a gating mechanism, and to decode the fusion result with an IIA-GRU decoder to obtain radar echo images of the specified area over a future period of time; wherein the first IIA-GRU encoder and the second IIA-GRU encoder denote recurrent convolutional neural networks, based on information interaction attention, used for encoding, and the IIA-GRU decoder denotes a recurrent convolutional neural network, based on information interaction attention, used for decoding;
wherein the performing, based on the bidirectional attention information extraction mechanism, spatial information fusion on the stitched image to obtain the encoded image comprises:
determining a first weight for each channel of the stitched image using a maximum pooling layer and an average pooling layer, and performing a weighted summation over the channels according to the first weights to obtain a first weighted image;
determining second weights for different positions in the stitched image based on a spatial self-attention mechanism, and performing a weighted summation over the positions according to the second weights to obtain a second weighted image;
and fusing the information of the first weighted image and the second weighted image by a Sum Fusion operation to obtain the encoded image.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the radar echo extrapolation method according to any one of claims 1 to 7.
CN202210363210.1A 2022-04-08 2022-04-08 Radar echo extrapolation method and device and storage medium Active CN114460555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210363210.1A CN114460555B (en) 2022-04-08 2022-04-08 Radar echo extrapolation method and device and storage medium

Publications (2)

Publication Number Publication Date
CN114460555A CN114460555A (en) 2022-05-10
CN114460555B (en) 2022-08-23

Family

ID=81417272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210363210.1A Active CN114460555B (en) 2022-04-08 2022-04-08 Radar echo extrapolation method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114460555B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016042B (en) * 2022-06-06 2023-07-28 湖南师范大学 Precipitation prediction method and system based on multi-encoder fusion radar and precipitation information
CN114924249B (en) * 2022-07-22 2022-10-28 中国科学技术大学 Millimeter wave radar-based human body posture estimation method and device and electronic equipment
CN115128570B (en) * 2022-08-30 2022-11-25 北京海兰信数据科技股份有限公司 Radar image processing method, device and equipment
CN117368881B (en) * 2023-12-08 2024-03-26 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Multi-source data fusion long-sequence radar image prediction method and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09257951A (en) * 1996-03-22 1997-10-03 Nippon Telegr & Teleph Corp <Ntt> Weather forecasting device
CN105445816B (en) * 2015-12-14 2017-10-03 中国气象局气象探测中心 A kind of cloud radar and satellite sounding data fusion method and system
CN108445464B (en) * 2018-03-12 2021-09-10 南京恩瑞特实业有限公司 Satellite radar inversion fusion method based on machine learning
US11169263B2 (en) * 2019-10-04 2021-11-09 International Business Machines Corporation Predicting weather radar images
CN111158068B (en) * 2019-12-31 2022-09-23 哈尔滨工业大学(深圳) Short-term prediction method and system based on simple convolution cyclic neural network
CN112446419B (en) * 2020-10-29 2023-07-11 中山大学 Attention mechanism-based space-time neural network radar echo extrapolation prediction method
CN112415521A (en) * 2020-12-17 2021-02-26 南京信息工程大学 CGRU (China-swarm optimization and RU-based radar echo nowcasting) method with strong space-time characteristics
CN112946784B (en) * 2021-03-29 2022-09-16 杭州电子科技大学 SuperDARN radar convection diagram short-term forecasting method based on deep learning
CN113240169A (en) * 2021-05-10 2021-08-10 东南大学 Short-term rainfall prediction method of GRU network based on multi-mode data and up-down sampling
CN113657477B (en) * 2021-08-10 2022-04-08 南宁五加五科技有限公司 Method, device and system for forecasting short-term rainfall

Also Published As

Publication number Publication date
CN114460555A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN114460555B (en) Radar echo extrapolation method and device and storage medium
Han et al. Convective precipitation nowcasting using U-Net model
CN107369166B (en) Target tracking method and system based on multi-resolution neural network
WO2022062543A1 (en) Image processing method and apparatus, device and storage medium
CN114067019B (en) Urban waterlogging risk map rapid prefabricating method based on coupling deep learning-numerical simulation
CN116519106B (en) Method, device, storage medium and equipment for determining weight of live pigs
CN116205962B (en) Monocular depth estimation method and system based on complete context information
CN115660041A (en) Sea wave height prediction and model training method, electronic device and storage medium
CN115205150A (en) Image deblurring method, device, equipment, medium and computer program product
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN112819199A (en) Precipitation prediction method, device, equipment and storage medium
CN114998373A (en) Improved U-Net cloud picture segmentation method based on multi-scale loss function
Han et al. Precipitation nowcasting using ground radar data and simpler yet better video prediction deep learning
CN117095132B (en) Three-dimensional reconstruction method and system based on implicit function
CN117634556A (en) Training method and device for semantic segmentation neural network based on water surface data
CN116721206A (en) Real-time indoor scene vision synchronous positioning and mapping method
CN115222947B (en) Rock joint segmentation method and device based on global self-attention transformation network
CN115797557A (en) Self-supervision 3D scene flow estimation method based on graph attention network
CN115983370A (en) Scattered data interpolation model training method, interpolation method and device
CN115131414A (en) Unmanned aerial vehicle image alignment method based on deep learning, electronic equipment and storage medium
CN115205530A (en) Low-altitude unmanned-machine-oriented real-time image semantic segmentation method
CN115082624A (en) Human body model construction method and device, electronic equipment and storage medium
Yao et al. A Forecast-Refinement Neural Network Based on DyConvGRU and U-Net for Radar Echo Extrapolation
CN115115972A (en) Video processing method, video processing apparatus, computer device, medium, and program product
CN113610856A (en) Method and device for training image segmentation model and image segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant